Reading and Writing DBF Files in .NET with FastDBF
程序员文章站
2022-06-11 12:17:36
FastDBF source code: https://github.com/SocialExplorer/FastDBF

Step 1: Add a new class library project to your solution, named SocialExplorer.FastDBF.

Step 2: Pull in the FastDBF source files. The source can be downloaded from the GitHub address above.

Source file: DbfColumn.cs
```csharp
///
/// Author: Ahmed Lacevic
/// Date: 12/1/2007
///
/// Revision History:
/// -----------------------------------
/// Author:
/// Date:
/// Desc:

using System;
using System.Collections.Generic;
using System.Text;

namespace SocialExplorer.IO.FastDBF
{
    /// <summary>
    /// This class represents a DBF column.
    /// </summary>
    /// <remarks>
    /// Note that certain properties can not be modified after creation of the object.
    /// This is because we are locking the header object after creation of a data row,
    /// and columns are part of the header so either we have to have a lock field for each column,
    /// or make it so that certain properties such as length can only be set during creation of a column.
    /// Otherwise a user of this object could modify a column that belongs to a locked header and thus corrupt the DBF file.
    /// </remarks>
    public class DbfColumn : ICloneable
    {
        /*
         (FoxPro/FoxBase) Double integer *NOT* a memo field
         G General       (dBASE V: like Memo) OLE objects in MS Windows versions
         P Picture       (FoxPro) Like Memo fields, but not for text processing.
         Y Currency      (FoxPro)
         T DateTime      (FoxPro)
         I Integer       Length: 4 byte little endian integer (FoxPro)
        */

        /// <summary>
        /// Great information on DBF located here:
        /// http://www.clicketyclick.dk/databases/xbase/format/data_types.html
        /// http://www.clicketyclick.dk/databases/xbase/format/dbf.html
        /// </summary>
        public enum DbfColumnType
        {
            /// <summary>
            /// Character, less than 254 length.
            /// ASCII text less than 254 characters long in dBASE.
            ///
            /// Character fields can be up to 32 KB long (in Clipper and FoxPro) using decimal
            /// count as high byte in field length. It's possible to use up to 64 KB long fields
            /// by reading length as unsigned.
            /// </summary>
            Character = 0,

            /// <summary>
            /// Number. Length: less than 18.
            /// ASCII text up to 18 characters long (including sign and decimal point).
            ///
            /// Valid characters: "0" - "9" and "-". Number fields can be up to 20 characters long in FoxPro and Clipper.
            /// </summary>
            /// <remarks>
            /// We are not enforcing this 18 char limit.
            /// </remarks>
            Number = 1,

            /// <summary>
            /// L  Logical  Length: 1  Boolean/byte (8 bit)
            ///
            /// Legal values:
            /// ?     Not initialised (default)
            /// Y,y   Yes
            /// N,n   No
            /// F,f   False
            /// T,t   True
            /// Logical fields are always displayed using T/F/?. Some sources claim
            /// that space (ASCII 20h) is valid for not initialised. Space may occur, but is not defined.
            /// </summary>
            Boolean = 2,

            /// <summary>
            /// D  Date  Length: 8  Date in format YYYYMMDD. A date like 0000-00-00 is *not* valid.
            /// </summary>
            Date = 3,

            /// <summary>
            /// M  Memo  Length: 10  Pointer to ASCII text field in memo file: 10 digits representing a pointer to a DBT block (default is blanks).
            /// </summary>
            Memo = 4,

            /// <summary>
            /// B  Binary  (dBASE V) Like Memo fields, but not for text processing.
            /// </summary>
            Binary = 5,

            /// <summary>
            /// I  Integer  Length: 4 byte little endian integer (FoxPro)
            /// </summary>
            Integer = 6,
        }

        /// <summary>
        /// Column (field) name.
        /// </summary>
        private string mName;

        /// <summary>
        /// Field type (char, number, boolean, date, memo, binary).
        /// </summary>
        private DbfColumnType mType;

        /// <summary>
        /// Offset from the start of the record.
        /// </summary>
        internal int mDataAddress;

        /// <summary>
        /// Length of the data in bytes; some rules apply which are in the spec (read more above).
        /// </summary>
        private int mLength;

        /// <summary>
        /// Decimal precision count, or number of digits after the decimal point. This applies to Number types only.
        /// </summary>
        private int mDecimalCount;

        /// <summary>
        /// Full spec constructor sets all relevant fields.
        /// </summary>
        /// <param name="sName"></param>
        /// <param name="type"></param>
        /// <param name="nLength"></param>
        /// <param name="nDecimals"></param>
        public DbfColumn(string sName, DbfColumnType type, int nLength, int nDecimals)
        {
            Name = sName;
            mType = type;
            mLength = nLength;

            if (type == DbfColumnType.Number)
                mDecimalCount = nDecimals;
            else
                mDecimalCount = 0;

            //perform some simple integrity checks...
            //-------------------------------------------

            //decimal precision:
            //we could also fix the length property with a statement like this: mLength = mDecimalCount + 2;
            //lyq: the article's author modified the original source to disable this check
            //if (mDecimalCount > 0 && mLength - mDecimalCount <= 1)
            //    throw new Exception("Decimal precision can not be larger than the length of the field.");

            if (mType == DbfColumnType.Integer)
                mLength = 4;

            if (mType == DbfColumnType.Binary)
                mLength = 1;

            if (mType == DbfColumnType.Date)
                mLength = 8;  //dates are exactly yyyymmdd

            if (mType == DbfColumnType.Memo)
                mLength = 10;  //length: 10 pointer to ASCII text field in memo file. Pointer to a DBT block.

            if (mType == DbfColumnType.Boolean)
                mLength = 1;

            //field length:
            if (mLength <= 0)
                throw new Exception("Invalid field length specified. Field length can not be zero or less than zero.");
            else if (type != DbfColumnType.Character && type != DbfColumnType.Binary && mLength > 255)
                throw new Exception("Invalid field length specified. For numbers it should be within 20 digits, but we allow up to 255. For Char and Binary types, length up to 65,535 is allowed. For maximum compatibility use up to 255.");
            else if ((type == DbfColumnType.Character || type == DbfColumnType.Binary) && mLength > 65535)
                throw new Exception("Invalid field length specified. For Char and Binary types, length up to 65,535 is supported. For maximum compatibility use up to 255.");
        }

        /// <summary>
        /// Create a new column fully specifying all properties.
        /// </summary>
        /// <param name="sName">column name</param>
        /// <param name="type">type of field</param>
        /// <param name="nLength">field length including decimal places and decimal point if any</param>
        /// <param name="nDecimals">decimal places</param>
        /// <param name="nDataAddress">offset from start of record</param>
        internal DbfColumn(string sName, DbfColumnType type, int nLength, int nDecimals, int nDataAddress)
            : this(sName, type, nLength, nDecimals)
        {
            mDataAddress = nDataAddress;
        }

        public DbfColumn(string sName, DbfColumnType type)
            : this(sName, type, 0, 0)
        {
            if (type == DbfColumnType.Number || type == DbfColumnType.Character)
                throw new Exception("For number and character field types you must specify length and decimal precision.");
        }

        /// <summary>
        /// Field name.
        /// </summary>
        public string Name
        {
            get { return mName; }
            set
            {
                //name:
                if (string.IsNullOrEmpty(value))
                    throw new Exception("Field names must be at least one char long and can not be null.");

                if (value.Length > 11)
                    throw new Exception("Field names can not be longer than 11 chars.");

                mName = value;
            }
        }

        /// <summary>
        /// Field type (C N L D or M).
        /// </summary>
        public DbfColumnType ColumnType
        {
            get { return mType; }
        }

        /// <summary>
        /// Returns column type as a char (as written in the DBF column header):
        /// N=number, C=char, B=binary, L=boolean, D=date, I=integer, M=memo
        /// </summary>
        public char ColumnTypeChar
        {
            get
            {
                switch (mType)
                {
                    case DbfColumnType.Number: return 'N';
                    case DbfColumnType.Character: return 'C';
                    case DbfColumnType.Binary: return 'B';
                    case DbfColumnType.Boolean: return 'L';
                    case DbfColumnType.Date: return 'D';
                    case DbfColumnType.Integer: return 'I';
                    case DbfColumnType.Memo: return 'M';
                }

                throw new Exception("Unrecognized field type!");
            }
        }

        /// <summary>
        /// Field data address offset from the start of the record.
        /// </summary>
        public int DataAddress
        {
            get { return mDataAddress; }
        }

        /// <summary>
        /// Length of the data in bytes.
        /// </summary>
        public int Length
        {
            get { return mLength; }
        }

        /// <summary>
        /// Field decimal count in binary, indicating where the decimal is.
        /// </summary>
        public int DecimalCount
        {
            get { return mDecimalCount; }
        }

        /// <summary>
        /// Returns corresponding DBF field type given a .NET Type.
        /// </summary>
        /// <param name="type"></param>
        /// <returns></returns>
        public static DbfColumnType GetDbaseType(Type type)
        {
            if (type == typeof(string))
                return DbfColumnType.Character;
            else if (type == typeof(double) || type == typeof(float))
                return DbfColumnType.Number;
            else if (type == typeof(bool))
                return DbfColumnType.Boolean;
            else if (type == typeof(DateTime))
                return DbfColumnType.Date;

            throw new NotSupportedException(string.Format("{0} does not have a corresponding dbase type.", type.Name));
        }

        public static DbfColumnType GetDbaseType(char c)
        {
            switch (c.ToString().ToUpper())
            {
                case "C": return DbfColumnType.Character;
                case "N": return DbfColumnType.Number;
                case "B": return DbfColumnType.Binary;
                case "L": return DbfColumnType.Boolean;
                case "D": return DbfColumnType.Date;
                case "I": return DbfColumnType.Integer;
                case "M": return DbfColumnType.Memo;
            }

            throw new NotSupportedException(string.Format("{0} does not have a corresponding dbase type.", c));
        }

        /// <summary>
        /// Returns shp file shape field.
        /// </summary>
        /// <returns></returns>
        public static DbfColumn ShapeField()
        {
            return new DbfColumn("Geometry", DbfColumnType.Binary);
        }

        /// <summary>
        /// Returns shp file ID field.
        /// </summary>
        /// <returns></returns>
        public static DbfColumn IdField()
        {
            return new DbfColumn("Row", DbfColumnType.Integer);
        }

        public object Clone()
        {
            return this.MemberwiseClone();
        }
    }
}
```
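Based on the constructors above, defining a column schema looks like this (a sketch; the column names are illustrative, not from the article):

```csharp
// Character and Number columns must specify a length (and, for Number, a decimal count).
// Fixed-size types (Integer, Date, Boolean, Memo) have their length forced by the constructor.
var name  = new DbfColumn("NAME",    DbfColumn.DbfColumnType.Character, 30, 0);
var price = new DbfColumn("PRICE",   DbfColumn.DbfColumnType.Number,    10, 2); // 10 chars total incl. sign/point, 2 decimals
var stamp = new DbfColumn("UPDATED", DbfColumn.DbfColumnType.Date);             // length forced to 8 (yyyymmdd)

// Note the limits enforced above: field names are capped at 11 characters,
// and Number/Character columns can not be created without an explicit length.
```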
Source file: DbfDataTruncateException.cs
```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Runtime.Serialization;

namespace SocialExplorer.IO.FastDBF
{
    public class DbfDataTruncateException : Exception
    {
        public DbfDataTruncateException(string smessage)
            : base(smessage)
        {
        }

        public DbfDataTruncateException(string smessage, Exception innerException)
            : base(smessage, innerException)
        {
        }

        public DbfDataTruncateException(SerializationInfo info, StreamingContext context)
            : base(info, context)
        {
        }
    }
}
```
Source file: DbfFile.cs
```csharp
///
/// Author: Ahmed Lacevic
/// Date: 12/1/2007
/// Desc: This class represents a DBF file. You can create, open, update and save DBF files using this class and supporting classes.
/// Also, this class supports reading/writing from/to an internet forward only type of stream!
///
/// Revision History:
/// -----------------------------------
/// Author:
/// Date:
/// Desc:

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;

namespace SocialExplorer.IO.FastDBF
{
    /// <summary>
    /// This class represents a DBF file. You can create new, open, update and save DBF files using this class and supporting classes.
    /// Also, this class supports reading/writing from/to an internet forward only type of stream!
    /// </summary>
    /// <remarks>
    /// TODO: add end of file byte '0x1A' !!!
    /// We don't rely on that byte at all, and everything works with or without that byte, but it should be there by spec.
    /// </remarks>
    public class DbfFile
    {
        /// <summary>
        /// Helps read/write DBF file header information.
        /// </summary>
        protected DbfHeader mHeader;

        /// <summary>
        /// Flag that indicates whether the header was written or not...
        /// </summary>
        protected bool mHeaderWritten = false;

        /// <summary>
        /// Streams to read and write to the DBF file.
        /// </summary>
        protected Stream mDbfFile = null;
        protected BinaryReader mDbfFileReader = null;
        protected BinaryWriter mDbfFileWriter = null;

        private Encoding encoding = Encoding.ASCII;

        /// <summary>
        /// File that was opened, if one was opened at all.
        /// </summary>
        protected string mFileName = "";

        /// <summary>
        /// Number of records read using ReadNext() methods only. This applies only when we are using a forward-only stream.
        /// mRecordsReadCount is used to keep track of record index. With a seek enabled stream,
        /// we can always calculate index using stream position.
        /// </summary>
        protected int mRecordsReadCount = 0;

        /// <summary>
        /// Keep these values handy so we don't call functions on every read.
        /// </summary>
        protected bool mIsForwardOnly = false;
        protected bool mIsReadOnly = false;

        [Obsolete]
        public DbfFile()
            : this(Encoding.ASCII)
        {
        }

        public DbfFile(Encoding encoding)
        {
            this.encoding = encoding;
            mHeader = new DbfHeader(encoding);
        }

        /// <summary>
        /// Open a DBF from a FileStream. This can be a file or an internet connection stream. Make sure that it is positioned at start of DBF file.
        /// Reading a DBF over the internet we can not determine size of the file, so we support HasMore(), ReadNext() interface.
        /// RecordCount information in header can not always be trusted, since some packages store 0 there.
        /// </summary>
        /// <param name="ofs"></param>
        public void Open(Stream ofs)
        {
            if (mDbfFile != null)
                Close();

            mDbfFile = ofs;
            mDbfFileReader = null;
            mDbfFileWriter = null;

            if (mDbfFile.CanRead)
                mDbfFileReader = new BinaryReader(mDbfFile, encoding);

            if (mDbfFile.CanWrite)
                mDbfFileWriter = new BinaryWriter(mDbfFile, encoding);

            //reset position
            mRecordsReadCount = 0;

            //assume header is not written
            mHeaderWritten = false;

            //read the header
            if (ofs.CanRead)
            {
                //try to read the header...
                try
                {
                    mHeader.Read(mDbfFileReader);
                    mHeaderWritten = true;
                }
                catch (EndOfStreamException)
                {
                    //could not read header, file is empty
                    mHeader = new DbfHeader(encoding);
                    mHeaderWritten = false;
                }
            }

            if (mDbfFile != null)
            {
                mIsReadOnly = !mDbfFile.CanWrite;
                mIsForwardOnly = !mDbfFile.CanSeek;
            }
        }

        /// <summary>
        /// Open a DBF file or create a new one.
        /// </summary>
        /// <param name="sPath">Full path to the file.</param>
        /// <param name="mode"></param>
        public void Open(string sPath, FileMode mode, FileAccess access, FileShare share)
        {
            mFileName = sPath;
            Open(File.Open(sPath, mode, access, share));
        }

        /// <summary>
        /// Open a DBF file or create a new one.
        /// </summary>
        /// <param name="sPath">Full path to the file.</param>
        /// <param name="mode"></param>
        public void Open(string sPath, FileMode mode, FileAccess access)
        {
            mFileName = sPath;
            Open(File.Open(sPath, mode, access));
        }

        /// <summary>
        /// Open a DBF file or create a new one.
        /// </summary>
        /// <param name="sPath">Full path to the file.</param>
        /// <param name="mode"></param>
        public void Open(string sPath, FileMode mode)
        {
            mFileName = sPath;
            Open(File.Open(sPath, mode));
        }

        /// <summary>
        /// Creates a new DBF 4 file. Overwrites if file exists! Use Open() function for more options.
        /// </summary>
        /// <param name="sPath"></param>
        public void Create(string sPath)
        {
            Open(sPath, FileMode.Create, FileAccess.ReadWrite);
            mHeaderWritten = false;
        }

        /// <summary>
        /// Update header info, flush buffers and close streams. You should always call this method when you are done with a DBF file.
        /// </summary>
        public void Close()
        {
            //try to update the header if it has changed
            //------------------------------------------
            if (mHeader.IsDirty)
                WriteHeader();

            //empty header...
            //--------------------------------
            mHeader = new DbfHeader(encoding);
            mHeaderWritten = false;

            //reset current record index
            //--------------------------------
            mRecordsReadCount = 0;

            //close streams...
            //--------------------------------
            if (mDbfFileWriter != null)
            {
                mDbfFileWriter.Flush();
                mDbfFileWriter.Close();
            }

            if (mDbfFileReader != null)
                mDbfFileReader.Close();

            if (mDbfFile != null)
                mDbfFile.Close();

            //set streams to null
            //--------------------------------
            mDbfFileReader = null;
            mDbfFileWriter = null;
            mDbfFile = null;

            mFileName = "";
        }

        /// <summary>
        /// Returns true if we can not write to the DBF file stream.
        /// </summary>
        public bool IsReadOnly
        {
            get
            {
                return mIsReadOnly;
                /*
                if (mDbfFile != null)
                    return !mDbfFile.CanWrite;
                return true;
                */
            }
        }

        /// <summary>
        /// Returns true if we can not seek to different locations within the file, such as internet connections.
        /// </summary>
        public bool IsForwardOnly
        {
            get
            {
                return mIsForwardOnly;
                /*
                if (mDbfFile != null)
                    return !mDbfFile.CanSeek;
                return false;
                */
            }
        }

        /// <summary>
        /// Returns the name of the FileStream.
        /// </summary>
        public string FileName
        {
            get { return mFileName; }
        }

        /// <summary>
        /// Read next record and fill data into parameter oFillRecord. Returns true if a record was read, otherwise false.
        /// </summary>
        /// <param name="oFillRecord"></param>
        /// <returns></returns>
        public bool ReadNext(DbfRecord oFillRecord)
        {
            //check if we can fill this record with data. It must match record size specified by header and number of columns.
            //we are not checking whether it comes from another DBF file or not, we just need the same structure. Allow flexibility but be safe.
            if (oFillRecord.Header != mHeader && (oFillRecord.Header.ColumnCount != mHeader.ColumnCount || oFillRecord.Header.RecordLength != mHeader.RecordLength))
                throw new Exception("Record parameter does not have the same size and number of columns as the " +
                                    "header specifies, so we are unable to read a record into oFillRecord. " +
                                    "This is a programming error, have you mixed up DBF file objects?");

            //DBF file reader can be null if stream is not readable...
            if (mDbfFileReader == null)
                throw new Exception("Read stream is null, either you have opened a stream that can not be " +
                                    "read from (a write-only stream) or you have not opened a stream at all.");

            //read next record...
            bool bRead = oFillRecord.Read(mDbfFile);

            if (bRead)
            {
                if (mIsForwardOnly)
                {
                    //zero based index! set before incrementing count.
                    oFillRecord.RecordIndex = mRecordsReadCount;
                    mRecordsReadCount++;
                }
                else
                    oFillRecord.RecordIndex = ((int)((mDbfFile.Position - mHeader.HeaderLength) / mHeader.RecordLength)) - 1;
            }

            return bRead;
        }

        /// <summary>
        /// Tries to read a record and returns a new record object or null if nothing was read.
        /// </summary>
        /// <returns></returns>
        public DbfRecord ReadNext()
        {
            //create a new record and fill it.
            DbfRecord orec = new DbfRecord(mHeader);
            return ReadNext(orec) ? orec : null;
        }

        /// <summary>
        /// Reads a record specified by index into oFillRecord object. You can use this method
        /// to read in and process records without creating and discarding record objects.
        /// Note that you should check that your stream is not forward-only! If you have a forward only stream, use ReadNext() functions.
        /// </summary>
        /// <param name="index">Zero based record index.</param>
        /// <param name="oFillRecord">Record object to fill; must have same size and number of fields as this DBF file header!</param>
        /// <returns>True if a record was read, otherwise false. If you read end of file, false will be returned and oFillRecord will not be modified!</returns>
        /// <remarks>
        /// The parameter record (oFillRecord) must match record size specified by the header and number of columns as well.
        /// It does not have to come from the same header, but it must match the structure. We are not going as far as to check size of each field.
        /// The idea is to be flexible but safe. It's a fine balance, these two are almost always at odds.
        /// </remarks>
        public bool Read(int index, DbfRecord oFillRecord)
        {
            //check if we can fill this record with data. It must match record size specified by header and number of columns.
            //we are not checking whether it comes from another DBF file or not, we just need the same structure. Allow flexibility but be safe.
            if (oFillRecord.Header != mHeader && (oFillRecord.Header.ColumnCount != mHeader.ColumnCount || oFillRecord.Header.RecordLength != mHeader.RecordLength))
                throw new Exception("Record parameter does not have the same size and number of columns as the " +
                                    "header specifies, so we are unable to read a record into oFillRecord. " +
                                    "This is a programming error, have you mixed up DBF file objects?");

            //DBF file reader can be null if stream is not readable...
            if (mDbfFileReader == null)
                throw new Exception("Read stream is null, either you have opened a stream that can not be " +
                                    "read from (a write-only stream) or you have not opened a stream at all.");

            //move to the specified record; note that an exception will be thrown if stream is not seekable!
            //this is ok, since we provide a function to check whether the stream is seekable.
            long nSeekToPosition = mHeader.HeaderLength + (index * mHeader.RecordLength);

            //check whether requested record exists. Subtract 1 from file length (there is a terminating character 1A at the end of the file)
            //so if we hit end of file, there are no more records, so return false;
            if (index < 0 || mDbfFile.Length - 1 <= nSeekToPosition)
                return false;

            //move to record and read
            mDbfFile.Seek(nSeekToPosition, SeekOrigin.Begin);

            //read the record
            bool bRead = oFillRecord.Read(mDbfFile);
            if (bRead)
                oFillRecord.RecordIndex = index;

            return bRead;
        }

        public bool ReadValue(int rowIndex, int columnIndex, out string result)
        {
            result = string.Empty;

            DbfColumn ocol = mHeader[columnIndex];

            //move to the specified record; note that an exception will be thrown if stream is not seekable!
            //this is ok, since we provide a function to check whether the stream is seekable.
            long nSeekToPosition = mHeader.HeaderLength + (rowIndex * mHeader.RecordLength) + ocol.DataAddress;

            //check whether requested record exists. Subtract 1 from file length (there is a terminating character 1A at the end of the file)
            //so if we hit end of file, there are no more records, so return false;
            if (rowIndex < 0 || mDbfFile.Length - 1 <= nSeekToPosition)
                return false;

            //move to position and read
            mDbfFile.Seek(nSeekToPosition, SeekOrigin.Begin);

            //read the value
            byte[] data = new byte[ocol.Length];
            mDbfFile.Read(data, 0, ocol.Length);
            result = new string(encoding.GetChars(data, 0, ocol.Length));

            return true;
        }

        /// <summary>
        /// Reads a record specified by index. This method requires the stream to be able to seek to position.
        /// If you are using a HTTP stream, or a stream that can not seek, use ReadNext() methods to read in all records.
        /// </summary>
        /// <param name="index">Zero based index.</param>
        /// <returns>Null if record can not be read, otherwise returns a new record.</returns>
        public DbfRecord Read(int index)
        {
            //create a new record and fill it.
            DbfRecord orec = new DbfRecord(mHeader);
            return Read(index, orec) ? orec : null;
        }

        /// <summary>
        /// Write a record to file. If RecordIndex is present, record will be updated, otherwise a new record will be written.
        /// Header will be output first if this is the first record being written to file.
        /// This method does not require stream seek capability to add a new record.
        /// </summary>
        /// <param name="orec"></param>
        public void Write(DbfRecord orec)
        {
            //if header was never written, write it first, then output the record
            if (!mHeaderWritten)
                WriteHeader();

            //if this is a new record (RecordIndex should be -1 in that case)
            if (orec.RecordIndex < 0)
            {
                if (mDbfFileWriter.BaseStream.CanSeek)
                {
                    //calculate number of records in file. Do not rely on header's RecordCount property since client can change that value.
                    //also note that some DBF files do not have ending 0x1A byte, so we subtract 1 and round off
                    //instead of just casting, since a cast would just drop decimals.
                    int nNumRecords = (int)Math.Round(((double)(mDbfFile.Length - mHeader.HeaderLength - 1) / mHeader.RecordLength));
                    if (nNumRecords < 0)
                        nNumRecords = 0;

                    orec.RecordIndex = nNumRecords;
                    Update(orec);
                    mHeader.RecordCount++;
                }
                else
                {
                    //we can not position this stream, just write out the new record.
                    orec.Write(mDbfFile);
                    mHeader.RecordCount++;
                }
            }
            else
                Update(orec);
        }

        public void Write(DbfRecord orec, bool bClearRecordAfterWrite)
        {
            Write(orec);
            if (bClearRecordAfterWrite)
                orec.Clear();
        }

        /// <summary>
        /// Update a record. RecordIndex (zero based index) must be more than -1, otherwise an exception is thrown.
        /// You can also use the Write method, which updates a record if it has a RecordIndex or adds a new one if RecordIndex == -1.
        /// RecordIndex is set automatically when you call any Read() methods on this class.
        /// </summary>
        /// <param name="orec"></param>
        public void Update(DbfRecord orec)
        {
            //if header was never written, write it first, then output the record
            if (!mHeaderWritten)
                WriteHeader();

            //check if record has an index
            if (orec.RecordIndex < 0)
                throw new Exception("RecordIndex is not set, unable to update record. Set RecordIndex or call Write() method to add a new record to file.");

            //check if this record matches record size specified by header and number of columns.
            //client can pass a record from another DBF that is incompatible with this one and that would corrupt the file.
            if (orec.Header != mHeader && (orec.Header.ColumnCount != mHeader.ColumnCount || orec.Header.RecordLength != mHeader.RecordLength))
                throw new Exception("Record parameter does not have the same size and number of columns as the " +
                                    "header specifies. Writing this record would corrupt the DBF file. " +
                                    "This is a programming error, have you mixed up DBF file objects?");

            //DBF file writer can be null if stream is not writable...
            if (mDbfFileWriter == null)
                throw new Exception("Write stream is null. Either you have opened a stream that can not be " +
                                    "written to (a read-only stream) or you have not opened a stream at all.");

            //move to the specified record; note that an exception will be thrown if stream is not seekable!
            //this is ok, since we provide a function to check whether the stream is seekable.
            long nSeekToPosition = (long)mHeader.HeaderLength + (long)((long)orec.RecordIndex * (long)mHeader.RecordLength);

            //check whether we can seek to this position. Subtract 1 from file length (there is a terminating character 1A at the end of the file)
            //so if we hit end of file, there are no more records, so return false;
            if (mDbfFile.Length < nSeekToPosition)
                throw new Exception("Invalid record position. Unable to save record.");

            //move to record start
            mDbfFile.Seek(nSeekToPosition, SeekOrigin.Begin);

            //write
            orec.Write(mDbfFile);
        }

        /// <summary>
        /// Save header to file. Normally, you do not have to call this method; the header is saved
        /// automatically and updated when you close the file (if it changed).
        /// </summary>
        public bool WriteHeader()
        {
            //update header if possible
            //--------------------------------
            if (mDbfFileWriter != null)
            {
                if (mDbfFileWriter.BaseStream.CanSeek)
                {
                    mDbfFileWriter.Seek(0, SeekOrigin.Begin);
                    mHeader.Write(mDbfFileWriter);
                    mHeaderWritten = true;
                    return true;
                }
                else
                {
                    //if stream can not seek, then just write it out and that's it.
                    if (!mHeaderWritten)
                        mHeader.Write(mDbfFileWriter);

                    mHeaderWritten = true;
                }
            }

            return false;
        }

        /// <summary>
        /// Access DBF header with information on columns. Use this object for faster access to header.
        /// Remove one layer of function calls by saving header reference and using it directly to access columns.
        /// </summary>
        public DbfHeader Header
        {
            get { return mHeader; }
        }
    }
}
```
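The DbfFile class above is the main entry point. An end-to-end sketch of writing and then reading a file might look like the following. This assumes the string indexer on DbfRecord from the library's DbfRecord.cs (not included in this article), and the file path and column names are illustrative:

```csharp
// Write a new DBF file. Pick an encoding that matches your data
// (e.g. code page 936/GBK for Simplified Chinese text).
var odbf = new DbfFile(Encoding.GetEncoding(936));
odbf.Create(@"C:\temp\test.dbf"); // hypothetical path; overwrites an existing file

// Define columns before writing the first record -- per the DbfColumn remarks,
// the header is locked once a data row has been created.
odbf.Header.AddColumn(new DbfColumn("NAME", DbfColumn.DbfColumnType.Character, 30, 0));
odbf.Header.AddColumn(new DbfColumn("PRICE", DbfColumn.DbfColumnType.Number, 10, 2));

var orec = new DbfRecord(odbf.Header);
orec[0] = "Widget";
orec[1] = "19.99";
odbf.Write(orec, true); // true clears the record so it can be reused for the next row
odbf.Close();           // rewrites the header with the final record count

// Read it back. Read(index) needs a seekable stream; for forward-only
// streams (e.g. over HTTP) use ReadNext() instead.
odbf.Open(@"C:\temp\test.dbf", FileMode.Open, FileAccess.Read);
for (var rec = odbf.ReadNext(); rec != null; rec = odbf.ReadNext())
    Console.WriteLine(rec[0].Trim()); // values come back space-padded to the field length
odbf.Close();
```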
Source file: DbfHeader.cs
```csharp
///
/// Author: Ahmed Lacevic
/// Date: 12/1/2007
/// Desc:
///
/// Revision History:
/// -----------------------------------
/// Author:
/// Date:
/// Desc:

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;

namespace SocialExplorer.IO.FastDBF
{
    /// <summary>
    /// This class represents a DBF IV file header.
    /// </summary>
    /// <remarks>
    /// DBF files are really wasteful on space but this legacy format lives on because it's really really simple.
    /// It lacks much in features though.
    ///
    /// Thanks to Erik Bachmann for providing the DBF file structure information!!
    /// http://www.clicketyclick.dk/databases/xbase/format/dbf.html
    ///
    /// File layout (condensed from the original ASCII diagram, which did not survive formatting):
    ///
    ///   File header (32 bytes):
    ///     00h        version number
    ///     01h-03h    date of last update (YYMMDD)
    ///     04h-07h    number of records in data file (32 bits)
    ///     08h-09h    length of header structure (16 bits)
    ///     0Ah-0Bh    length of each record (16 bits)
    ///     0Ch-0Dh    reserved
    ///     0Eh        incomplete transaction flag
    ///     0Fh        encryption flag
    ///     10h-13h    free record thread (reserved for LAN only)
    ///     14h-1Bh    reserved for multi-user dBASE (dBASE III+)
    ///     1Ch        MDX flag (dBASE IV)
    ///     1Dh        language driver
    ///     1Eh-1Fh    reserved
    ///
    ///   Field descriptor array (32 bytes per field, terminated by 0Dh):
    ///     00h-0Ah    field name in ASCII (terminated by 00h)
    ///     0Bh        field type in ASCII
    ///     0Ch-0Fh    field data address (in memory!!! dBASE III+)
    ///     10h        field length
    ///     11h        decimal count
    ///     12h-13h    reserved for multi-user dBASE
    ///     14h        work area id
    ///     15h-16h    reserved for multi-user dBASE
    ///     17h        flag for SET FIELDS
    ///     18h-1Eh    reserved
    ///     1Fh        index field flag
    ///
    ///   Records: each record starts with a deleted-flag byte, followed by the
    ///   field data in ASCII. The file ends with an end-of-file byte (1Ah).
    /// </remarks>
    public class DbfHeader : ICloneable
    {
        /// <summary>
        /// Header file descriptor size is 33 bytes (32 bytes + 1 terminator byte), followed by column metadata which is 32 bytes each.
        /// </summary>
        public const int FileDescriptorSize = 33;

        /// <summary>
        /// Field or DBF column descriptor is 32 bytes long.
        /// </summary>
        public const int ColumnDescriptorSize = 32;

        //type of the file, must be 03h
        private const int mFileType = 0x03;

        //date the file was last updated.
        private DateTime mUpdateDate;

        //number of records in the datafile, 32bit little-endian, unsigned
        private uint mNumRecords = 0;

        //length of the header structure
        private ushort mHeaderLength = FileDescriptorSize;  //empty header is 33 bytes long. Each column adds 32 bytes.

        //length of the records, ushort - unsigned 16 bit integer
        private int mRecordLength = 1;  //start with 1 because the first byte is a delete flag

        //DBF fields/columns
        internal List<DbfColumn> mFields = new List<DbfColumn>();

        //indicates whether header columns can be modified!
        bool mLocked = false;

        //keeps column name index for the header; must clear when header columns change.
        private Dictionary<string, int> mColumnNameIndex = null;

        /// <summary>
        /// When object is modified dirty flag is set.
        /// </summary>
        bool mIsDirty = false;

        /// <summary>
        /// mEmptyRecord is an array used to clear record data in CDbf4Record.
        /// This is shared by all record objects, used to speed up clearing fields or entire record.
        /// <seealso cref="EmptyDataRecord"/>
        /// </summary>
        private byte[] mEmptyRecord = null;

        public readonly Encoding encoding = Encoding.ASCII;

        [Obsolete]
        public DbfHeader()
        {
        }

        public DbfHeader(Encoding encoding)
        {
            this.encoding = encoding;
        }

        /// <summary>
        /// Specify initial column capacity.
        /// </summary>
        /// <param name="nFieldCapacity"></param>
        public DbfHeader(int nFieldCapacity)
        {
            mFields = new List<DbfColumn>(nFieldCapacity);
        }

        /// <summary>
        /// Gets header length.
        /// </summary>
        public ushort HeaderLength
        {
            get { return mHeaderLength; }
        }

        /// <summary>
        /// Add a new column to the DBF header.
        /// </summary>
        /// <param name="oNewCol"></param>
        public void AddColumn(DbfColumn oNewCol)
        {
            //throw exception if the header is locked
            if (mLocked)
                throw new InvalidOperationException("This header is locked and can not be modified. Modifying the header would result in a corrupt DBF file. You can unlock the header by calling UnLock() method.");

            //since we are breaking the spec rules about max number of fields, we should at least
            //check that the record length stays within a number that can be recorded in the header!
            //we have 2 unsigned bytes for record length for a maximum of 65535.
            if (mRecordLength + oNewCol.Length > 65535)
                throw new ArgumentOutOfRangeException("oNewCol", "Unable to add new column. Adding this column puts the record length over the maximum (which is 65535 bytes).");

            //add the column
            mFields.Add(oNewCol);

            //update offset bits, record and header lengths
            oNewCol.mDataAddress = mRecordLength;
            mRecordLength += oNewCol.Length;
            mHeaderLength += ColumnDescriptorSize;

            //clear empty record
            mEmptyRecord = null;

            //set dirty bit
            mIsDirty = true;
            mColumnNameIndex = null;
        }

        /// <summary>
        /// Create and add a new column with specified name and type.
        /// </summary>
        /// <param name="sName"></param>
        /// <param name="type"></param>
        public void AddColumn(string sName, DbfColumn.DbfColumnType type)
        {
            AddColumn(new DbfColumn(sName, type));
        }

        /// <summary>
        /// Create and add a new column with specified name, type, length, and decimal precision.
        /// </summary>
        /// <param name="sName">Field name. Uniqueness is not enforced.</param>
        /// <param name="type"></param>
        /// <param name="nLength">Length of the field including decimal point and decimal numbers.</param>
        /// <param name="nDecimals">Number of decimal places to keep.</param>
        public void AddColumn(string sName, DbfColumn.DbfColumnType type, int nLength, int nDecimals)
        {
            AddColumn(new DbfColumn(sName, type, nLength, nDecimals));
        }

        /// <summary>
        /// Remove column from header definition.
        /// </summary>
        /// <param name="nIndex"></param>
        public void RemoveColumn(int nIndex)
        {
            //throw exception if the header is locked
            if (mLocked)
                throw new InvalidOperationException("This header is locked and can not be modified. Modifying the header would result in a corrupt DBF file. You can unlock the header by calling UnLock() method.");

            DbfColumn oColRemove = mFields[nIndex];
            mFields.RemoveAt(nIndex);

            oColRemove.mDataAddress = 0;
            mRecordLength -= oColRemove.Length;
            mHeaderLength -= ColumnDescriptorSize;

            //when you remove a column, the offsets shift for each of the columns
            //following the one removed, so we need to update those offsets.
            int nRemovedColLen = oColRemove.Length;
            for (int i = nIndex; i < mFields.Count; i++)
                mFields[i].mDataAddress -= nRemovedColLen;

            //clear the empty record
            mEmptyRecord = null;

            //set dirty bit
            mIsDirty = true;
            mColumnNameIndex = null;
        }

        /// <summary>
        /// Look up a column index by name. Note that this is case sensitive; internally it does a lookup using a dictionary.
        /// </summary>
        /// <param name="sName"></param>
        public DbfColumn this[string sName]
        {
            get
            {
                int colIndex = FindColumn(sName);
                if (colIndex > -1)
                    return mFields[colIndex];

                return null;
            }
        }

        /// <summary>
        /// Returns column at specified index. Index is 0 based.
        /// </summary>
        /// <param name="nIndex">Zero based index.</param>
        /// <returns></returns>
        public DbfColumn this[int nIndex]
        {
            get { return mFields[nIndex]; }
        }

        /// <summary>
        /// Finds a column index by using a fast dictionary lookup -- creates column dictionary on first use.
```
returns -1 if not found. note this is case sensitive! /// </summary> /// <param name="sname">column name</param> /// <returns>column index (0 based) or -1 if not found.</returns> public int findcolumn(string sname) { if (mcolumnnameindex == null) { mcolumnnameindex = new dictionary<string, int>(mfields.count); //create a new index for (int i = 0; i < mfields.count; i++) { mcolumnnameindex.add(mfields[i].name, i); } } int columnindex; if (mcolumnnameindex.trygetvalue(sname, out columnindex)) return columnindex; return -1; } /// <summary> /// returns an empty data record. this is used to clear columns /// </summary> /// <remarks> /// the reason we put this in the header class is because it allows us to use the cdbf4record class in two ways. /// 1. we can create one instance of the record and reuse it to write many records quickly clearing the data array by bitblting to it. /// 2. we can create many instances of the record (a collection of records) and have only one copy of this empty dataset for all of them. /// if we had put it in the record class then we would be taking up twice as much space unnecessarily. the empty record also fits the model /// and everything is neatly encapsulated and safe. /// /// </remarks> protected internal byte[] emptydatarecord { get { return memptyrecord ?? (memptyrecord = encoding.getbytes("".padleft(mrecordlength, ' ').tochararray())); } } /// <summary> /// returns number of columns in this dbf header. /// </summary> public int columncount { get { return mfields.count; } } /// <summary> /// size of one record in bytes. all fields + 1 byte delete flag. /// </summary> public int recordlength { get { return mrecordlength; } } /// <summary> /// get/set number of records in the dbf. 
/// </summary> /// <remarks> /// the reason we allow client to set recordcount is beause in certain streams /// like internet streams we can not update record count as we write out records, we have to set it in advance, /// so client has to be able to modify this property. /// </remarks> public uint recordcount { get { return mnumrecords; } set { mnumrecords = value; //set the dirty bit misdirty = true; } } /// <summary> /// get/set whether this header is read only or can be modified. when you create a cdbfrecord /// object and pass a header to it, cdbfrecord locks the header so that it can not be modified any longer. /// in order to preserve dbf integrity. /// </summary> internal bool locked { get { return mlocked; } set { mlocked = value; } } /// <summary> /// use this method with caution. headers are locked for a reason, to prevent dbf from becoming corrupt. /// </summary> public void unlock() { mlocked = false; } /// <summary> /// returns true when this object is modified after read or write. /// </summary> public bool isdirty { get { return misdirty; } set { misdirty = value; } } /// <summary> /// encoding must be ascii for this binary writer. /// </summary> /// <param name="writer"></param> /// <remarks> /// see class remarks for dbf file structure. /// </remarks> public void write(binarywriter writer) { //write the header // write the output file type. writer.write((byte)mfiletype); //update date format is yymmdd, which is different from the column date type (yyyyddmm) writer.write((byte)(mupdatedate.year - 1900)); writer.write((byte)mupdatedate.month); writer.write((byte)mupdatedate.day); // write the number of records in the datafile. (32 bit number, little-endian unsigned) writer.write(mnumrecords); // write the length of the header structure. 
writer.write(mheaderlength); // write the length of a record writer.write((ushort)mrecordlength); // write the reserved bytes in the header for (int i = 0; i < 20; i++) writer.write((byte)0); // write all of the header records byte[] bytereserved = new byte[14]; //these are initialized to 0 by default. foreach (dbfcolumn field in mfields) { //char[] cname = field.name.padright(11, (char)0).tochararray(); byte[] bname = encoding.getbytes(field.name); byte[] cname = new byte[11]; array.constrainedcopy(bname, 0, cname, 0, bname.length > 11 ? 11 : bname.length); writer.write(cname); // write the field type writer.write((char)field.columntypechar); // write the field data address, offset from the start of the record. writer.write(field.dataaddress); // write the length of the field. // if char field is longer than 255 bytes, then we use the decimal field as part of the field length. if (field.columntype == dbfcolumn.dbfcolumntype.character && field.length > 255) { //treat decimal count as high byte of field length, this extends char field max to 65535 writer.write((ushort)field.length); } else { // write the length of the field. writer.write((byte)field.length); // write the decimal count. writer.write((byte)field.decimalcount); } // write the reserved bytes. writer.write(bytereserved); } // write the end of the field definitions marker writer.write((byte)0x0d); writer.flush(); //clear dirty bit misdirty = false; //lock the header so it can not be modified any longer, //we could actually postpond this until first record is written! mlocked = true; } /// <summary> /// read header data, make sure the stream is positioned at the start of the file to read the header otherwise you will get an exception. /// when this function is done the position will be the first record. /// </summary> /// <param name="reader"></param> public void read(binaryreader reader) { // type of reader. 
int nfiletype = reader.readbyte(); if (nfiletype != 0x03) throw new notsupportedexception("unsupported dbf reader type " + nfiletype); // parse the update date information. int year = (int)reader.readbyte(); int month = (int)reader.readbyte(); int day = (int)reader.readbyte(); mupdatedate = new datetime(year + 1900, month, day); // read the number of records. mnumrecords = reader.readuint32(); // read the length of the header structure. mheaderlength = reader.readuint16(); // read the length of a record mrecordlength = reader.readint16(); // skip the reserved bytes in the header. reader.readbytes(20); // calculate the number of fields in the header int nnumfields = (mheaderlength - filedescriptorsize) / columndescriptorsize; //offset from start of record, start at 1 because that's the delete flag. int ndataoffset = 1; // read all of the header records mfields = new list<dbfcolumn>(nnumfields); for (int i = 0; i < nnumfields; i++) { // read the field name char[] buffer = new char[11]; buffer = reader.readchars(11); string sfieldname = new string(buffer); int nullpoint = sfieldname.indexof((char)0); if (nullpoint != -1) sfieldname = sfieldname.substring(0, nullpoint); //read the field type char cdbasetype = (char)reader.readbyte(); // read the field data address, offset from the start of the record. int nfielddataaddress = reader.readint32(); //read the field length in bytes //if field type is char, then read fieldlength and decimal count as one number to allow char fields to be //longer than 256 bytes (ascii char). this is the way clipper and foxpro do it, and there is really no downside //since for char fields decimal count should be zero for other versions that do not support this extended functionality. 
//----------------------------------------------------------------------------------------------------------------------- int nfieldlength = 0; int ndecimals = 0; if (cdbasetype == 'c' || cdbasetype == 'c') { //treat decimal count as high byte nfieldlength = (int)reader.readuint16(); } else { //read field length as an unsigned byte. nfieldlength = (int)reader.readbyte(); //read decimal count as one byte ndecimals = (int)reader.readbyte(); } //read the reserved bytes. reader.readbytes(14); //create and add field to collection mfields.add(new dbfcolumn(sfieldname, dbfcolumn.getdbasetype(cdbasetype), nfieldlength, ndecimals, ndataoffset)); // add up address information, you can not trust the address recorded in the dbf file... ndataoffset += nfieldlength; } // last byte is a marker for the end of the field definitions. reader.readbytes(1); //read any extra header bytes...move to first record //equivalent to reader.basestream.seek(mheaderlength, seekorigin.begin) except that we are not using the seek function since //we need to support streams that can not seek like web connections. int nextrareadbytes = mheaderlength - (filedescriptorsize + (columndescriptorsize * mfields.count)); if (nextrareadbytes > 0) reader.readbytes(nextrareadbytes); //if the stream is not forward-only, calculate number of records using file size, //sometimes the header does not contain the correct record count //if we are reading the file from the web, we have to use readnext() functions anyway so //number of records is not so important and we can trust the dbf to have it stored correctly. if (reader.basestream.canseek && mnumrecords == 0) { //notice here that we subtract file end byte which is supposed to be 0x1a, //but some dbf files are incorrectly written without this byte, so we round off to nearest integer. //that gives a correct result with or without ending byte. 
if (mrecordlength > 0) mnumrecords = (uint)math.round(((double)(reader.basestream.length - mheaderlength - 1) / mrecordlength)); } //lock header since it was read from a file. we don't want it modified because that would corrupt the file. //user can override this lock if really necessary by calling unlock() method. mlocked = true; //clear dirty bit misdirty = false; } public object clone() { return this.memberwiseclone(); } } }
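The locking behaviour described in the class remarks is easy to demonstrate. A minimal sketch (the cased identifiers DbfHeader, DbfColumn and DbfRecord correspond to the lower-cased names in the listing above):

```csharp
// creating a record against a header locks that header
var header = new DbfHeader(Encoding.ASCII);
header.AddColumn("name", DbfColumn.DbfColumnType.Character, 30, 0);

var rec = new DbfRecord(header);   // header is now locked

// header.AddColumn("age", DbfColumn.DbfColumnType.Number, 3, 0);
// -> would now throw InvalidOperationException, because modifying a locked
//    header would corrupt any records already sized against it

header.Unlock();                   // escape hatch; use with caution, see remarks
```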
Source file: DbfRecord.cs
/// /// author: ahmed lacevic /// date: 12/1/2007 /// desc: /// /// revision history: /// ----------------------------------- /// author: /// date: /// desc: using system; using system.collections.generic; using system.text; using system.io; using system.globalization; namespace socialexplorer.io.fastdbf { /// <summary> /// use this class to create a record and write it to a dbf file. you can use one record object to write all records!! /// it was designed for this kind of use. you can do this by clearing the record of all data /// (call clear() method) or setting values to all fields again, then write to dbf file. /// this eliminates creating and destroying objects and optimizes memory use. /// /// once you create a record the header can no longer be modified, since modifying the header would make a corrupt dbf file. /// </summary> public class dbfrecord { /// <summary> /// header provides information on all field types, sizes, precision and other useful information about the dbf. /// </summary> private dbfheader mheader = null; /// <summary> /// dbf data are a mix of ascii characters and binary, which neatly fit in a byte array. /// binarywriter would esentially perform the same conversion using the same encoding class. /// </summary> private byte[] mdata = null; /// <summary> /// zero based record index. -1 when not set, new records for example. /// </summary> private int mrecordindex = -1; /// <summary> /// empty record array reference used to clear fields quickly (or entire record). /// </summary> private readonly byte[] memptyrecord = null; /// <summary> /// specifies whether we allow strings to be truncated. if false and string is longer than we can fit in the field, an exception is thrown. /// </summary> private bool mallowstringtruncate = true; /// <summary> /// specifies whether we allow the decimal portion of numbers to be truncated. /// if false and decimal digits overflow the field, an exception is thrown. 
/// </summary> private bool mallowdecimaltruncate = false; /// <summary> /// specifies whether we allow the integer portion of numbers to be truncated. /// if false and integer digits overflow the field, an exception is thrown. /// </summary> private bool mallowintegertruncate = false; //array used to clear decimals, we can clear up to 40 decimals which is much more than is allowed under dbf spec anyway. //note: 48 is ascii code for 0. private static readonly byte[] mdecimalclear = new byte[] {48,48,48,48,48,48,48,48,48,48,48,48,48,48,48, 48,48,48,48,48,48,48,48,48,48,48,48,48,48,48, 48,48,48,48,48,48,48,48,48,48,48,48,48,48,48}; //warning: do not make this one static because that would not be thread safe!! the reason i have //placed this here is to skip small memory allocation/deallocation which fragments memory in .net. private int[] mtempintval = { 0 }; //ascii encoder private readonly encoding encoding = encoding.ascii; /// <summary> /// column name to column index map /// </summary> private readonly dictionary<string, int> mcolnametoconidx = new dictionary<string, int>(stringcomparer.invariantculture); /// <summary> /// /// </summary> /// <param name="oheader">dbf header will be locked once a record is created /// since the record size is fixed and if the header was modified it would corrupt the dbf file.</param> public dbfrecord(dbfheader oheader) { mheader = oheader; mheader.locked = true; //create a buffer to hold all record data. we will reuse this buffer to write all data to the file. mdata = new byte[mheader.recordlength]; memptyrecord = mheader.emptydatarecord; encoding = oheader.encoding; for (int i = 0; i < oheader.mfields.count; i++) mcolnametoconidx[oheader.mfields[i].name] = i; } /// <summary> /// set string data to a column, if the string is longer than specified column length it will be truncated! /// if dbf column type is not a string, input will be treated as dbf column /// type and if longer than length an exception will be thrown. 
/// </summary> /// <param name="ncolindex"></param> /// <returns></returns> public string this[int ncolindex] { set { dbfcolumn ocol = mheader[ncolindex]; dbfcolumn.dbfcolumntype ocoltype = ocol.columntype; // //if an empty value is passed, we just clear the data, and leave it blank. //note: test have shown that testing for null and checking length is faster than comparing to "" empty str :) //------------------------------------------------------------------------------------------------------------ if (string.isnullorempty(value)) { //this is like null data, set it to empty. i looked at sas dbf output when a null value exists //and empty data are output. we get the same result, so this looks good. buffer.blockcopy(memptyrecord, ocol.dataaddress, mdata, ocol.dataaddress, ocol.length); } else { //set values according to data type: //------------------------------------------------------------- if (ocoltype == dbfcolumn.dbfcolumntype.character) { if (!mallowstringtruncate && value.length > ocol.length) throw new dbfdatatruncateexception("value not set. string truncation would occur and allowstringtruncate flag is set to false. to supress this exception change allowstringtruncate to true."); //blockcopy copies bytes. first clear the previous value, then set the new one. buffer.blockcopy(memptyrecord, ocol.dataaddress, mdata, ocol.dataaddress, ocol.length); encoding.getbytes(value, 0, value.length > ocol.length ? ocol.length : value.length, mdata, ocol.dataaddress); } else if (ocoltype == dbfcolumn.dbfcolumntype.number) { if (ocol.decimalcount == 0) { //integers //---------------------------------- //throw an exception if integer overflow would occur if (!mallowintegertruncate && value.length > ocol.length) throw new dbfdatatruncateexception("value not set. integer does not fit and would be truncated. allowintegertruncate is set to false. 
to supress this exception set allowintegertruncate to true, although that is not recomended."); //clear all numbers, set to [space]. //----------------------------------------------------- buffer.blockcopy(memptyrecord, 0, mdata, ocol.dataaddress, ocol.length); //set integer part, careful not to overflow buffer! (truncate instead) //----------------------------------------------------------------------- int nnumlen = value.length > ocol.length ? ocol.length : value.length; encoding.getbytes(value, 0, nnumlen, mdata, (ocol.dataaddress + ocol.length - nnumlen)); } else { ///todo: we can improve perfomance here by not using temp char arrays cdec and cnum, ///simply direcly copy from source string using encoding! //break value down into integer and decimal portions //-------------------------------------------------------------------------- int nidxdecimal = value.indexof('.'); //index where the decimal point occurs char[] cdec = null; //decimal portion of the number char[] cnum = null; //integer portion if (nidxdecimal > -1) { cdec = value.substring(nidxdecimal + 1).trim().tochararray(); cnum = value.substring(0, nidxdecimal).tochararray(); //throw an exception if decimal overflow would occur if (!mallowdecimaltruncate && cdec.length > ocol.decimalcount) throw new dbfdatatruncateexception("value not set. decimal does not fit and would be truncated. allowdecimaltruncate is set to false. to supress this exception set allowdecimaltruncate to true."); } else cnum = value.tochararray(); //throw an exception if integer overflow would occur if (!mallowintegertruncate && cnum.length > ocol.length - ocol.decimalcount - 1) throw new dbfdatatruncateexception("value not set. integer does not fit and would be truncated. allowintegertruncate is set to false. to supress this exception set allowintegertruncate to true, although that is not recomended."); //clear all decimals, set to 0. 
//----------------------------------------------------- buffer.blockcopy(mdecimalclear, 0, mdata, (ocol.dataaddress + ocol.length - ocol.decimalcount), ocol.decimalcount); //clear all numbers, set to [space]. buffer.blockcopy(memptyrecord, 0, mdata, ocol.dataaddress, (ocol.length - ocol.decimalcount)); //set decimal numbers, careful not to overflow buffer! (truncate instead) //----------------------------------------------------------------------- if (nidxdecimal > -1) { int nlen = cdec.length > ocol.decimalcount ? ocol.decimalcount : cdec.length; encoding.getbytes(cdec, 0, nlen, mdata, (ocol.dataaddress + ocol.length - ocol.decimalcount)); } //set integer part, careful not to overflow buffer! (truncate instead) //----------------------------------------------------------------------- int nnumlen = cnum.length > ocol.length - ocol.decimalcount - 1 ? (ocol.length - ocol.decimalcount - 1) : cnum.length; encoding.getbytes(cnum, 0, nnumlen, mdata, ocol.dataaddress + ocol.length - ocol.decimalcount - nnumlen - 1); //set decimal point //----------------------------------------------------------------------- mdata[ocol.dataaddress + ocol.length - ocol.decimalcount - 1] = (byte)'.'; } } else if (ocoltype == dbfcolumn.dbfcolumntype.integer) { //note this is a binary integer type! //---------------------------------------------- ///todo: maybe there is a better way to copy 4 bytes from int to byte array. some memory function or something. mtempintval[0] = convert.toint32(value); buffer.blockcopy(mtempintval, 0, mdata, ocol.dataaddress, 4); } else if (ocoltype == dbfcolumn.dbfcolumntype.memo) { //copy 10 digits... 
///todo: implement memo throw new notimplementedexception("memo data type functionality not implemented yet!"); } else if (ocoltype == dbfcolumn.dbfcolumntype.boolean) { if (string.compare(value, "true", true) == 0 || string.compare(value, "1", true) == 0 || string.compare(value, "t", true) == 0 || string.compare(value, "yes", true) == 0 || string.compare(value, "y", true) == 0) mdata[ocol.dataaddress] = (byte)'t'; else if (value == " " || value == "?") mdata[ocol.dataaddress] = (byte)'?'; else mdata[ocol.dataaddress] = (byte)'f'; } else if (ocoltype == dbfcolumn.dbfcolumntype.date) { //try to parse out date value using date.parse() function, then set the value datetime dateval; if (datetime.tryparse(value, out dateval)) { setdatevalue(ncolindex, dateval); } else throw new invalidoperationexception("date could not be parsed from source string! please parse the date and set the value (you can try using datetime.parse() or datetime.tryparse() functions)."); } else if (ocoltype == dbfcolumn.dbfcolumntype.binary) throw new invalidoperationexception("can not use string source to set binary data. use setbinaryvalue() and getbinaryvalue() functions instead."); else throw new invaliddataexception("unrecognized data type: " + ocoltype.tostring()); } } get { dbfcolumn ocol = mheader[ncolindex]; return new string(encoding.getchars(mdata, ocol.dataaddress, ocol.length)); } } /// <summary> /// set string data to a column, if the string is longer than specified column length it will be truncated! /// if dbf column type is not a string, input will be treated as dbf column /// type and if longer than length an exception will be thrown. 
/// </summary> /// <param name="ncolname"></param> /// <returns></returns> public string this[string ncolname] { get { if (mcolnametoconidx.containskey(ncolname)) return this[mcolnametoconidx[ncolname]]; throw new invalidoperationexception(string.format("there's no column with name '{0}'", ncolname)); } set { if (mcolnametoconidx.containskey(ncolname)) this[mcolnametoconidx[ncolname]] = value; else throw new invalidoperationexception(string.format("there's no column with name '{0}'", ncolname)); } } /// <summary> /// get date value. /// </summary> /// <param name="ncolindex"></param> /// <returns></returns> public datetime getdatevalue(int ncolindex) { dbfcolumn ocol = mheader[ncolindex]; if (ocol.columntype == dbfcolumn.dbfcolumntype.date) { string sdateval = encoding.getstring(mdata, ocol.dataaddress, ocol.length); return datetime.parseexact(sdateval, "yyyymmdd", cultureinfo.invariantculture); } else throw new exception("invalid data type. column '" + ocol.name + "' is not a date column."); } /// <summary> /// get date value. /// </summary> /// <param name="ncolindex"></param> /// <returns></returns> public void setdatevalue(int ncolindex, datetime value) { dbfcolumn ocol = mheader[ncolindex]; dbfcolumn.dbfcolumntype ocoltype = ocol.columntype; if (ocoltype == dbfcolumn.dbfcolumntype.date) { //format date and set value, date format is like this: yyyymmdd //------------------------------------------------------------- encoding.getbytes(value.tostring("yyyymmdd"), 0, ocol.length, mdata, ocol.dataaddress); } else throw new exception("invalid data type. column is of '" + ocol.columntype.tostring() + "' type, not date."); } /// <summary> /// clears all data in the record. /// </summary> public void clear() { buffer.blockcopy(memptyrecord, 0, mdata, 0, memptyrecord.length); mrecordindex = -1; } /// <summary> /// returns a string representation of this record. 
/// </summary> /// <returns></returns> public override string tostring() { return new string(encoding.getchars(mdata)); } /// <summary> /// gets/sets a zero based record index. this information is not directly stored in dbf. /// it is the location of this record within the dbf. /// </summary> /// <remarks> /// this property is managed from outside this object, /// cdbffile object updates it when records are read. the reason we don't set it in the read() /// function within this object is that the stream can be forward-only so the position property /// is not available and there is no way to figure out what index the record was unless you /// count how many records were read, and that's exactly what cdbffile does. /// </remarks> public int recordindex { get { return mrecordindex; } set { mrecordindex = value; } } /// <summary> /// returns/sets flag indicating whether this record was tagged deleted. /// </summary> /// <remarks>use cdbf4file.compress() function to rewrite dbf removing records flagged as deleted.</remarks> /// <seealso cref="cdbf4file.compress() function"/> public bool isdeleted { get { return mdata[0] == '*'; } set { mdata[0] = value ? (byte)'*' : (byte)' '; } } /// <summary> /// specifies whether strings can be truncated. if false and string is longer than can fit in the field, an exception is thrown. /// default is true. /// </summary> public bool allowstringturncate { get { return mallowstringtruncate; } set { mallowstringtruncate = value; } } /// <summary> /// specifies whether to allow the decimal portion of numbers to be truncated. /// if false and decimal digits overflow the field, an exception is thrown. default is false. /// </summary> public bool allowdecimaltruncate { get { return mallowdecimaltruncate; } set { mallowdecimaltruncate = value; } } /// <summary> /// specifies whether integer portion of numbers can be truncated. /// if false and integer digits overflow the field, an exception is thrown. /// default is false. 
/// </summary> public bool allowintegertruncate { get { return mallowintegertruncate; } set { mallowintegertruncate = value; } } /// <summary> /// returns header object associated with this record. /// </summary> public dbfheader header { get { return mheader; } } /// <summary> /// get column by index. /// </summary> /// <param name="index"></param> /// <returns></returns> public dbfcolumn column(int index) { return mheader[index]; } /// <summary> /// get column by name. /// </summary> /// <param name="index"></param> /// <returns></returns> public dbfcolumn column(string sname) { return mheader[sname]; } /// <summary> /// gets column count from header. /// </summary> public int columncount { get { return mheader.columncount; } } /// <summary> /// finds a column index by searching sequentially through the list. case is ignored. returns -1 if not found. /// </summary> /// <param name="sname">column name.</param> /// <returns>column index (0 based) or -1 if not found.</returns> public int findcolumn(string sname) { return mheader.findcolumn(sname); } /// <summary> /// writes data to stream. make sure stream is positioned correctly because we simply write out the data to it. /// </summary> /// <param name="osw"></param> protected internal void write(stream osw) { osw.write(mdata, 0, mdata.length); } /// <summary> /// writes data to stream. make sure stream is positioned correctly because we simply write out data to it, and clear the record. /// </summary> /// <param name="osw"></param> protected internal void write(stream obw, bool bclearrecordafterwrite) { obw.write(mdata, 0, mdata.length); if (bclearrecordafterwrite) clear(); } /// <summary> /// read record from stream. returns true if record read completely, otherwise returns false. 
/// </summary> /// <param name="obr"></param> /// <returns></returns> protected internal bool read(stream obr) { return obr.read(mdata, 0, mdata.length) >= mdata.length; } protected internal string readvalue(stream obr, int colindex) { dbfcolumn ocol = mheader[colindex]; return new string(encoding.getchars(mdata, ocol.dataaddress, ocol.length)); } } }
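As the class summary notes, a single DbfRecord can be reused for every row written; clearing the shared buffer avoids per-row allocations. A sketch of that pattern (assuming an already-opened DbfFile named odbf; note the truncation property really is spelled AllowStringTurncate in this version of the library):

```csharp
var rec = new DbfRecord(odbf.Header) { AllowStringTurncate = true };
for (int n = 0; n < 100; n++)
{
    rec.Clear();             // blank the shared buffer (block-copied from the empty record)
    rec[0] = "row " + n;     // set column 0 by index
    odbf.Write(rec, true);   // write, then clear the record for the next iteration
}
```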
Step 3: create the DBF file
public void CreateDbf(string dbfPath, string dbfName)
{
    // code page 936 = gbk, so chinese field data round-trips correctly
    var odbf = new DbfFile(Encoding.GetEncoding(936));
    odbf.Open(Path.Combine(dbfPath, dbfName + ".dbf"), FileMode.Create);

    // create the columns
    odbf.Header.AddColumn(new DbfColumn("yhbh", DbfColumn.DbfColumnType.Character, 20, 0));
    odbf.Header.AddColumn(new DbfColumn("sbbh", DbfColumn.DbfColumnType.Character, 20, 0));
    odbf.Header.AddColumn(new DbfColumn("yhmc", DbfColumn.DbfColumnType.Character, 64, 0));
    odbf.Header.AddColumn(new DbfColumn("yhzz", DbfColumn.DbfColumnType.Character, 100, 0));
    odbf.Header.AddColumn(new DbfColumn("sbwz", DbfColumn.DbfColumnType.Character, 100, 0));
    odbf.Header.AddColumn(new DbfColumn("dh", DbfColumn.DbfColumnType.Character, 50, 0));
    odbf.Header.AddColumn(new DbfColumn("yddh", DbfColumn.DbfColumnType.Character, 50, 0));
    odbf.Header.AddColumn(new DbfColumn("cbbh", DbfColumn.DbfColumnType.Character, 8, 0));
    odbf.Header.AddColumn(new DbfColumn("cbxh", DbfColumn.DbfColumnType.Number, 8, 0));
    odbf.Header.AddColumn(new DbfColumn("ysxz", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("sbqd", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("bjsl", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("sbzd", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("sjys", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("cbrq", DbfColumn.DbfColumnType.Character, 20, 0));
    odbf.Header.AddColumn(new DbfColumn("sbyxzt", DbfColumn.DbfColumnType.Character, 30, 0));
    odbf.Header.AddColumn(new DbfColumn("sfgs", DbfColumn.DbfColumnType.Character, 1, 0));
    odbf.Header.AddColumn(new DbfColumn("cbbz", DbfColumn.DbfColumnType.Character, 1, 0));
    odbf.Header.AddColumn(new DbfColumn("sbkj", DbfColumn.DbfColumnType.Number, 8, 0));
    odbf.Header.AddColumn(new DbfColumn("qyl", DbfColumn.DbfColumnType.Number, 8, 0));
    odbf.Header.AddColumn(new DbfColumn("qfje", DbfColumn.DbfColumnType.Number, 15, 2));
    odbf.Header.AddColumn(new DbfColumn("yhdj", DbfColumn.DbfColumnType.Number, 8, 2));
    odbf.Header.AddColumn(new DbfColumn("sccbrq", DbfColumn.DbfColumnType.Character, 20, 0));
    odbf.Header.AddColumn(new DbfColumn("scsl", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("sccjd", DbfColumn.DbfColumnType.Number, 12, 0));
    odbf.Header.AddColumn(new DbfColumn("isupdate", DbfColumn.DbfColumnType.Character, 1, 0));

    // (continued in step 4, where the records are written and the file is closed)
Step 4: Write the data to the DBF
    // One reusable record; AllowDecimalTruncate silently truncates excess decimal digits
    var orec = new DbfRecord(odbf.Header) { AllowDecimalTruncate = true };
    foreach (var item in writeDtoList)
    {
        // Field indices must match the order the columns were added to the header;
        // columns 5 (dh), 6 (yddh), 18 (sbkj), 20 (qfje) and the rest not set here stay blank
        orec[0] = item.yhbh;
        orec[1] = item.sbbh;
        orec[2] = item.yhmc;
        orec[3] = item.yhzz;
        orec[4] = item.sbwz;
        orec[7] = item.cbbh;
        orec[8] = item.cbxh.ToString();
        orec[9] = item.ysxz;
        orec[10] = item.sbqd.ToString();
        orec[11] = item.bjsl.ToString();
        orec[12] = item.sbzd.ToString();
        orec[13] = item.sjys.ToString();
        orec[14] = item.cbrq;
        orec[15] = item.sbyxzt;
        orec[16] = item.sfgs;
        orec[17] = item.cbbz;
        orec[19] = item.qyl.ToString();
        orec[21] = item.yhdj.ToString();
        odbf.Write(orec, true);
    }
    odbf.Close();
}
Here writeDtoList is the data source, declared as: List<WriteDto> writeDtoList = new List<WriteDto>();
Reading a DBF: converting the DBF file to a DataTable
/// <summary>
/// Read a DBF file into a DataTable
/// </summary>
/// <param name="fileName">Full path of the DBF file, e.g. E:\2222.dbf</param>
/// <returns>A DataTable holding the DBF's rows</returns>
public static DataTable DbfToDataTable(string fileName)
{
    // The result table
    DataTable dt = new DataTable();

    // Open the DBF file
    DbfFile dbf = new DbfFile(Encoding.Default);
    dbf.Open(fileName, FileMode.Open);
    try
    {
        // Build the DataTable schema (column names) from the DBF header
        DbfHeader dh = dbf.Header;
        for (int index = 0; index < dh.ColumnCount; index++)
        {
            dt.Columns.Add(dh[index].Name);
        }

        // Load the rows into the DataTable, reading each record once
        int i = 0;
        DbfRecord record;
        while ((record = dbf.Read(i)) != null)
        {
            DataRow dr = dt.NewRow();
            object[] objs = new object[record.ColumnCount];
            for (int index = 0; index < record.ColumnCount; index++)
            {
                objs[index] = record[index];
            }
            dr.ItemArray = objs;
            dt.Rows.Add(dr);
            i++;
        }
        return dt;
    }
    finally
    {
        // Ensure the file handle is released even if reading fails
        dbf.Close();
    }
}
The WriteDto model:
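The original post does not show the model's definition. A minimal sketch can be inferred from the properties accessed in the write loop above; the property types are assumptions (string where the value is assigned directly, int where .ToString() is called), not the author's actual declaration:

```csharp
// Hypothetical reconstruction of the WriteDto model used in Step 4.
// Properties assigned directly to the record are assumed to be strings;
// those written via .ToString() are assumed to be numeric.
public class WriteDto
{
    public string yhbh { get; set; }    // 用户编号 user number
    public string sbbh { get; set; }    // 设备编号 meter/device number
    public string yhmc { get; set; }    // 用户名称 user name
    public string yhzz { get; set; }    // 用户住址 user address
    public string sbwz { get; set; }    // 设备位置 device location
    public string cbbh { get; set; }
    public int cbxh { get; set; }
    public string ysxz { get; set; }
    public int sbqd { get; set; }
    public int bjsl { get; set; }
    public int sbzd { get; set; }
    public int sjys { get; set; }
    public string cbrq { get; set; }    // 抄表日期 meter-reading date
    public string sbyxzt { get; set; }
    public string sfgs { get; set; }
    public string cbbz { get; set; }
    public int qyl { get; set; }
    public decimal yhdj { get; set; }   // unit price, matches the Number(8,2) column
}
```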