Java SE IO Explained
- IO can be divided into the following two kinds:
  - Byte streams
    - InputStream
    - OutputStream
  - Character streams
    - Reader
    - Writer
1 Reading data with byte streams: InputStream
/**
* This abstract class is the superclass of all classes representing
* an input stream of bytes.
*
* <p> Applications that need to define a subclass of <code>InputStream</code>
* must always provide a method that returns the next byte of input.
*
* @author Arthur van Hoff
* @see java.io.BufferedInputStream
* @see java.io.ByteArrayInputStream
* @see java.io.DataInputStream
* @see java.io.FilterInputStream
* @see java.io.InputStream#read()
* @see java.io.OutputStream
* @see java.io.PushbackInputStream
* @since JDK1.0
*/
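The core contract described in the Javadoc above is read(): it returns the next byte of input as an int in the range 0 to 255, or -1 at end of stream. Below is a minimal byte-by-byte read sketch, not part of the original article, reusing the wc.data path from the examples that follow:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SingleByteReadApp {

    public static void main(String[] args) {
        // Any readable file works; the path matches the later examples
        try (InputStream in = new FileInputStream("java-basic/data/wc.data")) {
            int b;
            // read() returns one byte at a time, or -1 at end of stream
            while ((b = in.read()) != -1) {
                System.out.print((char) b);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}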
1.1 Requirement
Read the data of a local file.
1.2 Data
hadoop,spark
hbase,spark
hadoop
1.3 Code
package com.xk.bigdata.java.io.inputstream;

import java.io.FileInputStream;
import java.io.IOException;

public class InputStreamApp {

    public static void main(String[] args) {
        read();
    }

    private static void read() {
        FileInputStream inputStream = null;
        try {
            // Create a file input byte stream
            inputStream = new FileInputStream("java-basic/data/wc.data");
            byte[] buffer = new byte[1024];
            int length = 0;
            while ((length = inputStream.read(buffer, 0, buffer.length)) != -1) {
                String res = new String(buffer, 0, length);
                System.out.println(res);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (null != inputStream) {
                try {
                    inputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
1.4 Result
hadoop,spark
hbase,spark
hadoop
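Since Java 7 the same read can also be written with try-with-resources, which closes the stream automatically even when an exception is thrown. A sketch equivalent to the code above:

import java.io.FileInputStream;
import java.io.IOException;

public class InputStreamTryWithResourcesApp {

    public static void main(String[] args) {
        // The stream is closed automatically when the try block exits
        try (FileInputStream inputStream = new FileInputStream("java-basic/data/wc.data")) {
            byte[] buffer = new byte[1024];
            int length;
            while ((length = inputStream.read(buffer, 0, buffer.length)) != -1) {
                System.out.print(new String(buffer, 0, length));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}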
2 Writing data with byte streams: OutputStream
/**
* This abstract class is the superclass of all classes representing
* an output stream of bytes. An output stream accepts output bytes
* and sends them to some sink.
* <p>
* Applications that need to define a subclass of
* <code>OutputStream</code> must always provide at least a method
* that writes one byte of output.
*
* @author Arthur van Hoff
* @see java.io.BufferedOutputStream
* @see java.io.ByteArrayOutputStream
* @see java.io.DataOutputStream
* @see java.io.FilterOutputStream
* @see java.io.InputStream
* @see java.io.OutputStream#write(int)
* @since JDK1.0
*/
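The output-side counterpart of read() is write(int b), which writes the low 8 bits of the argument as a single byte. A minimal sketch, assuming a hypothetical output path:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SingleByteWriteApp {

    public static void main(String[] args) {
        // Output path is made up for illustration
        try (OutputStream out = new FileOutputStream("java-basic/out/bytes.data")) {
            for (byte b : "hadoop".getBytes()) {
                out.write(b); // writes one byte at a time
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}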
2.1 Requirement
Write the contents of wc.data into out/wc.data.
2.2 Data
hadoop,spark
hbase,spark
hadoop
2.3 Code
package com.xk.bigdata.java.io.outputstream;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class OutputStreamApp {

    public static void main(String[] args) {
        write();
    }

    private static void write() {
        FileInputStream inputStream = null;
        FileOutputStream outputStream = null;
        try {
            // Create the file byte input and output streams
            inputStream = new FileInputStream("java-basic/data/wc.data");
            outputStream = new FileOutputStream("java-basic/out/wc.data");
            byte[] buffer = new byte[1024];
            int length = 0;
            while ((length = inputStream.read(buffer, 0, buffer.length)) != -1) {
                outputStream.write(buffer, 0, length);
                // Flush the output byte stream
                outputStream.flush();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (null != inputStream) {
                try {
                    inputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (null != outputStream) {
                try {
                    outputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
2.4 Result
hadoop,spark
hbase,spark
hadoop
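For a plain file-to-file copy like this one, java.nio.file.Files.copy does the same job in one call. A sketch using the same paths as above, assuming the out directory already exists:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NioCopyApp {

    public static void main(String[] args) {
        try {
            // Copies the whole file; REPLACE_EXISTING overwrites output from a previous run
            Files.copy(Paths.get("java-basic/data/wc.data"),
                       Paths.get("java-basic/out/wc.data"),
                       StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}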
3 Reading data with character streams: Reader
/**
* Abstract class for reading character streams. The only methods that a
* subclass must implement are read(char[], int, int) and close(). Most
* subclasses, however, will override some of the methods defined here in order
* to provide higher efficiency, additional functionality, or both.
*
*
* @see BufferedReader
* @see LineNumberReader
* @see CharArrayReader
* @see InputStreamReader
* @see FileReader
* @see FilterReader
* @see PushbackReader
* @see PipedReader
* @see StringReader
* @see Writer
*
* @author Mark Reinhold
* @since JDK1.1
*/
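As the Javadoc notes, a Reader subclass only has to implement read(char[], int, int) and close(). A toy, purely illustrative subclass that reads from an in-memory String (the class name is made up):

import java.io.IOException;
import java.io.Reader;

public class StringBackedReader extends Reader {

    private final String source;
    private int pos = 0;

    public StringBackedReader(String source) {
        this.source = source;
    }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
        if (pos >= source.length()) {
            return -1; // end of stream
        }
        int n = Math.min(len, source.length() - pos);
        source.getChars(pos, pos + n, cbuf, off);
        pos += n;
        return n;
    }

    @Override
    public void close() {
        // nothing to release for an in-memory source
    }
}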
3.1 Requirement
Read the data in wc.data.
3.2 Data
hadoop,spark
hbase,spark
hadoop
3.3 Code
package com.xk.bigdata.java.io.reader;

import java.io.FileReader;
import java.io.IOException;

public class ReaderApp {

    public static void main(String[] args) {
        read();
    }

    private static void read() {
        FileReader reader = null;
        try {
            // Create a character input stream
            reader = new FileReader("java-basic/data/wc.data");
            char[] buffer = new char[1024];
            int length = 0;
            while ((length = reader.read(buffer, 0, buffer.length)) != -1) {
                String result = new String(buffer, 0, length);
                System.out.println(result);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (null != reader) {
                try {
                    reader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
3.4 Result
hadoop,spark
hbase,spark
hadoop
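Note that FileReader decodes with the platform default charset. When the file's encoding must be explicit, wrap a FileInputStream in an InputStreamReader with a named charset. A sketch assuming the data is UTF-8:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class ReaderWithCharsetApp {

    public static void main(String[] args) {
        // Decode explicitly as UTF-8 instead of the platform default charset
        try (Reader reader = new InputStreamReader(
                new FileInputStream("java-basic/data/wc.data"), StandardCharsets.UTF_8)) {
            char[] buffer = new char[1024];
            int length;
            while ((length = reader.read(buffer, 0, buffer.length)) != -1) {
                System.out.print(new String(buffer, 0, length));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}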
4 Writing data with character streams: Writer
4.1 Requirement
Write Hello World into out/wc.data.
4.2 Code
package com.xk.bigdata.java.io.write;

import java.io.FileWriter;
import java.io.IOException;

public class WriteApp {

    public static void main(String[] args) {
        write();
    }

    private static void write() {
        FileWriter writer = null;
        try {
            // Create a character output stream
            writer = new FileWriter("java-basic/out/wc.data");
            writer.write("Hello World");
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (null != writer) {
                try {
                    writer.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
4.3 Result
Hello World
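FileWriter overwrites the target by default; passing true as the second constructor argument switches it to append mode. A small sketch reusing the output path above:

import java.io.FileWriter;
import java.io.IOException;

public class AppendWriteApp {

    public static void main(String[] args) {
        // The second argument 'true' appends instead of overwriting
        try (FileWriter writer = new FileWriter("java-basic/out/wc.data", true)) {
            writer.write("Hello World Again");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}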
5 Converting a byte stream to a character stream for reading: BufferedReader
- BufferedReader
/**
* Reads text from a character-input stream, buffering characters so as to
* provide for the efficient reading of characters, arrays, and lines.
*
* <p> The buffer size may be specified, or the default size may be used. The
* default is large enough for most purposes.
*
* <p> In general, each read request made of a Reader causes a corresponding
* read request to be made of the underlying character or byte stream. It is
* therefore advisable to wrap a BufferedReader around any Reader whose read()
* operations may be costly, such as FileReaders and InputStreamReaders. For
* example,
*
* <pre>
* BufferedReader in
* = new BufferedReader(new FileReader("foo.in"));
* </pre>
*
* will buffer the input from the specified file. Without buffering, each
* invocation of read() or readLine() could cause bytes to be read from the
* file, converted into characters, and then returned, which can be very
* inefficient.
*
* <p> Programs that use DataInputStreams for textual input can be localized by
* replacing each DataInputStream with an appropriate BufferedReader.
*
* @see FileReader
* @see InputStreamReader
* @see java.nio.file.Files#newBufferedReader
*
* @author Mark Reinhold
* @since JDK1.1
*/
- InputStreamReader
/**
* An InputStreamReader is a bridge from byte streams to character streams: It
* reads bytes and decodes them into characters using a specified {@link
* java.nio.charset.Charset charset}. The charset that it uses
* may be specified by name or may be given explicitly, or the platform's
* default charset may be accepted.
*
* <p> Each invocation of one of an InputStreamReader's read() methods may
* cause one or more bytes to be read from the underlying byte-input stream.
* To enable the efficient conversion of bytes to characters, more bytes may
* be read ahead from the underlying stream than are necessary to satisfy the
* current read operation.
*
* <p> For top efficiency, consider wrapping an InputStreamReader within a
* BufferedReader. For example:
*
* <pre>
* BufferedReader in
* = new BufferedReader(new InputStreamReader(System.in));
* </pre>
*
* @see BufferedReader
* @see InputStream
* @see java.nio.charset.Charset
*
* @author Mark Reinhold
* @since JDK1.1
*/
- Read the data with a byte stream
- Convert the byte stream to a character stream
5.1 Requirement
Read the data in wc.data.
5.2 Data
hadoop,spark
hbase,spark
hadoop
5.3 Code
package com.xk.bigdata.java.io.buffered;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class BufferedReaderApp {

    public static void main(String[] args) {
        read();
    }

    private static void read() {
        BufferedReader reader = null;
        String result = null;
        try {
            // Wrap the byte input stream in a character reader, then buffer it
            reader = new BufferedReader(new InputStreamReader(new FileInputStream("java-basic/data/wc.data")));
            while ((result = reader.readLine()) != null) {
                System.out.println(result);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (null != reader) {
                try {
                    reader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
5.4 Result
hadoop,spark
hbase,spark
hadoop
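The same byte-to-character bridge works for console input, as the InputStreamReader Javadoc suggests with System.in. A sketch, not from the original article, that echoes each line typed on standard input:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ConsoleEchoApp {

    public static void main(String[] args) {
        // System.in is a byte stream; InputStreamReader turns it into characters
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("echo: " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}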
6 Converting a character stream to a byte stream for writing: BufferedWriter
- BufferedWriter
/**
* Writes text to a character-output stream, buffering characters so as to
* provide for the efficient writing of single characters, arrays, and strings.
*
* <p> The buffer size may be specified, or the default size may be accepted.
* The default is large enough for most purposes.
*
* <p> A newLine() method is provided, which uses the platform's own notion of
* line separator as defined by the system property <tt>line.separator</tt>.
* Not all platforms use the newline character ('\n') to terminate lines.
* Calling this method to terminate each output line is therefore preferred to
* writing a newline character directly.
*
* <p> In general, a Writer sends its output immediately to the underlying
* character or byte stream. Unless prompt output is required, it is advisable
* to wrap a BufferedWriter around any Writer whose write() operations may be
* costly, such as FileWriters and OutputStreamWriters. For example,
*
* <pre>
* PrintWriter out
* = new PrintWriter(new BufferedWriter(new FileWriter("foo.out")));
* </pre>
*
* will buffer the PrintWriter's output to the file. Without buffering, each
* invocation of a print() method would cause characters to be converted into
* bytes that would then be written immediately to the file, which can be very
* inefficient.
*
* @see PrintWriter
* @see FileWriter
* @see OutputStreamWriter
* @see java.nio.file.Files#newBufferedWriter
*
* @author Mark Reinhold
* @since JDK1.1
*/
- OutputStreamWriter
/**
* An OutputStreamWriter is a bridge from character streams to byte streams:
* Characters written to it are encoded into bytes using a specified {@link
* java.nio.charset.Charset charset}. The charset that it uses
* may be specified by name or may be given explicitly, or the platform's
* default charset may be accepted.
*
* <p> Each invocation of a write() method causes the encoding converter to be
* invoked on the given character(s). The resulting bytes are accumulated in a
* buffer before being written to the underlying output stream. The size of
* this buffer may be specified, but by default it is large enough for most
* purposes. Note that the characters passed to the write() methods are not
* buffered.
*
* <p> For top efficiency, consider wrapping an OutputStreamWriter within a
* BufferedWriter so as to avoid frequent converter invocations. For example:
*
* <pre>
* Writer out
* = new BufferedWriter(new OutputStreamWriter(System.out));
* </pre>
*
* <p> A <i>surrogate pair</i> is a character represented by a sequence of two
* <tt>char</tt> values: A <i>high</i> surrogate in the range '\uD800' to
* '\uDBFF' followed by a <i>low</i> surrogate in the range '\uDC00' to
* '\uDFFF'.
*
* <p> A <i>malformed surrogate element</i> is a high surrogate that is not
* followed by a low surrogate or a low surrogate that is not preceded by a
* high surrogate.
*
* <p> This class always replaces malformed surrogate elements and unmappable
* character sequences with the charset's default <i>substitution sequence</i>.
* The {@linkplain java.nio.charset.CharsetEncoder} class should be used when more
* control over the encoding process is required.
*
* @see BufferedWriter
* @see OutputStream
* @see java.nio.charset.Charset
*
* @author Mark Reinhold
* @since JDK1.1
*/
- First convert the data to be written into a byte stream
- Then write the byte stream to the target file
6.1 Requirement
Write Hello World into out/wc.data.
6.2 Code
package com.xk.bigdata.java.io.buffered;

import java.io.*;

public class BufferedWriterApp {

    public static void main(String[] args) {
        write();
    }

    private static void write() {
        BufferedWriter writer = null;
        try {
            writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("java-basic/out/wc.data")));
            writer.write("Hello World");
            writer.flush();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (null != writer) {
                try {
                    writer.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
6.3 Result
Hello World
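As the BufferedWriter Javadoc recommends, newLine() writes the platform line separator rather than a hard-coded '\n'. A sketch writing two lines; the output path here is an assumption:

import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;

public class BufferedWriterNewLineApp {

    public static void main(String[] args) {
        // Hypothetical output path for illustration
        try (BufferedWriter writer = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream("java-basic/out/lines.data")))) {
            writer.write("Hello");
            writer.newLine(); // platform line separator, not a hard-coded '\n'
            writer.write("World");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}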