2017 Problem Log (to be organized) | Blog category: Problem Log | thread
程序员文章站
2024-02-05 17:54:10
Things to watch when querying:
1. Query the read replica to reduce load on the primary
2. Cache data that is not needed in real time
Excel import exception "Cannot get a text value from a numeric cell" - fix:
http://blog.csdn.net/ysughw/article/details/9288307
Double-checked locking problem
ABA problem: CAS plus a version number
Optimistic locking in MySQL
ZooKeeper use cases
Difference between finally, return, and System.exit(0)
Can a Java int value be received as a Byte?
MyBatis XML:
always null-check the collection passed to foreach,
otherwise the generated in () causes a SQL syntax error
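A minimal sketch of the guarded foreach (the statement id, table, and parameter name `idList` are made up for illustration):

```xml
<!-- Guard the foreach so a null/empty list never generates "in ()" -->
<select id="selectByIds" resultType="map">
  SELECT * FROM t_user
  <where>
    <if test="idList != null and idList.size() > 0">
      id IN
      <foreach collection="idList" item="id" open="(" separator="," close=")">
        #{id}
      </foreach>
    </if>
  </where>
</select>
```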
ps aux | grep 'jrcore' | grep nginx
jps (run as root to see other users' processes)
jps lists the JVM process IDs
What is a front-end/back-end separated project? How is it implemented?
What is QPS? How high does it get per day, and what is the daily transaction volume?
Data updates: how is data consistency guaranteed in a distributed environment?
A: Same as optimistic locking. Record a version number; but how is the recorded version read back, and how do we guarantee the version read is correct across a distributed environment?
A: ---
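As a sketch, optimistic locking with a version column might look like this (table and column names are hypothetical):

```sql
-- Read the current version together with the row
SELECT id, balance, version FROM account WHERE id = 1;

-- Update only if nobody else bumped the version in the meantime
UPDATE account
SET balance = 90, version = version + 1
WHERE id = 1 AND version = 7;
-- 0 affected rows means a concurrent update won; retry or fail
```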
Excel import/export: the import/export logic lives in the core layer (the service) and is reached over HTTP. The common POI sample code found online takes the servlet response as a parameter, and passing that parameter down to the core layer fails with:
Could not write content: Infinite recursion (StackOverflowError) (through reference chain:
The map being passed held the response and request objects, and serializing that map to JSON fails.
The REST-style HTTP access shows a related problem:
the result inside the received JRDResponse<PageDto> is a LinkedHashMap rather than the PO?
Calling the interface, the List inside the returned PageDto arrives as an Object[] whose elements are LinkedHashMap.
Cause: the List result in PageDto has no generic type specified.
Explanation: https://www.cnblogs.com/timlong/p/3916240.html
Type erasure: during deserialization Jackson cannot determine the List's element type, so it falls back to LinkedHashMap.
Returning List<T> with an explicit type (e.g. via TypeReference) avoids the problem:
http://blog.csdn.net/wantken/article/details/42643901
ObjectMapper mapper = new ObjectMapper();
try {
    PageDTO myObjects =
            mapper.readValue(response.getEntity().toString(), new TypeReference<PageDTO>() {});
    logger.info("myObjects:" + myObjects);
} catch (IOException e) {
    // JsonParseException and JsonMappingException both extend IOException,
    // so a single catch covers all three cases
    logger.error("failed to deserialize PageDTO", e);
}
Life isn't about waiting for the storm to pass, it's about learning to dance in the rain.
You weren't there when the sun set behind the western hills; who are you to show up when I rule the world?
You weren't there for the comeback either.
----------------------------------
Front-end/back-end separated project:
Receiving Chinese characters in parameters:
Local environment:
sourceName = URLDecoder.decode(sourceName, "UTF-8"); // has no effect
sourceName = new String(sourceName.getBytes("ISO-8859-1"), "UTF-8"); // fixes the garbled text
Other environments:
sourceName = URLDecoder.decode(sourceName, "UTF-8"); // fixes the garbled text
sourceName = new String(sourceName.getBytes("ISO-8859-1"), "UTF-8"); // garbles the characters instead
An environment issue? The request encoding differs per environment.
Times submitted by the front end arrive as Long values and need conversion.
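For those Long timestamps, a minimal conversion sketch (the value is the epoch-millis example used elsewhere in these notes):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class LongToDateDemo {
    public static void main(String[] args) {
        long submitted = 1314842011312L;       // epoch millis from the front end
        Date date = new Date(submitted);       // java.util.Date wraps millis directly
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        System.out.println(fmt.format(date));
    }
}
```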
----------------------------------------
Using soft references as a cache for sensitive data:
PO class: the entity class of the sensitive data
ReferenceQueue<PO> queue: the GC enqueues cleared soft references here
new SoftReference<PO>(po, queue): create a soft reference to the PO instance, registered with the queue
PUT
store the SoftReference in the cache map
GET
call get() on the SoftReference; a null result means the GC has reclaimed the value
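A minimal stand-alone sketch of such a cache (PO stands in for the real entity class); note the ReferenceQueue is only used to prune stale map entries, not to store the values:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache {
    static class PO { final String id; PO(String id) { this.id = id; } }

    private final Map<String, SoftReference<PO>> cache = new HashMap<>();
    private final ReferenceQueue<PO> queue = new ReferenceQueue<>();

    public void put(String key, PO po) {
        expungeCleared();
        cache.put(key, new SoftReference<>(po, queue)); // registered with the queue
    }

    public PO get(String key) {
        SoftReference<PO> ref = cache.get(key);
        return ref == null ? null : ref.get();          // null once the GC reclaimed it
    }

    // Drop map entries whose referents were cleared (the GC enqueued them)
    private void expungeCleared() {
        Reference<? extends PO> r;
        while ((r = queue.poll()) != null) {
            cache.values().remove(r);
        }
    }

    public static void main(String[] args) {
        SoftCache c = new SoftCache();
        c.put("p1", new PO("p1"));
        System.out.println(c.get("p1").id);
    }
}
```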
---------------------------------------
select count(c_user) from
(select user_id as c_user from jr_td_project_invest union select c_user from p2p_td_project_invest ) as count_investor;
-- 改写成:
select count(distinct c_user) from
(select distinct user_id as c_user from jr_td_project_invest union all select distinct c_user from p2p_td_project_invest ) as count_investor;
distinct removes duplicate rows
union all merges without deduplicating, i.e. every row from both sides is kept
union merges and deduplicates
The original: merge two large sets with a deduplicating union, then count
The rewrite: deduplicate each set first, merge them with union all, then deduplicate again in the count
explain shows the rewrite's rows estimate (the number of rows MySQL thinks it must examine to answer the query)
is about 50% lower than the original
170638
6.1 UNION vs UNION ALL efficiency
In terms of efficiency, UNION ALL is much faster than UNION.
UNION ALL simply concatenates the two result sets and returns them,
so if the two result sets share rows,
the returned result contains duplicates.
UNION filters out duplicate records after combining the tables,
which means sorting the combined result set
and deleting duplicate rows before returning.
If you can be sure the two result sets contain no duplicate rows,
use UNION ALL.
http://blog.csdn.net/jesseyoung/article/details/40427849
union all merges rows:
e.g. select name,age,sex,telphone from A union all select name,age,sex,telphone from B
merges the name,age,sex,telphone columns of A and B; the column types must match, and rows are merged whether or not they are identical. union, on the other hand, first runs a distinct to drop identical rows and then merges.
http://blog.csdn.net/liuxiaohuago/article/details/7075371
distinct compares across the full result set to produce the deduplicated output
---------------------------------------
JSP custom tag library
Pagination tag
1. Add the dependency in pom.xml
2. Write a tag class extending TagSupport
3. Write the matching *.tld file
4. Register it in web.xml
5. Use it:
<%@ taglib prefix="urlChangePage" uri="/urlChangePage"%>
<urlChangePage:pages /> where pages comes from setAttribute("pages", pages) on the Java side
or
<%@ taglib uri="/dateConvert" prefix="ct"%>
<ct:longStr longTime="1314842011312"></ct:longStr>
Whether the web.xml entry is needed depends on where the file is placed?
Put the *.tld file under WEB-INF
http://ldq2010-163-com.iteye.com/blog/1163236
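For step 3, a minimal *.tld sketch matching the dateConvert usage above (the tag class name is hypothetical; real files usually also carry the full schema attributes):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<taglib xmlns="http://java.sun.com/xml/ns/j2ee" version="2.0">
  <tlib-version>1.0</tlib-version>
  <short-name>ct</short-name>
  <uri>/dateConvert</uri>
  <tag>
    <name>longStr</name>
    <tag-class>com.example.tag.LongStrTag</tag-class> <!-- hypothetical class -->
    <body-content>empty</body-content>
    <attribute>
      <name>longTime</name>
      <required>true</required>
      <rtexprvalue>true</rtexprvalue>
    </attribute>
  </tag>
</taglib>
```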
-------------------------------------------
--------------------------------------------
Eclipse: Tomcat publish / clean in the Servers view
"Publishing failed with multiple errors: file not found"
Cause: project files were deleted, but Eclipse still showed them as present.
Fix: refresh the project, then redeploy and publish.
Refreshing the project was enough to resolve it:
files in the project had changed while Eclipse showed stale state; the workspace was out of sync with the file system, and a manual refresh of the resource view fixed it.
Added a revision folder under CSS, then an about subfolder, and put a .css file inside about;
after startup the file could not be found.
Fix: move the CSS file directly into the revision folder.
--------------------------------------------
href = "/about": resolved against the domain root
href = "about": resolved against the current path, i.e. <current path>/about
--------------------------------------------
Interview topics: middleware written, composite indexes, open-source code, project introductions
High-concurrency online solutions (not batch jobs)
Stack overflow: how to handle it
Common commands: jps, jstack, jconsole
"xxx class not found":
add the project containing the missing class to the build path
tinyint(1) maps to Boolean
tinyint(3) maps to Byte
----------------------------------------------
SQL: SELECT C_FINANCED FROM p2p_td_project WHERE C_LOAN_END_DATE LIKE ? AND C_FINANCED > 0
### Cause: java.lang.UnsupportedOperationException
org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.lang.UnsupportedOperationException
like #{xxx} reported the error
xxx = "2017-11%", concatenated on the Java side
resultType = list was the cause of the error
----------------------------------------------
1. Log output
Both log4j
and
logback produce output;
log files show up in both configured output directories.
The listener configured in web.xml is log4j2.
http://blog.csdn.net/liaoxiaohua1981/article/details/6760423
So how does logback take effect?
Which logger should application code use?
http://blog.sina.com.cn/s/blog_4adc4b090102vx0z.html
log4j
Does jrd-log load logback.xml?
Is the log4j output printed?
The logback output is printed.
In jrcore:
org.slf4j.Logger logger = LoggerFactory.getLogger(this.getClass()); // its output is visible in the test environment
org.apache.log4j.Logger logger = Logger.getLogger(this.getClass()); // its output is not visible in the test environment
-------------------------------------------------
2. Eclipse: Open Perspective
A perspective is a visual container holding a set of views and content editors.
3. Distributed locks, pooling
4. ams-log
-------------------------------------------------
"北京1.。"
The byte-order mark appears once at the start of the encoded bytes.
"北" encodes to 4 bytes; "北京" to 6 (a 2-byte BOM plus 2 bytes per character).
"北京1.人。" encodes to:
[-2, -1, 83, 23, 78, -84, 0, 49, 0, 46, 78, -70, 48, 2]
The current editor defaults to big-endian.
public static String bSubstring(String s, int length, int excepLength) {
    try {
        s = s.replaceAll("\\s*", "");
        // "Unicode" = UTF-16 with BOM: 2 BOM bytes, then 2 bytes per char, high byte first
        byte[] bytes = s.getBytes("Unicode");
        int n = 0; // bytes counted so far; length is a byte count, not a character count
        int i = 2; // current array index; start after the 2-byte BOM at indices 0 and 1
        for (; i < bytes.length && n < length; i++) {
            if (i % 2 == 1) {
                // odd index = second (low) byte of a UTF-16 code unit: always counts one byte
                n++;
            } else if (bytes[i] != 0) {
                // non-zero high byte means a CJK character: count an extra byte, so CJK = 2
                n++;
            }
        }
        // if i ends up odd we stopped in the middle of a code unit; realign to a boundary
        if (i % 2 == 1) {
            if (bytes[i - 1] != 0)
                i = i - 1; // half-truncated CJK character: drop it
            else
                i = i + 1; // an ASCII letter/digit: keep the whole code unit
        }
        return new String(bytes, 0, i, "Unicode");
    } catch (Exception e) {
        JrdLogManager.runLog(Level.ERROR, "bSubstring exception :" + e);
        String stripped = s.replaceAll("\\s*", "");
        return stripped.length() > excepLength ? stripped.substring(0, excepLength) : stripped;
    }
}
UTF-8 BOM
UTF-8 无BOM
UCS-2 Big Endian
UCS-2 Little Endian
Unicode code units
Unicode code points
(concepts to review)
------------------------------------------------------------------------------------------------------------------------------
Collections.synchronizedList
Vector
-----------------------------------
Why is String declared final?
https://www.zhihu.com/question/31345592
Counter-example: using StringBuilder as a Set's element type; because it is mutable, a "key" already in the Set can be modified, two keys can end up equal, and the Set's dedup guarantee is broken.
The String constant pool saves storage and improves efficiency in string-heavy code.
String is a reference type, yet it can be used in main just like a primitive with no compilation problems;
it need not be declared static nor instantiated before use.
A class variable declared static String does not have to be final.
http://www.cnblogs.com/ikuman/archive/2013/08/27/3284410.html
Mainly for the sake of "security" and "efficiency".
http://www.cnblogs.com/hellowhy/p/6536590.html
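The counter-example above can be reproduced with any mutable key type (MutableKey here is a made-up class): once the key mutates, its hashCode changes and the Set can no longer find it.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class MutableKey {
    String value;
    MutableKey(String value) { this.value = value; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && Objects.equals(value, ((MutableKey) o).value);
    }
    @Override public int hashCode() { return Objects.hashCode(value); }
}

public class WhyStringIsFinal {
    public static void main(String[] args) {
        Set<MutableKey> set = new HashSet<>();
        MutableKey k = new MutableKey("foo");
        set.add(k);
        k.value = "bar"; // mutate after insertion: hashCode changes, bucket is now wrong
        System.out.println(set.contains(k));                     // false
        System.out.println(set.contains(new MutableKey("foo"))); // false
    }
}
```

Because String is immutable (and final, so no mutable subclass can sneak in), String keys can never get lost this way.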
----------------------------------------------------------------------
MyBatis resultType: int or Integer
http://blog.csdn.net/xiangjai/article/details/53894466
attempted to return null from a method with a primitive return type (int)
The exception above can occur whether resultType is int or Integer.
Fixes:
1. IFNULL(xx, 0) in the SQL to supply a default value
2. return a wrapper type and null-check in the caller
http://blog.csdn.net/iamlihongwei/article/details/72652384
----------------------------------------------------------------------
Configure logback.xml to output the MyBatis logs
Log output levels
Which logging configuration wins in jrcore: log4j, log4j2, or logback?
logback wins; it is configured at the system level inside another JAR that jrcore depends on, and that JAR carries the configuration: jr-log
-----------------------------------------------------------------------
The M (mobile) side receives garbled Chinese; fix:
address = URLDecoder.decode(address, "UTF-8");
How for-each works: it is backed by an Iterator,
so removing through the collection itself makes the iterator's next advance fail (ConcurrentModificationException)
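A small demonstration of both the failure and the safe Iterator.remove pattern:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class ForeachRemoveDemo {
    public static void main(String[] args) {
        // Removing through the collection while a for-each loop iterates fails:
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        try {
            for (String s : list) {                 // compiles down to an Iterator loop
                if ("a".equals(s)) list.remove(s);  // structural change bypasses the iterator
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("CME from for-each remove");
        }

        // Safe pattern: remove through the iterator itself
        List<String> safe = new ArrayList<>(Arrays.asList("a", "b", "c"));
        for (Iterator<String> it = safe.iterator(); it.hasNext(); ) {
            if ("a".equals(it.next())) it.remove();
        }
        System.out.println(safe); // [b, c]
    }
}
```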
--------------------------------------------------
Tomcat startup OutOfMemoryError in Eclipse:
http://blog.csdn.net/suigaopeng/article/details/26720719
--------------------------------------------------------------------------------
A MyBatis mapper method returns nothing, but running the same SQL directly returns rows.
MyBatis
interface call returns an empty result:
1. Using * in the select list, so the table columns cannot be mapped onto the PO fields.
Change select *
to an explicit list of columns.
2. Incorrect parameterType:
the parameter should be
java.util.Map
or the alias
map
writing Map directly is not recognized.
---------------------------------------------------------------------------------
The project references the local COMMON project, where the code changes are made.
After a reboot Eclipse re-initialized, and the referenced common project turned back into a jar dependency, so the changes stopped taking effect:
the local common project was no longer referenced; the shared jar was picked up instead.
Fix: build path --> add project --> add the COMMON project to the current project.
When debugging, Eclipse asks which source to step into (.class vs .java: one from the added project, one from the referenced jar; pick the .java one).
Debugging then works normally.
---------------------------------------------------------------------------------
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'XXXXX' is defined
Republish the project in Tomcat: clean or publish.
---------------------------------------------------------------------------------
When merging code in SVN with multiple developers, merge at project granularity: a class may have been modified by several people, and skipping one revision in the merge causes conflicts.
---------------------------------------------------------------------------------
File download: path handling differs between Windows and Linux; configure the actual path on the target machine.
File textf = new File("/data/j2ee/jr/excelModel", "loan_user_open_account_batch.xlsx");
---------------------------------------------------------------------------------
Sharing from the app through WeChat: on a re-share, the text and image no longer show.
http://blog.csdn.net/yangzhen06061079/article/details/53436463
The image must be larger than 300 x 300.
---------------------------------------------------------------------------------
Implementing QR-code scan login
---------------------------------------------------------------------------------
A dependency cannot be resolved in code: delete the jar from the local Maven repository,
select the project, right-click -> Maven -> Update Project -> Force Update,
then run the Maven update again.
---------------------------------------------------------------------------------
Submitting a build for testing:
When new config files are involved, send them to ops and tell them the paths where they should be added.
Configure URL = the IP address of a given machine;
which environment is deployed depends on the day's release plan, so pick that environment's machines.
Ops adjusts the machines in the other environments, with restrictions so that locally committed code cannot affect the uploaded config files.
----------------------------------------------------------------------------------
mapper:
count queries use resultType = int; if it is Integer, the null case must be handled and defaulted to 0,
i.e. the query result needs an IFNULL guard.
Fix:
declaring the return value as int avoids the problem above.
----------------------------------------------------------------------------------
Removed parameter logging: the project already has AOP aspect-based logging of requests.
Checked the interface: @Valid is unnecessary; adding it introduces an extra parameter-validation pass and costs time.
----------------------------------------------------------------------------------
mapper.xml column-to-PO mapping: property names are matched case-insensitively.
The query failed with a "mapping not found" error:
the PO had both a field Byte isUseable and a constant ISUseable, so the SQL's is_useable as isUseable could not be resolved to a unique property.
-----------------------------------------------------------------------------------
js/css references carry a version number
Map values are Strings:
JSON.toJSONString(Integer.parseInt(one_count) + 1)
map.getValue
String str = (String) map.getValue
Exception in thread "main" java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
Arrays.asList
List.toArray
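The ClassCastException above comes from casting an Integer value out of a Map<String, Object>; String.valueOf is the safe route (the key name follows the note):

```java
import java.util.HashMap;
import java.util.Map;

public class MapValueCastDemo {
    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<>();
        map.put("one_count", Integer.parseInt("1") + 1); // the value is an Integer, not a String
        // String s = (String) map.get("one_count");     // would throw ClassCastException
        String s = String.valueOf(map.get("one_count")); // safe conversion
        System.out.println(s);
    }
}
```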
-----------------------------------------
--------------------------------------------
// Usage: the annotation is placed on a method
@JrdDbReadonly
// Definition
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface JrdDbReadonly {
    String value() default "readonly";
}
// Where the annotation is woven in
DataSourceAspect
// Sets the master/replica flag
DbContextHolder
DbSelectFilter
Writes go to the master, reads to the replica;
master/replica synchronization is handled by the database itself.
When is it introduced? How does the annotation take effect?
It was applied twice:
once on the WS layer,
and again on the Service layer it calls.
First entry: switch to the replica.
Second entry: switch to the replica.
Second exit: switch back to the master.
From then on the outer call reads from the master.
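DbContextHolder is presumably a ThreadLocal flag that the aspect flips around annotated methods. A minimal stand-alone sketch (the class name follows the notes; the implementation is guessed):

```java
// Thread-local master/replica flag, usable as the routing key
// for an AbstractRoutingDataSource-style setup.
public class DbContextHolder {
    public static final String MASTER = "master";
    public static final String READONLY = "readonly";

    private static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> MASTER);

    public static void set(String dbType) { CONTEXT.set(dbType); }
    public static String get() { return CONTEXT.get(); }
    public static void clear() { CONTEXT.remove(); }

    public static void main(String[] args) {
        System.out.println(get());     // master by default
        set(READONLY);                 // what the aspect would do entering @JrdDbReadonly
        System.out.println(get());
        clear();                       // what the aspect would do on exit
        System.out.println(get());
    }
}
```

Restoring the previous value on exit, instead of resetting straight to master, would avoid the nested double-annotation problem described above.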
-------------------------------
Switching between read and write databases
Transaction control
-------------------------------
https://www.junrongdai.com/invest/index/138780
The loan detail page does not show its image.
--------------------------------------------------
Remote data access
Remote connection address:
182.92.5.40
JRD-2017-kettle
--------------------------------------------------
GenericObjectPool borrowObject
java.util.NoSuchElementException: Could not create a validated object, cause: ValidateObject failed
Apache commons-pool pooling
What @Aspect does
@Pointcut over all methods of a class
java -version does not work:
put JAVA_HOME first on the PATH,
and do not configure the JRE directory
Eclipse will not open:
JDK and Eclipse bitness mismatch (64-bit vs 32-bit);
check the JDK version and the Eclipse version.
------------------------------------------------
Data passed to a JSP has characters like . ; < > escaped into HTML entities:
run it through StringEscapeUtils.unescapeHtml4 before passing it on.
explain
exists
MySQL statement optimization:
1. explain
2. trident
Queried data's fields do not match local results:
1. Check for caching:
if there is a cache, check its TTL and clear it.
2. Check the code is the same; look at the test environment's code,
the mapper.xml result mapping,
and whether the dto/po mapping class is missing a property.
Startup error:
class not found
org.springframework.http.converter.json.MappingJacksonHttpMessageConverter
Spring 4 removed this class (it became MappingJackson2HttpMessageConverter) --> dropped back to Spring 3
http://blog.csdn.net/you23hai45/article/details/50513164
Eclipse SVN code-sync errors:
select the project, Team --> Refresh / Cleanup
<s:if test='#request.operateResult == "1" '>
Comparing the passed string failed; changed it to match the 1 inside double quotes.
JSP function
<s:property value="">
attempted to return null from a method with a primitive return type (double)
The SQL in the mapper:
select sum(invest) from db
invest has no matching rows, so SUM returns null into the primitive return type.
Fix:
select IFNULL(sum(invest),0) FROM DB
No matching bean of type dao found for depende
Usually: the impl class for the Service interface is missing the @Service annotation.
Today's case: abstract had been added in front of the impl class, causing the same error.
Probably a method was written without an implementation, the IDE suggested making the class abstract, and the quick-fix was accepted.
client error
publish redeploys the project so Tomcat recompiles it.
publish: deploys your web app onto the Tomcat server so it can be reached from a browser.
clean: first removes what was previously compiled onto the Tomcat server, then recompiles it.
Code changes only take effect after a clean.
Tomcat startup java.lang.ClassNotFoundException:
http://blog.csdn.net/lissic_blog/article/details/52125633
http://www.cnblogs.com/zhangcybb/p/4516327.html
Failed to start component [StandardServer[8005]]
right-click --> publish
Error creating bean with name 'userOperationListener': Injection of autowire...
the interface's implementation class was missing the @Service annotation.
--------------------------------------------
Basic tutorial:
http://blog.csdn.net/chwshuang/article/details/50580718
Queue configuration:
Producer:
core spring-rabbit.xml
calls the queue interface to put data onto the queue.
Consumer:
core_batch spring-rabbit.xml
calls the interface method to take data off the queue and process it.
exchange
http://terry0501.iteye.com/blog/2329580
--------------------------------------------
--------------------------------------------
Reflection
--------------------------------------------
StringUtils.trimToEmpty(val); // null-safe: returns EMPTY for null, otherwise trims leading/trailing whitespace
String userId = StringUtils.EMPTY; // the empty string
ConvertFactory cf = new DozerConvertFactory(); // bean-to-bean conversion
User user1 = cf.convert(user, User.class);
--------------------------------------------
15842 [main] ERROR o.a.s.s.o.a.z.s.NIOServerCnxnFactory - Thread Thread[main,5,main] died
java.lang.IllegalArgumentException: Fields for default already set
at org.apache.storm.topology.OutputFieldsGetter.declareStream(OutputFieldsGetter.java:43) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:34) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:30) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.bolt.ActivitySingleCumulateQuotaStatisticsBolt.declareOutputFields(ActivitySingleCumulateQuotaStatisticsBolt.java:76) ~[classes/:?]
at org.apache.storm.topology.TopologyBuilder.getComponentCommon(TopologyBuilder.java:431) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.TopologyBuilder.createTopology(TopologyBuilder.java:119) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.topology.ActivitySingleCumulateInvestmentTopology.main(ActivitySingleCumulateInvestmentTopology.java:53) ~[classes/:?]
Can declareOutputFields not set Fields more than once (for the same stream)?
{ActivitySingleCumulateQuotaStatisticsBolt=com.jrd.dams.bolt.ActivitySingleCumulateQuotaStatisticsBolt@2d3c7941,
ActivityInvestAmountBolt=com.jrd.dams.bolt.ActivityInvestAmountBolt@5d3cb19a,
ActivitySingleCumulateInvestRecordBolt=com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt@3bb02548}
java.lang.IllegalArgumentException: Fields for default already set
at org.apache.storm.topology.OutputFieldsGetter.declareStream(OutputFieldsGetter.java:43) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:34) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:30) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.bolt.ActivitySingleCumulateQuotaStatisticsBolt.declareOutputFields(ActivitySingleCumulateQuotaStatisticsBolt.java:81) ~[classes/:?]
at org.apache.storm.topology.TopologyBuilder.getComponentCommon(TopologyBuilder.java:431) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.TopologyBuilder.createTopology(TopologyBuilder.java:119) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.topology.ActivitySingleCumulateInvestmentTopology.main(ActivitySingleCumulateInvestmentTopology.java:53) ~[classes/:?]
Fix:
1.
Declaring the default stream twice is exactly what throws "Fields for default already set".
To emit two kinds of tuples, declare (and emit on) two named streams:
this.collector.emit("singleInvestQuotaStatistics", new Values(...));
this.collector.emit("cumulateInvestQuotaStatistics", new Values(...));
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declareStream("singleInvestQuotaStatistics", new Fields("singleInvestQuotaStatistics"));
    declarer.declareStream("cumulateInvestQuotaStatistics", new Fields("cumulateInvestQuotaStatistics"));
}
2.
A bolt can distribute tuples to multiple streams with emit(streamId, tuple), where streamId is a string identifying the stream; in the TopologyBuilder you then decide which stream a downstream component subscribes to.
Change: this.collector.emit("streamId", values) -- the next bolt receives the data from the stream it subscribed to and reads fields with tuple.getValueByField(...).
Declaring multiple Fields by calling declare twice is incorrect; a single declare can carry several field names: declarer.declare(new Fields("cumulateInvestQuotaStatistics", "singleInvestQuotaStatistics"));
===============================================================================================================================================
28815 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.AutopromoDataSource - hivegetConnection fail!
java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2294) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2039) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1533) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at com.jrd.dams.dao.AutopromoDataSource.getConnection(AutopromoDataSource.java:57) [classes/:?]
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultLastCreateTime(EventMatchResultDao.java:191) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectEventMatchResultLastCreateTime(ActivitySingleCumulateInvestmentService.java:80) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:81) [classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:357) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2482) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2519) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2304) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47) ~[mysql-connector-java-5.1.26.jar:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:346) ~[mysql-connector-java-5.1.26.jar:?]
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:39) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:256) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2304) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2290) ~[commons-dbcp2-2.1.1.jar:2.1.1]
... 10 more
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method) ~[?:1.7.0_79]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79) ~[?:1.7.0_79]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) ~[?:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) ~[?:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) ~[?:1.7.0_79]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[?:1.7.0_79]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.7.0_79]
at java.net.Socket.connect(Socket.java:579) ~[?:1.7.0_79]
at java.net.Socket.connect(Socket.java:528) ~[?:1.7.0_79]
at java.net.Socket.<init>(Socket.java:425) ~[?:1.7.0_79]
at java.net.Socket.<init>(Socket.java:241) ~[?:1.7.0_79]
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:307) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2482) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2519) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2304) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47) ~[mysql-connector-java-5.1.26.jar:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:346) ~[mysql-connector-java-5.1.26.jar:?]
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:39) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:256) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2304) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2290) ~[commons-dbcp2-2.1.1.jar:2.1.1]
... 10 more
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.AutopromoDataSource - Setting up data source.
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.AutopromoDataSource - Done.
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.EventMatchResultDao - query for the largest create time in event_match_result -- query exception
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.EventMatchResultDao - null
java.lang.NullPointerException
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultLastCreateTime(EventMatchResultDao.java:192) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectEventMatchResultLastCreateTime(ActivitySingleCumulateInvestmentService.java:80) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:81) [classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
28818 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Async loop died!
java.lang.NullPointerException
at java.util.Calendar.setTime(Calendar.java:1106) ~[?:1.7.0_79]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:84) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
28824 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.d.executor -
java.lang.NullPointerException
at java.util.Calendar.setTime(Calendar.java:1106) ~[?:1.7.0_79]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:84) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
28853 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Fix:
1. The URL used to obtain the JDBC Connection was incorrect.
2. The SQL statement contained a ";".
======================================================================================================================================
7558 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Async loop died!
java.lang.ClassCastException: java.util.Date cannot be cast to java.sql.Date
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultByCreateTime(EventMatchResultDao.java:81) ~[classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentService.java:54) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentSpout.java:99) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:88) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
7558 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.d.executor -
java.lang.ClassCastException: java.util.Date cannot be cast to java.sql.Date
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultByCreateTime(EventMatchResultDao.java:81) ~[classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentService.java:54) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentSpout.java:99) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:88) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
7574 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
1. Fix:
preparedStatement = connection.prepareStatement(SEL_EVENT_MATCH_RESULT_BY_TIME_MATCH_RESULT);
preparedStatement.setByte(1, matchResult);
preparedStatement.setString(2, actionType);
preparedStatement.setDate(3, (java.sql.Date) startDateTime);
preparedStatement.setDate(4, (java.sql.Date) endDateTime);
In the prepared statement, setDate takes a java.sql.Date; a java.util.Date instance is not a java.sql.Date (it is the supertype), so the downcast fails at runtime.
java.sql.Date startDateTimeSQL = new java.sql.Date(startDateTime.getTime());
Bridge the two types by wrapping the millisecond value instead of casting.
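The bridge conversion in one small runnable sketch (variable names mirror the snippet above):

```java
public class SqlDateBridgeDemo {
    public static void main(String[] args) {
        java.util.Date startDateTime = new java.util.Date(); // what the service layer hands over
        // new java.sql.Date(millis) bridges the types; a direct cast throws ClassCastException
        java.sql.Date forSetDate = new java.sql.Date(startDateTime.getTime());
        java.sql.Timestamp forSetTimestamp = new java.sql.Timestamp(startDateTime.getTime());
        System.out.println(forSetDate.getTime() == startDateTime.getTime());      // true
        System.out.println(forSetTimestamp.getTime() == startDateTime.getTime()); // true
    }
}
```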
======================================================================================================================================
258773 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.EventMatchResultDao - select event_match_result params is actionType = invest
258853 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - cumulateInvestResultList is null , startDateTime = 2017-01-02T23:55:00.000+0800 , endDateTime = 2017-01-03T00:00:00.000+0800
258853 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 单笔投资或累计投资的最后一条数据的创建时间为空
258854 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 查询非累投数据,查询条件,matchResult=1,actionType=invest,startDateTime=2017-01-02T23:55:00.000+0800,endDateTime=2017-01-03T00:00:00.000+0800,accuInvestFlag=0
258854 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.EventMatchResultDao - select event_match_result params is matchResult = 1, actionType = invest
258938 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - singleInvestResultList is null , startDateTime = 2017-01-02T23:55:00.000+0800 , endDateTime = 2017-01-03T00:00:00.000+0800
258938 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 查询非累投数据,查询条件,actionType=invest,startDateTime=2017-01-02T23:55:00.000+0800,endDateTime=2017-01-03T00:00:00.000+0800
258938 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.EventMatchResultDao - select event_match_result params is actionType = invest
1. Resolution
The initial query fetched the max create time, but the returned value was a Date (date only), not a DateTime.
1) Date part only from the result set:
resultSet.getDate(); // 2013-01-07
2) Time part only:
resultSet.getTime(); // 22:08:09
3) Date and time together:
resultSet.getTimestamp(); // 2013-01-07 23:08:09
2. When binding parameters
prepareStatement.setDate(); // always binds a date-only value such as 2017-01-03; change it to setTimestamp
3.
public static List<EventMatchResult> dealSelectEventMatchResult(ResultSet resultSet) throws SQLException {
    List<EventMatchResult> eventMatchResultList = new Vector<EventMatchResult>(100);
    while (resultSet.next()) {
        EventMatchResult eventMatchResult = new EventMatchResult();
        eventMatchResult.setBizId(resultSet.getString(1));
        eventMatchResult.setUserId(resultSet.getString(2));
        eventMatchResult.setMainEventId(resultSet.getInt(3));
        eventMatchResult.setEventConditionId(resultSet.getInt(4));
        eventMatchResult.setMainEventName(resultSet.getString(5));
        eventMatchResult.setEventMatchResultId(resultSet.getInt(6));
        eventMatchResult.setAwardsHBids(resultSet.getString(7));
        eventMatchResult.setMatchResult(resultSet.getByte(8));
        eventMatchResult.setCreateTime(resultSet.getTimestamp(9)); // getDate(9) would drop the time part
        eventMatchResult.setAccInvestFlag(resultSet.getByte(10));
        eventMatchResult.setAccInvestAmount(resultSet.getBigDecimal(11));
        eventMatchResult.setCouponAmounts(resultSet.getBigDecimal(12));
        eventMatchResultList.add(eventMatchResult);
    }
    return eventMatchResultList;
}
After passing the resultSet into a helper method, resultSet.next() was always FALSE.
Workaround: change the method to receive the already-extracted results and assemble them there.
if (resultSet.next()) {
    resultSet.previous();
    while (resultSet.next()) {
        activeTypeList.add(resultSet.getString(1));
    }
}
Explanation:
resultSet.next() moves the cursor down one row. When next() is used as an emptiness check, the cursor must be moved back up with previous() before the while loop; otherwise the first row is lost.
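The getDate/getTimestamp difference above can be seen without a database: both types wrap epoch milliseconds, but java.sql.Date drops the time-of-day in its string form. A small illustration (class name is mine); the rendered values depend on the local timezone, so no exact output is shown.

```java
public class SqlDateDemo {
    // java.sql.Date.toString() renders only yyyy-MM-dd, which is exactly why a
    // DATETIME column read via getDate() appears to "lose" its time part.
    public static String asDateString(long epochMillis) {
        return new java.sql.Date(epochMillis).toString();
    }

    // java.sql.Timestamp keeps the full date-time (plus fractional seconds).
    public static String asTimestampString(long epochMillis) {
        return new java.sql.Timestamp(epochMillis).toString();
    }
}
```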
==========================================================================================================================================
26735 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.HBRelativeDataDao - null
java.sql.SQLFeatureNotSupportedException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at java.lang.Class.newInstance(Class.java:379) ~[?:1.7.0_79]
at com.mysql.jdbc.SQLError.notImplemented(SQLError.java:1334) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.JDBC4Connection.createArrayOf(JDBC4Connection.java:56) ~[mysql-connector-java-5.1.26.jar:?]
at org.apache.commons.dbcp2.DelegatingConnection.createArrayOf(DelegatingConnection.java:844) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.DelegatingConnection.createArrayOf(DelegatingConnection.java:844) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at com.jrd.dams.dao.HBRelativeDataDao.selectJRHBPkgDefine(HBRelativeDataDao.java:47) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectJRHBPkgDefine(ActivitySingleCumulateInvestmentService.java:145) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.dealEventMatchResultList(ActivitySingleCumulateInvestmentService.java:244) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentSpout.java:104) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:88) [classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
46057 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - selectJRHBPkgDefine select result is null , hbids = 231602173,231602298,231602331
46069 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 查询非累投数据,查询条件,actionType=invest,startDateTime=2017-01-03T12:39:45.000+0800,endDateTime=2017-01-03T12:44:45.000+0800
Passing a batch of IN (?) parameters wrapped in a java.sql.Array fails with SQLFeatureNotSupportedException:
the MySQL Connector/J driver does not implement Connection.createArrayOf, so this usage is not supported.
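A common workaround, since the driver has no Array support, is to generate one placeholder per value and bind each element individually. A sketch (the class name is mine):

```java
import java.util.Collections;

public class InClauseBuilder {
    // Build "?,?,?" for n values, e.g. for "... where hb_id in (?,?,?)".
    public static String placeholders(int n) {
        return String.join(",", Collections.nCopies(n, "?"));
    }
}
```

Then bind in a loop: for (int i = 0; i < values.size(); i++) preparedStatement.setObject(i + 1, values.get(i));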
4807 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15aa171b46e000e, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.0.1.jar:1.0.1]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
4807 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:52159 which had sessionid 0x15aa171b46e000e
116172 [Thread-15-ActivitySingleCumulateInvestRecordBolt-executor[2 2]] ERROR o.a.s.util - Async loop died!
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:452) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:418) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$fn__7953$fn__7966$fn__8019.invoke(executor.clj:847) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: java.lang.NullPointerException
at com.jrd.dams.dao.P2PTdInvestmentInvestDao.saveP2pTdInvestmentInvest(P2PTdInvestmentInvestDao.java:192) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealEventMatchResultData(ActivitySingleCumulateInvestRecordBolt.java:95) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestRecordBolt.java:68) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.execute(ActivitySingleCumulateInvestRecordBolt.java:57) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7953$tuple_action_fn__7955.invoke(executor.clj:728) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__7874.invoke(executor.clj:461) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$clojure_handler$reify__7390.onEvent(disruptor.clj:40) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:439) ~[storm-core-1.0.1.jar:1.0.1]
... 6 more
116172 [Thread-15-ActivitySingleCumulateInvestRecordBolt-executor[2 2]] ERROR o.a.s.d.executor -
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:452) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:418) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$fn__7953$fn__7966$fn__8019.invoke(executor.clj:847) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: java.lang.NullPointerException
at com.jrd.dams.dao.P2PTdInvestmentInvestDao.saveP2pTdInvestmentInvest(P2PTdInvestmentInvestDao.java:192) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealEventMatchResultData(ActivitySingleCumulateInvestRecordBolt.java:95) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestRecordBolt.java:68) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.execute(ActivitySingleCumulateInvestRecordBolt.java:57) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7953$tuple_action_fn__7955.invoke(executor.clj:728) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__7874.invoke(executor.clj:461) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$clojure_handler$reify__7390.onEvent(disruptor.clj:40) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:439) ~[storm-core-1.0.1.jar:1.0.1]
... 6 more
116207 [Thread-15-ActivitySingleCumulateInvestRecordBolt-executor[2 2]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
-----------------------------------------------------
304134 [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2] INFO k.c.SimpleConsumer - Reconnect due to error:
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:219) ~[?:1.7.0_79]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) ~[?:1.7.0_79]
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) ~[?:1.7.0_79]
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81) ~[kafka-clients-0.10.0.0.jar:?]
at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129) ~[kafka_2.11-0.10.0.0.jar:?]
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120) ~[kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:86) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:132) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132) [kafka_2.11-0.10.0.0.jar:?]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:131) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131) [kafka_2.11-0.10.0.0.jar:?]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:130) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29) [kafka_2.11-0.10.0.0.jar:?]
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107) [kafka_2.11-0.10.0.0.jar:?]
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98) [kafka_2.11-0.10.0.0.jar:?]
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63) [kafka_2.11-0.10.0.0.jar:?]
304135 [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2] INFO k.c.ConsumerFetcherThread - [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2], Stopped
304135 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ConsumerFetcherThread - [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2], Shutdown completed
304136 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ConsumerFetcherManager - [ConsumerFetcherManager-1489474729145] All connections stopped
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Cleared all relevant queues for this fetcher
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Cleared the data chunks in all the consumer message iterators
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Committing all offsets after clearing the fetcher queues
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Releasing partition ownership
304205 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.RangeAssignor - Consumer dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1 rebalancing the following partitions: ArrayBuffer(0, 1, 2) for topic pc-topic with consumers: List(dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0)
Cause: the Kafka client's broker address was configured incorrectly.
Caused by: java.lang.ClassNotFoundException: com.jrd.framework.log.LoggerFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[?:1.7.0_79]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_79]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_79]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_79]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_79]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[?:1.7.0_79]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_79]
... 14 more
Cause: the JAR containing com.jrd.framework.log.LoggerFactory was not on the classpath.
48767 [Thread-15-ActivityVisitPersonCountBolt-executor[2 2]] ERROR c.j.f.c.CacheClient - Failure! Set failed! key:jrd_re_71,value:0,expertime:-144425
redis.clients.jedis.exceptions.JedisDataException: ERR invalid expire time in setex
at redis.clients.jedis.Protocol.processError(Protocol.java:117) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Protocol.process(Protocol.java:151) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Protocol.read(Protocol.java:205) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:297) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:196) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Jedis.setex(Jedis.java:387) ~[jedis-2.8.1.jar:?]
at com.jrd.framework.cache.ShardRedisClient.setKV(ShardRedisClient.java:185) ~[jr-cache-1.2-SNAPSHOT.jar:?]
at com.jrd.framework.cache.CacheClient.set(CacheClient.java:49) [jr-cache-1.2-SNAPSHOT.jar:?]
at com.jrd.framework.cache.CacheUtils.set(CacheUtils.java:650) [jr-cache-1.2-SNAPSHOT.jar:?]
at com.jrd.dams.bolt.ActivityVisitPersonCountBolt.dealActivityVisitPersonCount(ActivityVisitPersonCountBolt.java:155) [classes/:?]
at com.jrd.dams.bolt.ActivityVisitPersonCountBolt.dealActivityVisitPersonCount(ActivityVisitPersonCountBolt.java:133) [classes/:?]
at com.jrd.dams.bolt.ActivityVisitPersonCountBolt.execute(ActivityVisitPersonCountBolt.java:104) [classes/:?]
at org.apache.storm.daemon.executor$fn__7953$tuple_action_fn__7955.invoke(executor.clj:728) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__7874.invoke(executor.clj:461) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$clojure_handler$reify__7390.onEvent(disruptor.clj:40) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:439) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:418) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$fn__7953$fn__7966$fn__8019.invoke(executor.clj:847) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Cause: an invalid (negative) expire time was passed to SETEX; Redis rejects non-positive expire values.
StringBuffer sb = new StringBuffer(activityId); // activityId is an Integer
sb.append("-").append(userId);
After toString(), activityId did not appear in the output.
Fix: convert activityId to a String first.
Cause: overloaded constructors. StringBuffer(int) sets the initial capacity, while StringBuffer(String) sets the initial content:
public StringBuffer(int capacity) {
    super(capacity);
}
public StringBuffer(String str) {
    super(str.length() + 16);
    append(str);
}
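The pitfall in miniature (the class and the example values are mine, for illustration):

```java
public class StringBufferPitfall {
    // new StringBuffer(int) selects the capacity constructor: the number
    // becomes a buffer size, not content, so it vanishes from the result.
    public static String wrongKey(int activityId, String userId) {
        return new StringBuffer(activityId).append("-").append(userId).toString();
    }

    // Converting to String first selects the content constructor.
    public static String rightKey(int activityId, String userId) {
        return new StringBuffer(String.valueOf(activityId)).append("-").append(userId).toString();
    }
}
```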
/**
 * Put an entry into the Google Guava cache.
 * @param key
 * @param value
 */
public void putGoogleGuavaCache(String key, Object value) {
    GoogleGuavaCacheUtil.invalidTime = GOOGLE_GUAVA_INVALID_TIME_NUM_DEFAULT;
    GoogleGuavaCacheUtil.timeUnit = GOOGLE_GUAVA_INVALID_TIME_TIME_UNIT_DEFAULT;
    // As written this only sets the default expiry; the actual put into the cache is missing here.
}
Reading data from Kafka failed:
private transient KafkaStream<byte[], byte[]> kafkaStream;
The cause was the transient keyword: transient fields are excluded from serialization, and the Kafka data transfer relies on serialization (my understanding for now).
Removing the keyword made the program work.
Data written to Kafka is persisted to disk (which requires serialization on write).
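What transient does to a field during Java serialization, in isolation (the class names are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    static class Holder implements Serializable {
        String kept = "kept";
        transient String dropped = "dropped"; // skipped by serialization
    }

    // Round-trip an object through Java serialization and return the copy.
    public static Holder roundTrip(Holder in) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(in);
        oos.flush();
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (Holder) ois.readObject();
    }
}
```

After the round trip, kept survives but dropped comes back null, which is why a transient field initialized on the submitting side is gone once the object reaches the other side.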
Kafka
topic vs. group
Two consumption modes: queue and publish/subscribe.
Queue: each message is consumed by a single consumer.
Publish/subscribe: messages are broadcast; each consumer maintains its own read position.
Messages are retained for the configured retention time; once it expires they are deleted whether or not they were read.
Consumers in the same group form a queue; consumers in different groups act as subscribers.
Spout
When multiple spouts send data to the same bolt,
each spout had to declare its own set of field names,
and the bolt had to receive by each different field name.
Optimization:
add stream names and declare the same field name in every spout.
Two changes:
1. In the spout:
declarer.declare(new Fields("name"));
becomes
declarer.declareStream(ACTIVITY_VISIT_PERSON_COUNT_WEB_STREAM_ID, new Fields("userId"));
2. In the topology, where the bolt subscribes:
builder.setBolt("ActivitySingleCumulateInvestRecordBolt", new ActivitySingleCumulateInvestRecordBolt(), parallelismSpout).shuffleGrouping("ActivitySingleCumulateInvestmentSpout");
becomes
builder.setBolt("ActivityInvestAmountBolt", new ActivityInvestAmountBolt(), parallelismSpout)
    .shuffleGrouping("ActivitySingleCumulateInvestRecordBolt", ActivitySingleCumulateInvestRecordBolt.MAIN_EVENT_ID_STREAM)
    .shuffleGrouping("ActiveDataStatisticsStoringBDBolt", ActiveDataStatisticsStoringBDBolt.MAIN_EVENT_ID_STREAM)
    .shuffleGrouping("ActiveDataStatisticsStoringHbBolt", ActiveDataStatisticsStoringHbBolt.MAIN_EVENT_ID_STREAM);
* 1. Receive the Android and iOS data from Kafka
* topic: the data label
* group: different from AppDataResolvingAndStoringSpout's group
* Different groups → publish/subscribe mode.
* Same group → queue mode with a single consumer; this spout would then compete with the
* AppDataResolvingAndStoringSpout consumer, and reads could come back empty.
* 2. Emit data
* streamId: ActivityVisitPersonCountAppSpout-StreamId
* fields: userId
* Note:
* Field names must not repeat; but so the bolt can conveniently receive by field name, both spouts
* declare the same field name and are distinguished by their different streamIds.
----------------------------------------------------------------
Before restarting, kill the topology in the Storm UI and wait for it to finish.
Package with Maven: right-click → Maven build (clean, then package).
cd to where the jar lives: /data/storm/jar
Upload the packaged file with rz.
Rename the uploaded file to storm.jar:
mv storm... storm.jar
Launch command:
/usr/local/apache-storm-1.0.1/bin/storm jar /data/storm/jar/storm.jar com.jrd.dams.topology.ActivityEffectAnalysisTopology ActivityEffectAnalysisTopologyTest
----------------------------------------------------------------
https://www.junrongdai.com/invest/index/111497
index/(\d)+
https://m.junrongdai.com/#path=views/project/details/?pid=106548
(\?pid=)?(\d)+
(index/|\?pid=)(\d)+  -- a single alternation covers both URL forms; a character class [...] would not work here
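A quick check that one pattern covers both URL shapes above (the class name is mine):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProjectIdExtractor {
    // One alternation handles both ".../invest/index/111497"
    // and ".../details/?pid=106548".
    private static final Pattern ID = Pattern.compile("(?:index/|\\?pid=)(\\d+)");

    public static String extractId(String url) {
        Matcher m = ID.matcher(url);
        return m.find() ? m.group(1) : null;
    }
}
```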
Caching only relieves read pressure; always handle the cache-miss case by querying the table.
Reduce the number of queries: move the query call to the places that actually need it.
Use try/catch even when the code does not explicitly throw, so an unexpected exception cannot terminate the program.
----------------------------------------------------------------
more +/str work.log
nf       scroll down n screens (n is a number).
nb       scroll up n screens.
/pattern   search forward for the given string pattern.
n        repeat the previous search.
q        quit.
Space    next screen.
ENTER    next line.
To inspect exceptions:
more +/ERROR work.log
jumps to the first occurrence of an exception;
press n to jump to each subsequent one.
1. Locating problems in logs
The usual tail -f -n 100 xxx.log only shows the tail of the log, which is inconvenient for finding a specific problem,
e.g. during routine inspection you only care whether any ERROR lines appeared; or you already know something broke and want the log around a certain time instead of reading everything.
To view exceptions, or to position the log at a given string:
more +/ERROR xxx.log
ENTER    next line
Space    next screen
nf       scroll down n screens (n is a number)
nb       scroll up n screens
n        repeat the previous search, i.e. jump to the next ERROR
q        quit more
To view the log around 15:30:
more +/15:30 xxx.log
This may also match minutes or seconds; press n to move to the next match.
2. Editing
vim
i        enter insert mode
ESC      leave insert mode
:        enter command-line mode
wq       save and quit
-------------------------------------------------------------
Data access
JDBC
DbUtils
MyBatis
What are the advantages of each, and how do they differ?
----------------------------------------------------------
"Whoever you meet is someone who was meant to appear in your life; it is never an accident, and they will surely teach you something."
So I also believe: "Wherever I go is where I was meant to go, experiencing what I was meant to experience, meeting whom I was meant to meet."
Yesterday is a scene: seen, then blurred.
Time is a passer-by: remembered, then forgotten. Life is a funnel: gained, then lost. There is nothing unfair in the world, only discontented hearts. Do not resent, do not hate; take it all lightly, and the past fades like smoke.
Life is a gust of wind: it rises, then it is gone.
An ideal is a lamp: lit, then out. Affection is a shower of rain: it falls, then dries. Friends are a layer of cloud: they gather, then scatter. Idle sorrow is a pot of wine: drunk, then sober.
Loneliness is a star: it flashes, then fades. Solitude is a moon: it rises, then sets. Death is a dream: tired, then asleep.
------------------------------------------
I. Excel file upload
0.0 Implementation
Reference: http://blog.csdn.net/u013871100/article/details/52901996
1.0 Problem
During testing, an Excel file saved by Office 2007 was selected for upload.
1.1 Exception
Exception in thread "main" org.apache.poi.poifs.filesystem.OfficeXmlFileException: The supplied data appears to be in the Office 2007+ XML. You are calling the part of POI that deals with OLE2 Office Documents. You need to call a different part of POI to process this data (eg XSSF instead of HSSF)
at org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:128)
at org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:112)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:302)
at org.apache.poi.poifs.filesystem.POIFSFileSystem.<init>(POIFSFileSystem.java:86)
1.2 Explanation
The code used HSSFSheet to obtain the sheet, and the HSSF classes to process rows and cells,
but the HSSF classes only support the pre-2007 binary format (.xls), not the 2007+ XML format.
1.3 Resolution
Reference: http://blog.csdn.net/mmm333zzz/article/details/7962377
The HSSF classes can only process files below the 2007 version (.xls);
the XSSF classes process the 2007+ format (.xlsx).
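A tiny dispatch rule for picking the right POI family by extension. This is a sketch of the rule only, not the project's code; the class name is mine, and it merely reports which workbook class applies.

```java
public class ExcelFormatCheck {
    // .xls  → HSSF (OLE2 binary, pre-2007)
    // .xlsx → XSSF (Office 2007+ XML)
    public static String workbookClassFor(String fileName) {
        String lower = fileName.toLowerCase();
        if (lower.endsWith(".xlsx")) {
            return "XSSFWorkbook";
        }
        if (lower.endsWith(".xls")) {
            return "HSSFWorkbook";
        }
        throw new IllegalArgumentException("Not an Excel file: " + fileName);
    }
}
```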
2.0 Problem
During testing, the number of rows read did not match the expected total, i.e. data was lost.
2.1 Resolution
Reference: http://blog.csdn.net/u013871100/article/details/52901996
row and cell indexes start at 0.
3.0 Problem
The uploaded file contains mobile phone numbers; format validation kept failing although the regex was correct. Stepping through the code showed the received value had been turned into scientific notation.
3.1 Resolution
Reference: http://blog.csdn.net/cclovett/article/details/16343615
Convert the value with BigDecimal.
Reference: http://jingyan.baidu.com/article/0964eca27a39808285f5363c.html
Alternatively, set the phone-number cells in the template to Number → Text format.
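The BigDecimal conversion in one line (the class name is mine): POI hands back numeric cells as double, an 11-digit phone number prints in scientific notation, and toPlainString() restores the plain digits.

```java
import java.math.BigDecimal;

public class PhoneCellFix {
    // e.g. the double 1.3812345678E10 becomes the text "13812345678"
    public static String numericCellToText(double cellValue) {
        return new BigDecimal(cellValue).toPlainString();
    }
}
```

For non-integral doubles, BigDecimal(double) exposes binary rounding noise; BigDecimal.valueOf(cellValue) is the safer variant in that case.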
4.0 Problem
Exception while reading a cell's text content.
4.1 Exception
java.lang.IllegalStateException: Cannot get a text value from a numeric cell
at org.apache.poi.xssf.usermodel.XSSFCell.typeMismatch(XSSFCell.java:994) ~[poi-ooxml-3.14.jar:3.14]
at org.apache.poi.xssf.usermodel.XSSFCell.getRichStringCellValue(XSSFCell.java:399) ~[poi-ooxml-3.14.jar:3.14]
at org.apache.poi.xssf.usermodel.XSSFCell.getStringCellValue(XSSFCell.java:351) ~[poi-ooxml-3.14.jar:3.14]
4.2 Explanation
The code tried to read a numeric cell as a String.
4.3 Resolution
Reference: http://blog.csdn.net/ysughw/article/details/9288307
Convert the cell format before reading.
5.0 Problem
Exception when taking a Map's values collection and using it as a query parameter.
5.1 Exception
java.util.HashMap$Values cannot be cast to java.util.Set
5.2 Resolution
Iterate over the Map and assemble the values into a suitable collection.
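The cast fails because Map.values() returns a Collection view, not a Set. Copying the view into a concrete collection avoids the ClassCastException (the class name is mine):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MapValuesFix {
    // HashMap$Values implements Collection but not Set, so (Set) map.values()
    // throws ClassCastException. Copy the view instead.
    public static List<String> valueList(Map<String, String> map) {
        return new ArrayList<String>(map.values());
    }

    public static boolean valuesIsSet(Map<String, String> map) {
        return map.values() instanceof Set;
    }
}
```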
II. File download
0.0 Implementation
References:
http://meigesir.iteye.com/blog/1539358
http://www.cnblogs.com/ungshow/archive/2009/01/12/1374491.html
1.0 Problem
The request path could not be mapped.
1.1 Code
@RequestMapping("excelModelDown")
public void excelModelDown(
    // @RequestParam(defaultValue="",required=false)
    // after commenting out this annotation, the request mapped normally
    HttpServletResponse response
) {
    return;
}
// Redefining the response inside the implementation still did not solve it;
// removing the @RequestParam annotation shown above is the fix.
2.0 Problem
Files placed under src/main/resources could not be found at runtime.
2.1 Resolution
Files under resources are packaged into the classes directory, so read them from the classpath:
this.getClass().getClassLoader().getResource("");
returns the classpath root (this refers to the current class); use it to locate files from src/main/resources.
3.0 Problem
The file under src has an English name, and the download should carry a Chinese name,
but after renaming, the downloaded file arrived as an unknown file type.
Note:
response.addHeader("Content-Disposition", "attachment;filename=" + new String(EXCEL_MODEL_CHINESE_NAME.getBytes("gb2312"), "ISO8859-1") + "." + ext.toLowerCase());
If the file name is unchanged, setting "fileName=" + new String(currentFileName.getBytes()) is enough;
if the name changes, give the fully qualified name: file name + "." + extension.
Garbled download file names:
http://lj830723.iteye.com/blog/1415479
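The gb2312/ISO-8859-1 trick above works because ISO-8859-1 maps every byte to a character one-to-one, so the header layer can carry the GBK bytes unharmed. A round-trip sketch (the class name is mine):

```java
import java.io.UnsupportedEncodingException;

public class HeaderNameEncode {
    // Re-read the gb2312 bytes as ISO-8859-1 so the HTTP header layer,
    // which assumes ISO-8859-1, transports them byte-for-byte.
    public static String encodeForHeader(String name) throws UnsupportedEncodingException {
        return new String(name.getBytes("gb2312"), "ISO-8859-1");
    }

    // The receiving side reverses the mapping to recover the name.
    public static String decodeFromHeader(String headerValue) throws UnsupportedEncodingException {
        return new String(headerValue.getBytes("ISO-8859-1"), "gb2312");
    }
}
```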
----------------------------------------------------
Advance one pawn a day; no effort is ever wasted.
Walk to where the water ends, then sit and watch the clouds rise.
-----------------------------------------------------
http://172.16.204.118:8080/jr-dams-admin/quotaDefine/operationQuotaExcelModelDown
http://172.16.204.118:8080/jr-dams-admin/quotaDefine/operationQuotaExcelUpload
{"resultCode":10009,"resultMsg":"上传Excel文件失败 文件扩展名不正确,请确认文件是Excel类型文件,扩展名为.xlsx","data":null,"success":false}
Exception message format: when there is an exception, show a prompt.
1. Test the import failure cases
Code refactoring and formatting
form submission
<bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- encoding; the default is ISO-8859-1 -->
    <property name="defaultEncoding" value="utf-8"></property>
    <!-- maximum upload size in bytes -->
    <property name="maxUploadSize" value="10485760000"></property>
    <!-- maximum bytes held in memory before writing to disk -->
    <property name="maxInMemorySize" value="40960"></property>
</bean>
@RequestParam(value="file") MultipartFile file
2. Log-related commands
3. Install a virtual machine
4. Linux basics
III. Upload
Using MultipartFile
http://blog.csdn.net/swingpyzf/article/details/20230865
Caused by: java.lang.IllegalArgumentException: Expected MultipartHttpServletRequest: is a MultipartResolver configured?
at org.springframework.util.Assert.notNull(Assert.java:112) ~[spring-core-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.resolveName(RequestParamMethodArgumentResolver.java:168) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.annotation.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:88) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:78) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:162) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:129) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:775) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:705) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:965) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
... 33 common frames omitted
Expected MultipartHttpServletRequest: is a MultipartResolver configured?
MultipartFile receives an empty upload result.
Reference: http://blog.csdn.net/jiangyu1013/article/details/60758582
<bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- request encoding -->
    <property name="defaultEncoding" value="utf-8"></property>
    <!-- maximum upload size (bytes) -->
    <property name="maxUploadSize" value="50000000"></property>
    <!-- in-memory buffer size (bytes) -->
    <property name="maxInMemorySize" value="1024"></property>
</bean>
<context:component-scan base-package="com.jrd.dams.admin">
    <context:exclude-filter type="regex"
        expression="com.jrd.dams.admin.controller.*" />
</context:component-scan>
With the resolver bean placed next to this scan element, startup failed with a database connection error:
appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop
After moving it to another location, everything worked.
http://www.cnblogs.com/songyunxinQQ529616136/p/6646070.html
The receiving parameter name must match the name of the upload field in the HTML form.
// includes the file name and extension
String fileName = excelFile.getOriginalFilename();
// returns the form field name, not the file name -- different from File.getName()
String fieldName = excelFile.getName();
wb = new XSSFWorkbook(excelFile.toString()); // wrong: toString() is not a file path
Loading a MultipartFile into XSSFWorkbook this way throws an exception;
http://jingyan.baidu.com/article/11c17a2c073e12f446e39d38.html
use the stream instead:
wb = new XSSFWorkbook(excelFile.getInputStream());
------------------------------------------------------------------
zeromq
---------------------------------
java.lang.ClassNotFoundException: org.springframework.web.context.ContextLoaderListener
When Eclipse deployed the project, the Spring jars were not included.
Open the webapp directory under the Tomcat installation and check whether the project's WEB-INF contains a lib folder;
if it does not, the JARs cannot be found when the application loads.
Solution: http://blog.csdn.net/tfy1332/article/details/46047473
----------------------------------
httpClient request returns 400
Encode the parameters with URLEncoder: URLEncoder.encode(value, "UTF-8");
----------------------------------
Connection pool exception
http://blog.csdn.net/wo8553456/article/details/40396401
--------------------------------
1. Query from a replica to take load off the primary.
2. Cache data that does not need to be real-time.
Fixing the Excel import exception "Cannot get a text value from a numeric cell":
http://blog.csdn.net/ysughw/article/details/9288307
Double-checked locking
ABA problem: CAS plus a version number
Optimistic locking in MySQL
zookeeper applications
Differences among finally, return, and exit(0)
Can an int value in Java be received as a Byte?
MyBatis XML:
always null-check the parameter of a foreach,
otherwise in () raises a syntax error.
ps aux | grep 'jrcore' | grep nginx
jps (requires root)
jps lists the corresponding process ids
What is a front-end/back-end separated project? How is it implemented?
What is QPS? How high does it get per day, and what is the daily transaction volume?
How do you guarantee data consistency for updates in a distributed environment?
A: same as optimistic locking: record a version. But how is the recorded version read back, and how do you guarantee the version read in a distributed environment is correct?
A: ---
The Excel import/export feature was written into the core layer (the service) and accessed over HTTP. The common POI sample code found online takes the RESPONSE as a parameter, and passing that parameter down to the lower layer failed:
Could not write content: Infinite recursion (*Error) (through reference chain:
The map held the response and request objects, and passing the map triggered this JSON conversion exception.
With REST-over-HTTP access the following problem appears:
in the received JRDResponse<PageDto>, why is result a LinkedHashMap rather than the PO?
When calling the interface, the List inside the returned PageDto became an Object[] whose elements were LinkedHashMaps,
because the PageDto's List result did not specify a generic type.
Explanation: https://www.cnblogs.com/timlong/p/3916240.html
the element type of the List cannot be determined during deserialization.
Using List<T> as the return type does not exhibit the problem.
http://blog.csdn.net/wantken/article/details/42643901
ObjectMapper mapper = new ObjectMapper();
try {
    PageDTO myObjects = mapper.readValue(response.getEntity().toString(), new TypeReference<PageDTO>() {});
    logger.info("myObjects:" + myObjects);
} catch (IOException e) {
    // JsonParseException and JsonMappingException are both subclasses of IOException
    logger.error("failed to deserialize PageDTO", e);
}
Life isn't about waiting for the storm to pass, it's about learning to dance in the rain.
When the sun set behind the western hills you were not there with me; now that I rule the world, who are you?
You were not there for the comeback either.
----------------------------------
Front-end/back-end separated project:
receiving Chinese characters in request parameters:
Local environment:
sourceName = URLDecoder.decode(sourceName, "UTF-8"); // has no effect
sourceName = new String(sourceName.getBytes("ISO-8859-1"), "UTF-8"); // fixes the garbled text
Other environments:
sourceName = URLDecoder.decode(sourceName, "UTF-8"); // fixes the garbled text
sourceName = new String(sourceName.getBytes("ISO-8859-1"), "UTF-8"); // garbles the Chinese after running
Environment difference? Most likely the container's encoding configuration differs between environments.
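The ISO-8859-1 round trip above only works when the container handed the servlet a string decoded as ISO-8859-1. A minimal pure-JDK demonstration (class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {

    // The usual repair: go back to the raw bytes (ISO-8859-1 is a lossless
    // byte-to-char mapping), then re-decode them as UTF-8.
    public static String repair(String garbled) {
        return new String(garbled.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String original = "北京";
        // Simulate a container that decoded UTF-8 request bytes as ISO-8859-1:
        String garbled = new String(original.getBytes(StandardCharsets.UTF_8),
                StandardCharsets.ISO_8859_1);
        System.out.println(repair(garbled)); // prints 北京
    }
}
```

If the container already decoded the bytes as UTF-8 (the "other environments" case), the same repair corrupts the text, which matches the behavior recorded above.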
Timestamps submitted by the front end arrive as Long values and need converting.
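Converting the epoch-millis Long can be sketched as follows (class name and format pattern are illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimeConvert {

    // Turn an epoch-millis Long from the front end into a formatted string.
    public static String format(long millis) {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(millis));
    }
}
```

The rendered value depends on the JVM's default time zone, so no expected output is shown here.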
----------------------------------------
Using soft references to cache sensitive data:
PO class: the entity class holding the sensitive data
ReferenceQueue<PO> queue: collects the references whose PO the GC has cleared
new SoftReference<PO>(po, queue): wraps each PO in a soft reference registered with the queue
PUT:
store the SoftReference in the cache map (not queue.add(po); a ReferenceQueue is filled by the GC, not by the caller)
GET:
read through the SoftReference, and drain cleared entries from the queue
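The steps above can be sketched as a small generic cache; the class name and the generic "PO" stand-in are assumptions:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal soft-reference cache: values may be reclaimed under memory pressure.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    public void put(K key, V value) {
        // The GC enqueues the SoftReference (not the value) once it clears it.
        map.put(key, new SoftReference<>(value, queue));
    }

    public V get(K key) {
        drain();
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Drop map entries whose referents have been reclaimed.
    private void drain() {
        Reference<? extends V> r;
        while ((r = queue.poll()) != null) {
            map.values().remove(r);
        }
    }
}
```

Note the queue is only ever polled; the caller never adds to it.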
---------------------------------------
select count(c_user) from
(select user_id as c_user from jr_td_project_invest union select c_user from p2p_td_project_invest ) as count_investor;
-- rewritten as:
select count(distinct c_user) from
(select distinct user_id as c_user from jr_td_project_invest union all select distinct c_user from p2p_td_project_invest ) as count_investor;
distinct removes duplicate rows
union all merges without removing duplicates: every row from both sides is kept
union merges and de-duplicates
The former: merge the two large sets with de-duplication, then count.
The latter: de-duplicate each set first, merge them with union all (no de-duplication), then de-duplicate again in the merged set via count(distinct).
In the explain output, the latter's rows value (the number of rows MySQL estimates it must examine to answer the query)
is about 50% lower than the former's.
170638
6.1 Efficiency of union vs union all
In terms of efficiency, UNION ALL is much faster than UNION.
UNION ALL simply concatenates the two result sets and returns them,
so if the two sets contain overlapping rows,
the returned result will contain duplicates.
UNION filters out duplicate records after combining the tables:
it sorts the combined result set,
deletes the duplicate rows, and only then returns the result.
If you can be sure the two result sets contain no duplicates,
use UNION ALL.
http://blog.csdn.net/jesseyoung/article/details/40427849
union all concatenates data rows.
For example: select name,age,sex,telphone from A union all select name,age,sex,telphone from B
merges the name,age,sex,telphone columns of table A with those of table B; the column types must match, and every row is kept whether or not it duplicates another. union, by contrast, first performs a distinct to drop identical rows and then merges.
http://blog.csdn.net/liuxiaohuago/article/details/7075371
distinct examines the full result set, comparing all rows to produce the final result.
---------------------------------------
JSP custom tag library
Pagination plugin
1. Add the dependency in pom.xml
2. Write a custom class extending TagSupport
3. Write the matching *.tld file
4. Register it in web.xml
5. Use it:
<%@ taglib prefix="urlChangePage" uri="/urlChangePage"%>
<urlChangePage:pages /> where pages is set on the Java side via setAttribute("pages", pages);
or
<%@ taglib uri="/dateConvert" prefix="ct"%>
<ct:longStr longTime="1314842011312"></ct:longStr>
Whether the web.xml entry is needed depends on where the file is placed?
Put the *.tld file under the WEB-INF directory.
http://ldq2010-163-com.iteye.com/blog/1163236
-------------------------------------------
--------------------------------------------
Eclipse: Tomcat publish / clean in the Servers view
Publishing failed with multiple errors: file not found
Cause: files were deleted from the project on disk, but Eclipse still shows them; the workspace is out of sync with the file system.
Solution: refresh the project, then redeploy and republish; refreshing the project resolves it.
Added a revision folder under CSS, then an about subfolder inside it holding the .css file;
on startup the file could not be found.
Fix: move the CSS file directly into the revision folder.
--------------------------------------------
href = "/about" resolves against the domain root, ignoring the current path
href = "about" resolves relative to the current path, e.g. /current-path/about
--------------------------------------------
Interview topics covered: middleware written, composite indexes, open-source code, project walkthrough
Online high-concurrency solutions, excluding batch jobs
Stack overflow: how to handle it
Common commands: jps, jstack, jconsole
"xxx class not found":
fix the build path and add the project containing the missing class
tinyint(1) maps to Boolean
tinyint(3) maps to Byte
----------------------------------------------
SQL: SELECT C_FINANCED FROM p2p_td_project WHERE C_LOAN_END_DATE LIKE ? AND C_FINANCED > 0
### Cause: java.lang.UnsupportedOperationExceptionorg.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.lang.UnsupportedOperationException
like #{xxx} raised the error
xxx = "2017-11%", the '%' concatenated on the Java side
The actual cause: resultType was set to list; resultType should name the element type, since MyBatis wraps the rows in a List automatically
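The corrected mapping can be sketched like this, reusing the query from the error above; the statement id and the BigDecimal element type are assumptions:

```xml
<!-- Build the LIKE pattern in SQL instead of concatenating "%" in Java;
     resultType names the element type, never "list" -->
<select id="selectFinanced" parameterType="string" resultType="java.math.BigDecimal">
  SELECT C_FINANCED FROM p2p_td_project
  WHERE C_LOAN_END_DATE LIKE CONCAT(#{month}, '%')
    AND C_FINANCED > 0
</select>
```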
----------------------------------------------
1. Log output
Both log4j
and
logback produce output;
log files appear under both configured output directories.
web.xml configures the log4j2 listener
http://blog.csdn.net/liaoxiaohua1981/article/details/6760423
So how does logback take effect,
and which one should application code log through?
http://blog.sina.com.cn/s/blog_4adc4b090102vx0z.html
log4j
Does jrd-log load logback.xml?
Whether the log4j output is actually written;
the logback output is written.
In jrcore:
org.slf4j.Logger logger = LoggerFactory.getLogger(this.getClass()); // output visible in the test environment
org.apache.log4j.Logger logger = Logger.getLogger(this.getClass()); // output not visible in the test environment
-------------------------------------------------
2. Eclipse Open Perspective:
a perspective is a visual container holding a set of views and content editors
3. Distributed locks, pooling
4. ams-log
-------------------------------------------------
"北京1.。"
The byte-order marker appears once, at the front.
"北" alone: 4 bytes
"北京": 6 bytes
"北京1.人。" encodes to
[-2, -1, 83, 23, 78, -84, 0, 49, 0, 46, 78, -70, 48, 2]
The leading -2, -1 (0xFE 0xFF) is the BOM: the current editor defaults to big-endian.
public static String bSubstring(String s, int length, int excepLength) {
    try {
        s = s.replaceAll("\\s*", "");
        // Java's "Unicode" charset is UTF-16 with a 2-byte BOM; each char takes two
        // bytes, high byte first (big-endian, 0xFE 0xFF), matching the dump above
        byte[] bytes = s.getBytes("Unicode");
        int n = 0; // byte count accumulated so far
        int i = 2; // current array index; positions 0 and 1 hold the BOM, so start at the third byte
        for (; i < bytes.length && n < length; i++) {
            // odd indices (3, 5, 7, ...) are the second byte of a UTF-16 code unit
            if (i % 2 == 1) {
                n++; // the second byte of each code unit always counts one
            } else {
                // a non-zero first (high) byte means a CJK char, which counts as two bytes
                if (bytes[i] != 0) {
                    n++;
                }
            }
        }
        // if i ended up odd, we stopped in the middle of a code unit; snap to a boundary
        if (i % 2 == 1) {
            if (bytes[i - 1] != 0)
                i = i - 1; // the half-cut char is CJK: drop it
            else
                i = i + 1; // the char is a letter or digit: keep it
        }
        return new String(bytes, 0, i, "Unicode");
    } catch (Exception e) {
        JrdLogManager.runLog(Level.ERROR, "bSubstring exception :" + e);
        String stripped = s.replaceAll("\\s*", "");
        return stripped.length() > excepLength ? stripped.substring(0, excepLength) : stripped;
    }
}
UTF-8 with BOM
UTF-8 without BOM
UCS-2 Big Endian
UCS-2 Little Endian
code units (Unicode code units)
code points
concepts to review
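The counting rule inside bSubstring above can be expressed more compactly; this is a sketch equivalent to it (class name illustrative): in UTF-16, a code unit beyond U+00FF (e.g. CJK) has a non-zero high byte and counts as two "bytes", everything else counts as one.

```java
public class ByteLen {

    // Same rule as the loop in bSubstring, without touching raw byte arrays.
    public static int byteLength(String s) {
        int n = 0;
        for (int i = 0; i < s.length(); i++) {
            n += s.charAt(i) > 0xFF ? 2 : 1;
        }
        return n;
    }
}
```

For example, byteLength("北京") is 4 and byteLength("北京1.") is 6, matching the lengths noted above once the 2-byte BOM is excluded.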
------------------------------------------------------------------------------------------------------------------------------
Collections.synchronizedList / synchronizedCollection
Vector
-----------------------------------
Why is String declared final?
https://www.zhihu.com/question/31345592
Counter-example: using a mutable type such as StringBuilder as the element type of a Set; because it is mutable, elements can be changed after insertion, which may make two keys equal and break the Set's uniqueness guarantee.
The String constant pool saves memory and improves efficiency in string-heavy code.
String is a reference type, yet it can be used in main just like the primitive types without any compile problem:
no need to declare it static or to instantiate something before use;
a static String class variable does not have to be final.
http://www.cnblogs.com/ikuman/archive/2013/08/27/3284410.html
Mainly for the sake of safety and efficiency.
http://www.cnblogs.com/hellowhy/p/6536590.html
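The mutable-key hazard in the counter-example above can be demonstrated directly. StringBuilder does not override equals/hashCode, so a List<String> element makes it visible (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MutableKeyDemo {

    // Mutating an element after inserting it into a HashSet changes its
    // hashCode, so the set can no longer find it: the element is "lost".
    public static boolean lostAfterMutation() {
        Set<List<String>> set = new HashSet<>();
        List<String> key = new ArrayList<>();
        key.add("a");
        set.add(key);
        key.add("b");              // hashCode changes here, after insertion
        return !set.contains(key); // true: lookup hashes to a different bucket
    }
}
```

Immutable keys like String make this class of bug impossible.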
----------------------------------------------------------------------
MyBatis resultType int or Integer
http://blog.csdn.net/xiangjai/article/details/53894466
attempted to return null from a method with a primitive return type (int)
The exception can occur whether resultType is int or Integer, whenever the SQL yields NULL.
Solutions:
1. IFNULL(xx, 0): supply a default value in the SQL
2. Return a boxed type and add a null check in the caller
http://blog.csdn.net/iamlihongwei/article/details/72652384
----------------------------------------------------------------------
Configuring logback.xml to output MyBatis logs;
log output levels.
Which of the log4j / log4j2 / logback configurations wins in jrcore?
logback wins: it is configured at the system level inside another JAR (jr-log) that jrcore depends on; the configuration lives in that JAR.
-----------------------------------------------------------------------
The M (mobile) side receives garbled Chinese:
address = URLDecoder.decode(address, "UTF-8");
How foreach works: it compiles down to an Iterator,
so calling the collection's remove() makes the next next() call fail
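The safe pattern implied above is to remove through the iterator itself; calling the collection's own remove() inside a foreach makes the next next() throw ConcurrentModificationException. A sketch (class and method names illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SafeRemove {

    // Remove empty strings while iterating, via Iterator.remove(),
    // which keeps the iterator's modification count in sync.
    public static List<String> removeEmpty(List<String> list) {
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            if (it.next().isEmpty()) {
                it.remove();
            }
        }
        return list;
    }
}
```

Java 8's list.removeIf(String::isEmpty) is the one-line equivalent.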
--------------------------------------------------
Tomcat runs out of memory on startup in Eclipse
http://blog.csdn.net/suigaopeng/article/details/26720719
--------------------------------------------------------------------------------
The MyBatis interface returns no rows although the same SQL run directly does
MyBatis
interface call returns an empty result
1. The query used select *, and a table column could not be mapped onto the PO class:
change select *
to an explicit column list
2. Wrong parameterType:
use
java.util.Map
or
map;
writing Map bare is not recognized
---------------------------------------------------------------------------------
Referenced the local COMMON project and made the code changes inside it.
After a reboot, Eclipse re-initialized and the project reference reverted from the common project to the jar, so the changes no longer took effect:
the local common project was no longer referenced; the shared jar was picked up instead.
Fix: build path -> add project -> add the COMMON project to the current project.
When debugging, Eclipse asks which source to step into (.class vs .java: there are two dependencies, the added project and the referenced jar; choose the .java one).
Debugging then works and the service runs normally.
---------------------------------------------------------------------------------
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'XXXXX' is defined
Republish the app in Tomcat: clean or publish.
---------------------------------------------------------------------------------
When merging code in SVN with several developers, merge at the project level: one class may have been changed by several people, and skipping any one revision causes conflicts.
---------------------------------------------------------------------------------
File download: path handling differs between Windows and Linux; configure the actual path on the target machine.
File textf = new File("/data/j2ee/jr/excelModel", "loan_user_open_account_batch.xlsx");
---------------------------------------------------------------------------------
Sharing from the app via WeChat: on a re-share, the text and image no longer display.
http://blog.csdn.net/yangzhen06061079/article/details/53436463
The image must be larger than 300 x 300.
---------------------------------------------------------------------------------
Implementing QR-code scan login.
---------------------------------------------------------------------------------
A dependency cannot be resolved in code: delete the jar from the local Maven repository,
then select the project, right-click -> Maven -> Update Project -> Force Update,
and let Maven re-download it.
---------------------------------------------------------------------------------
Submitting a build for testing:
when new configuration files are involved, send them to ops and tell them the path to add them under.
Configure URL = a specific machine's IP;
which environment is released depends on that day's release plan, so pick that environment's machine.
Ops adjusts the other environments' machines and adds a restriction so that locally committed code cannot overwrite the uploaded configuration files.
----------------------------------------------------------------------------------
mapper
For count queries use resultType = int; with Integer you must handle an empty result and default null to 0,
i.e. wrap the query result in IFNULL.
Solution:
declare the return type as int and the problem does not arise.
----------------------------------------------------------------------------------
Removed parameter logging: the project already has AOP aspect-based log output.
Review the interfaces: drop @Valid where it is not needed, since it adds a parameter-validation pass and costs time.
----------------------------------------------------------------------------------
mapper.xml maps result columns onto PO properties case-insensitively;
the query failed with a mapping-not-found error:
the PO had both a field Byte isUseable and a constant ISUseable, and the SQL aliased is_useable as isUseable, so the mapping could not be resolved.
-----------------------------------------------------------------------------------
js and css references carry a version number
A map whose value should be a String:
JSON.toJSONString(Integer.parseInt(one_count) + 1)
map.get(key)
String str = (String) map.get(key)
Exception in thread "main" java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
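Converting instead of casting avoids the ClassCastException above; a sketch with an illustrative class name (the "one_count" key is from the note):

```java
import java.util.Map;

public class SafeCast {

    // The map holds Object values; an Integer stored where a String is
    // expected blows up on (String) map.get(...). Convert instead:
    public static String asString(Map<String, Object> map, String key) {
        Object v = map.get(key);
        return v == null ? null : String.valueOf(v);
    }
}
```

String.valueOf works for any stored type, so the caller no longer depends on what was actually put into the map.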
Arrays.asList
List.toArray
-----------------------------------------
--------------------------------------------
// Usage: place the annotation on the method
@JrdDbReadonly
// Definition
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface JrdDbReadonly {
    String value() default "readonly";
}
// Where the annotation is woven in
DataSourceAspect
// Sets the master/slave flag
DbContextHolder
DbSelectFilter
Writes go to the master, reads to the slave;
master/slave replication is handled by the database itself.
When is it triggered? How does the annotation take effect?
The annotation was applied twice:
once on the WS layer,
and again on the Service layer it calls.
First entry: switch to the slave.
Second entry: switch to the slave.
Second exit: switch back to the master.
At that point the remaining reads hit the master.
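A minimal sketch of the DbContextHolder mentioned above: the aspect around @JrdDbReadonly methods would set and clear this per-thread flag. The class name mirrors the notes, but the body and constants are assumptions; note that a plain flag like this reproduces the nested double-switch problem recorded above, since the inner exit restores the master unconditionally (a counter or stack of contexts would fix that).

```java
public class DbContextHolder {
    public static final String MASTER = "master";
    public static final String SLAVE = "readonly";

    // Per-thread routing flag read by the routing DataSource.
    private static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> MASTER);

    public static void useSlave()  { CONTEXT.set(SLAVE); }
    public static void useMaster() { CONTEXT.set(MASTER); }
    public static String current() { return CONTEXT.get(); }
    public static void clear()     { CONTEXT.remove(); } // back to the MASTER default
}
```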
-------------------------------
Switching between the read and write databases;
transaction control.
-------------------------------
https://www.junrongdai.com/invest/index/138780
The loan detail page does not display its image.
--------------------------------------------------
Remote data access
remote connection address
182.92.5.40
JRD-2017-kettle
--------------------------------------------------
GenericObjectPool borrowObject
java.util.NoSuchElementException: Could not create a validated object, cause: ValidateObject failed
Apache commons-pool pooling
What @Aspect does
@Pointcut over all the methods of a class
java -version fails to run:
put JAVA_HOME first on the PATH
and do not point it at the JRE directory
Eclipse will not start:
JDK and Eclipse bitness mismatch, 64-bit vs 32-bit;
check the JDK version and the Eclipse version.
------------------------------------------------
Characters such as . ; < > in data passed to a JSP get escaped into HTML entities:
run StringEscapeUtils.unescapeHtml4 on the value before passing it on.
explain
exists
MySQL statement tuning:
1. explain
2. trident
Queried data differs from the local result:
1. Check for a cache:
if cached, check the cache TTL and clear the cache.
2. Check the code is the same; look at the test environment's code,
the result mapping in mapper.xml,
and whether the dto/po mapping class is missing property fields.
Startup error
class not found:
org.springframework.http.converter.json.MappingJacksonHttpMessageConverter
Spring 4 --> changed back to 3
http://blog.csdn.net/you23hai45/article/details/50513164
Eclipse SVN code-sync errors:
select the project in Eclipse, Team -> Refresh / Cleanup.
<s:if test='#request.operateResult == "1" '>
The string comparison failed; change it so the value matches the double-quoted "1".
JSP function
<s:property value="">
attempted to return null from a method with a primitive return type (double)
The SQL in the mapper:
select sum(invest) from db
When invest has no rows, SUM returns null, which the primitive return type cannot hold.
Fix:
select IFNULL(sum(invest),0) FROM DB
No matching bean of type dao found for dependency
Usually: the impl class implementing the Service interface is missing the @Service annotation.
Today's case: abstract had been added in front of the impl's class declaration, which caused the error above.
Most likely a method had no implementation yet, the IDE suggested making the class abstract, and the quick-fix got clicked.
client error
publish redeploys the project and recompiles it on Tomcat.
publish: deploys your web app to the Tomcat server so it can be reached from a browser.
clean: first wipes what was previously compiled onto Tomcat, then recompiles.
After code changes, a clean is needed before they take effect.
Tomcat startup: java.lang.ClassNotFoundException
http://blog.csdn.net/lissic_blog/article/details/52125633
http://www.cnblogs.com/zhangcybb/p/4516327.html
Failed to start component [StandardServer[8005]]
Right-click the server --> publish.
Error creating bean with name 'userOperationListener': Injection of autowire
The interface implementation class was missing the @Service annotation.
--------------------------------------------
Basic tutorial
http://blog.csdn.net/chwshuang/article/details/50580718
Queue configuration:
Producer:
core spring-rabbit.xml
calls the queue interface to put data on the queue
Consumer:
core_batch spring-rabbit.xml
calls the interface method to take data off and process it
exchange
http://terry0501.iteye.com/blog/2329580
--------------------------------------------
--------------------------------------------
Reflection
--------------------------------------------
StringUtils.trimToEmpty(val); // null-safe: returns the empty string for null, otherwise trims leading and trailing whitespace
String userId = StringUtils.EMPTY; // the empty string
ConvertFactory cf = new DozerConvertFactory(); // bean-to-bean conversion
User user1 = cf.convert(user, User.class);
--------------------------------------------
15842 [main] ERROR o.a.s.s.o.a.z.s.NIOServerCnxnFactory - Thread Thread[main,5,main] died
java.lang.IllegalArgumentException: Fields for default already set
at org.apache.storm.topology.OutputFieldsGetter.declareStream(OutputFieldsGetter.java:43) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:34) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:30) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.bolt.ActivitySingleCumulateQuotaStatisticsBolt.declareOutputFields(ActivitySingleCumulateQuotaStatisticsBolt.java:76) ~[classes/:?]
at org.apache.storm.topology.TopologyBuilder.getComponentCommon(TopologyBuilder.java:431) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.TopologyBuilder.createTopology(TopologyBuilder.java:119) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.topology.ActivitySingleCumulateInvestmentTopology.main(ActivitySingleCumulateInvestmentTopology.java:53) ~[classes/:?]
Can declareOutputFields not declare more than one Fields? (Each declarer.declare call without a stream id targets the default stream, so the second call hits "Fields for default already set".)
{ActivitySingleCumulateQuotaStatisticsBolt=com.jrd.dams.bolt.ActivitySingleCumulateQuotaStatisticsBolt@2d3c7941,
ActivityInvestAmountBolt=com.jrd.dams.bolt.ActivityInvestAmountBolt@5d3cb19a,
ActivitySingleCumulateInvestRecordBolt=com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt@3bb02548}
java.lang.IllegalArgumentException: Fields for default already set
at org.apache.storm.topology.OutputFieldsGetter.declareStream(OutputFieldsGetter.java:43) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:34) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.OutputFieldsGetter.declare(OutputFieldsGetter.java:30) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.bolt.ActivitySingleCumulateQuotaStatisticsBolt.declareOutputFields(ActivitySingleCumulateQuotaStatisticsBolt.java:81) ~[classes/:?]
at org.apache.storm.topology.TopologyBuilder.getComponentCommon(TopologyBuilder.java:431) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.topology.TopologyBuilder.createTopology(TopologyBuilder.java:119) ~[storm-core-1.0.1.jar:1.0.1]
at com.jrd.dams.topology.ActivitySingleCumulateInvestmentTopology.main(ActivitySingleCumulateInvestmentTopology.java:53) ~[classes/:?]
Solution:
1. (failed attempt)
emit twice in the bolt:
this.collector.emit("");
this.collector.emit("");
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("singleInvestQuotaStatistics"));
    declarer.declare(new Fields("cumulateInvestQuotaStatistics"));
}
This is exactly what throws the error: each declare without a stream id declares the default stream again.
2.
A bolt can distribute tuples to several streams with emit(streamId, tuple), where streamId is a string identifying the stream; the TopologyBuilder then decides which stream each downstream component subscribes to.
Change to this.collector.emit("streamId", values); the next bolt subscribes to that stream and reads individual fields via tuple.getValueByField(...).
Declaring several fields on one stream must be a single call: declarer.declare(new Fields("cumulateInvestQuotaStatistics", "singleInvestQuotaStatistics"));
===============================================================================================================================================
28815 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.AutopromoDataSource - hivegetConnection fail!
java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2294) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2039) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1533) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at com.jrd.dams.dao.AutopromoDataSource.getConnection(AutopromoDataSource.java:57) [classes/:?]
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultLastCreateTime(EventMatchResultDao.java:191) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectEventMatchResultLastCreateTime(ActivitySingleCumulateInvestmentService.java:80) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:81) [classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:357) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2482) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2519) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2304) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47) ~[mysql-connector-java-5.1.26.jar:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:346) ~[mysql-connector-java-5.1.26.jar:?]
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:39) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:256) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2304) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2290) ~[commons-dbcp2-2.1.1.jar:2.1.1]
... 10 more
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method) ~[?:1.7.0_79]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79) ~[?:1.7.0_79]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) ~[?:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) ~[?:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) ~[?:1.7.0_79]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[?:1.7.0_79]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.7.0_79]
at java.net.Socket.connect(Socket.java:579) ~[?:1.7.0_79]
at java.net.Socket.connect(Socket.java:528) ~[?:1.7.0_79]
at java.net.Socket.<init>(Socket.java:425) ~[?:1.7.0_79]
at java.net.Socket.<init>(Socket.java:241) ~[?:1.7.0_79]
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:307) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2482) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2519) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2304) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47) ~[mysql-connector-java-5.1.26.jar:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:346) ~[mysql-connector-java-5.1.26.jar:?]
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:39) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:256) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2304) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2290) ~[commons-dbcp2-2.1.1.jar:2.1.1]
... 10 more
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.AutopromoDataSource - Setting up data source.
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.AutopromoDataSource - Done.
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.EventMatchResultDao - 查询 event_match_result 表中最大的创建时间--查询异常
28817 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.EventMatchResultDao - null
java.lang.NullPointerException
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultLastCreateTime(EventMatchResultDao.java:192) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectEventMatchResultLastCreateTime(ActivitySingleCumulateInvestmentService.java:80) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:81) [classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
28818 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Async loop died!
java.lang.NullPointerException
at java.util.Calendar.setTime(Calendar.java:1106) ~[?:1.7.0_79]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:84) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
28824 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.d.executor -
java.lang.NullPointerException
at java.util.Calendar.setTime(Calendar.java:1106) ~[?:1.7.0_79]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:84) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
28853 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Solution:
1. The URL used to obtain the JDBC Connection was wrong.
2. The SQL statement contained a ';'.
======================================================================================================================================
7558 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Async loop died!
java.lang.ClassCastException: java.util.Date cannot be cast to java.sql.Date
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultByCreateTime(EventMatchResultDao.java:81) ~[classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentService.java:54) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentSpout.java:99) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:88) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
7558 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.d.executor -
java.lang.ClassCastException: java.util.Date cannot be cast to java.sql.Date
at com.jrd.dams.dao.EventMatchResultDao.selectEventMatchResultByCreateTime(EventMatchResultDao.java:81) ~[classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentService.java:54) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentSpout.java:99) ~[classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:88) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
7574 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
1. Fix
preparedStatement = connection.prepareStatement(SEL_EVENT_MATCH_RESULT_BY_TIME_MATCH_RESULT);
preparedStatement.setByte(1, matchResult);
preparedStatement.setString(2, actionType);
preparedStatement.setDate(3, (java.sql.Date) startDateTime);
preparedStatement.setDate(4, (java.sql.Date) endDateTime);
In a prepared statement, setDate expects java.sql.Date; a plain java.util.Date instance is not a java.sql.Date, so the cast fails at runtime.
java.sql.Date startDateTimeSQL = new java.sql.Date(startDateTime.getTime());
Bridge the two by constructing a java.sql.Date from the milliseconds.
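The bridging conversion can be sketched as small helpers (class name illustrative); Timestamp is the variant to prefer when the time-of-day part matters, as the next section shows:

```java
import java.util.Date;

public class DateBridge {

    // java.sql.Date keeps only the date part; Timestamp keeps date and time.
    public static java.sql.Timestamp toSqlTimestamp(Date d) {
        return new java.sql.Timestamp(d.getTime());
    }

    public static java.sql.Date toSqlDate(Date d) {
        return new java.sql.Date(d.getTime());
    }
}
```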
======================================================================================================================================
258773 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.EventMatchResultDao - select event_match_result params is actionType = invest
258853 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - cumulateInvestResultList is null , startDateTime = 2017-01-02T23:55:00.000+0800 , endDateTime = 2017-01-03T00:00:00.000+0800
258853 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 单笔投资或累计投资的最后一条数据的创建时间为空
258854 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 查询非累投数据,查询条件,matchResult=1,actionType=invest,startDateTime=2017-01-02T23:55:00.000+0800,endDateTime=2017-01-03T00:00:00.000+0800,accuInvestFlag=0
258854 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.EventMatchResultDao - select event_match_result params is matchResult = 1, actionType = invest
258938 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - singleInvestResultList is null , startDateTime = 2017-01-02T23:55:00.000+0800 , endDateTime = 2017-01-03T00:00:00.000+0800
258938 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 查询非累投数据,查询条件,actionType=invest,startDateTime=2017-01-02T23:55:00.000+0800,endDateTime=2017-01-03T00:00:00.000+0800
258938 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.d.EventMatchResultDao - select event_match_result params is actionType = invest
1. Fix
The initial query for the max create time returned a DATE value, not a DATETIME:
1) date part only from the result set:
resultSet.getDate();      -- 2013-01-07
2) time part only:
resultSet.getTime()       -- 22:08:09
3) date and time together:
resultSet.getTimestamp(); -- 2013-01-07 23:08:09
2. When binding parameters:
prepareStatement.setDate(); // always sends a DATE value such as 2017-01-03; change to setTimestamp
3.
public static List<EventMatchResult> dealSelectEventMatchResult(ResultSet resultSet) throws SQLException {
    List<EventMatchResult> eventMatchResultList = new Vector<EventMatchResult>(100);
    while (resultSet.next()) {
        EventMatchResult eventMatchResult = new EventMatchResult();
        eventMatchResult.setBizId(resultSet.getString(1));
        eventMatchResult.setUserId(resultSet.getString(2));
        eventMatchResult.setMainEventId(resultSet.getInt(3));
        eventMatchResult.setEventConditionId(resultSet.getInt(4));
        eventMatchResult.setMainEventName(resultSet.getString(5));
        eventMatchResult.setEventMatchResultId(resultSet.getInt(6));
        eventMatchResult.setAwardsHBids(resultSet.getString(7));
        eventMatchResult.setMatchResult(resultSet.getByte(8));
        eventMatchResult.setCreateTime(resultSet.getTimestamp(9)); // getDate(9) would drop the time-of-day (see point 1 above)
        eventMatchResult.setAccInvestFlag(resultSet.getByte(10));
        eventMatchResult.setAccInvestAmount(resultSet.getBigDecimal(11));
        eventMatchResult.setCouponAmounts(resultSet.getBigDecimal(12));
        eventMatchResultList.add(eventMatchResult);
    }
    return eventMatchResultList;
}
When the ResultSet was passed into another method, resultSet.next() was always false: the cursor had already been consumed.
Workaround: change the method to return the assembled result instead; or rewind the cursor:
if (resultSet.next()) {
    resultSet.previous();
    while (resultSet.next()) {
        activeTypeList.add(resultSet.getString(1));
    }
}
Explanation:
resultSet.next() moves the cursor down one row. After using it as an existence check, move the cursor back up with previous(), otherwise the while loop loses the first row. (previous() requires a scrollable ResultSet, not TYPE_FORWARD_ONLY.)
==========================================================================================================================================
26735 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] ERROR c.j.d.d.HBRelativeDataDao - null
java.sql.SQLFeatureNotSupportedException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
at java.lang.Class.newInstance(Class.java:379) ~[?:1.7.0_79]
at com.mysql.jdbc.SQLError.notImplemented(SQLError.java:1334) ~[mysql-connector-java-5.1.26.jar:?]
at com.mysql.jdbc.JDBC4Connection.createArrayOf(JDBC4Connection.java:56) ~[mysql-connector-java-5.1.26.jar:?]
at org.apache.commons.dbcp2.DelegatingConnection.createArrayOf(DelegatingConnection.java:844) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at org.apache.commons.dbcp2.DelegatingConnection.createArrayOf(DelegatingConnection.java:844) ~[commons-dbcp2-2.1.1.jar:2.1.1]
at com.jrd.dams.dao.HBRelativeDataDao.selectJRHBPkgDefine(HBRelativeDataDao.java:47) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.selectJRHBPkgDefine(ActivitySingleCumulateInvestmentService.java:145) [classes/:?]
at com.jrd.dams.service.ActivitySingleCumulateInvestmentService.dealEventMatchResultList(ActivitySingleCumulateInvestmentService.java:244) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestmentSpout.java:104) [classes/:?]
at com.jrd.dams.spout.ActivitySingleCumulateInvestmentSpout.nextTuple(ActivitySingleCumulateInvestmentSpout.java:88) [classes/:?]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
46057 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - selectJRHBPkgDefine select result is null , hbids = 231602173,231602298,231602331
46069 [Thread-17-ActivitySingleCumulateInvestmentSpout-executor[3 3]] INFO c.j.d.s.ActivitySingleCumulateInvestmentSpout - 查询非累投数据,查询条件,actionType=invest,startDateTime=2017-01-03T12:39:45.000+0800,endDateTime=2017-01-03T12:44:45.000+0800
Passing a batch of parameters to an SQL IN (?) clause by wrapping them in a java.sql.Array fails with the SQLFeatureNotSupportedException above:
the MySQL JDBC driver does not implement Connection.createArrayOf, so this usage is unsupported.
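Since createArrayOf is unimplemented, the usual workaround is to generate one `?` placeholder per value and bind each value individually. A minimal sketch (the table and column names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class InClauseBuilder {
    // Build "in (?, ?, ?)" with one placeholder per value; each value is then
    // bound individually with setString, since MySQL has no createArrayOf.
    // Callers must ensure count > 0: "in ()" is a MySQL syntax error.
    static String buildInClause(int count) {
        StringJoiner joiner = new StringJoiner(", ", "in (", ")");
        for (int i = 0; i < count; i++) {
            joiner.add("?");
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        List<String> hbids = Arrays.asList("231602173", "231602298", "231602331");
        // Hypothetical table/column names, for illustration only:
        String sql = "select * from jr_hb_pkg_define where hb_id " + buildInClause(hbids.size());
        System.out.println(sql);
        // then: for (int i = 0; i < hbids.size(); i++) ps.setString(i + 1, hbids.get(i));
    }
}
```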
4807 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15aa171b46e000e, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.0.1.jar:1.0.1]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
4807 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:52159 which had sessionid 0x15aa171b46e000e
116172 [Thread-15-ActivitySingleCumulateInvestRecordBolt-executor[2 2]] ERROR o.a.s.util - Async loop died!
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:452) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:418) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$fn__7953$fn__7966$fn__8019.invoke(executor.clj:847) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: java.lang.NullPointerException
at com.jrd.dams.dao.P2PTdInvestmentInvestDao.saveP2pTdInvestmentInvest(P2PTdInvestmentInvestDao.java:192) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealEventMatchResultData(ActivitySingleCumulateInvestRecordBolt.java:95) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestRecordBolt.java:68) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.execute(ActivitySingleCumulateInvestRecordBolt.java:57) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7953$tuple_action_fn__7955.invoke(executor.clj:728) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__7874.invoke(executor.clj:461) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$clojure_handler$reify__7390.onEvent(disruptor.clj:40) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:439) ~[storm-core-1.0.1.jar:1.0.1]
... 6 more
116172 [Thread-15-ActivitySingleCumulateInvestRecordBolt-executor[2 2]] ERROR o.a.s.d.executor -
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:452) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:418) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$fn__7953$fn__7966$fn__8019.invoke(executor.clj:847) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Caused by: java.lang.NullPointerException
at com.jrd.dams.dao.P2PTdInvestmentInvestDao.saveP2pTdInvestmentInvest(P2PTdInvestmentInvestDao.java:192) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealEventMatchResultData(ActivitySingleCumulateInvestRecordBolt.java:95) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.dealSingleInvestEventMatchResult(ActivitySingleCumulateInvestRecordBolt.java:68) ~[classes/:?]
at com.jrd.dams.bolt.ActivitySingleCumulateInvestRecordBolt.execute(ActivitySingleCumulateInvestRecordBolt.java:57) ~[classes/:?]
at org.apache.storm.daemon.executor$fn__7953$tuple_action_fn__7955.invoke(executor.clj:728) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__7874.invoke(executor.clj:461) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$clojure_handler$reify__7390.onEvent(disruptor.clj:40) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:439) ~[storm-core-1.0.1.jar:1.0.1]
... 6 more
116207 [Thread-15-ActivitySingleCumulateInvestRecordBolt-executor[2 2]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
-----------------------------------------------------
304134 [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2] INFO k.c.SimpleConsumer - Reconnect due to error:
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:219) ~[?:1.7.0_79]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) ~[?:1.7.0_79]
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) ~[?:1.7.0_79]
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81) ~[kafka-clients-0.10.0.0.jar:?]
at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129) ~[kafka_2.11-0.10.0.0.jar:?]
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120) ~[kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:86) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:132) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132) [kafka_2.11-0.10.0.0.jar:?]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:131) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131) [kafka_2.11-0.10.0.0.jar:?]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:130) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:108) [kafka_2.11-0.10.0.0.jar:?]
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29) [kafka_2.11-0.10.0.0.jar:?]
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107) [kafka_2.11-0.10.0.0.jar:?]
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98) [kafka_2.11-0.10.0.0.jar:?]
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63) [kafka_2.11-0.10.0.0.jar:?]
304135 [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2] INFO k.c.ConsumerFetcherThread - [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2], Stopped
304135 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ConsumerFetcherThread - [ConsumerFetcherThread-dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0-2], Shutdown completed
304136 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ConsumerFetcherManager - [ConsumerFetcherManager-1489474729145] All connections stopped
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Cleared all relevant queues for this fetcher
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Cleared the data chunks in all the consumer message iterators
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Committing all offsets after clearing the fetcher queues
304137 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.ZookeeperConsumerConnector - [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1], Releasing partition ownership
304205 [dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1_watcher_executor] INFO k.c.RangeAssignor - Consumer dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1 rebalancing the following partitions: ArrayBuffer(0, 1, 2) for topic pc-topic with consumers: List(dams-pc-group_DA-20161216IVSI-1489474729051-1e4f2fa1-0)
Cause: the client address was configured incorrectly.
Caused by: java.lang.ClassNotFoundException: com.jrd.framework.log.LoggerFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[?:1.7.0_79]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_79]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_79]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_79]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_79]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[?:1.7.0_79]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_79]
... 14 more
Cause: the JAR containing this class was not on the classpath.
48767 [Thread-15-ActivityVisitPersonCountBolt-executor[2 2]] ERROR c.j.f.c.CacheClient - Failure! Set failed! key:jrd_re_71,value:0,expertime:-144425
redis.clients.jedis.exceptions.JedisDataException: ERR invalid expire time in setex
at redis.clients.jedis.Protocol.processError(Protocol.java:117) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Protocol.process(Protocol.java:151) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Protocol.read(Protocol.java:205) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:297) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:196) ~[jedis-2.8.1.jar:?]
at redis.clients.jedis.Jedis.setex(Jedis.java:387) ~[jedis-2.8.1.jar:?]
at com.jrd.framework.cache.ShardRedisClient.setKV(ShardRedisClient.java:185) ~[jr-cache-1.2-SNAPSHOT.jar:?]
at com.jrd.framework.cache.CacheClient.set(CacheClient.java:49) [jr-cache-1.2-SNAPSHOT.jar:?]
at com.jrd.framework.cache.CacheUtils.set(CacheUtils.java:650) [jr-cache-1.2-SNAPSHOT.jar:?]
at com.jrd.dams.bolt.ActivityVisitPersonCountBolt.dealActivityVisitPersonCount(ActivityVisitPersonCountBolt.java:155) [classes/:?]
at com.jrd.dams.bolt.ActivityVisitPersonCountBolt.dealActivityVisitPersonCount(ActivityVisitPersonCountBolt.java:133) [classes/:?]
at com.jrd.dams.bolt.ActivityVisitPersonCountBolt.execute(ActivityVisitPersonCountBolt.java:104) [classes/:?]
at org.apache.storm.daemon.executor$fn__7953$tuple_action_fn__7955.invoke(executor.clj:728) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__7874.invoke(executor.clj:461) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$clojure_handler$reify__7390.onEvent(disruptor.clj:40) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:439) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:418) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$fn__7953$fn__7966$fn__8019.invoke(executor.clj:847) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]
Cause: an invalid expire time was passed to SETEX (here -144425); Redis requires a strictly positive expiry in seconds.
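The original expiry computation is not shown; a plausible cause is a "seconds until the end of the day" calculation going negative once that moment has passed. A defensive sketch (the class and method names are hypothetical):

```java
import java.util.Calendar;

public class ExpireSeconds {
    // Seconds from 'now' until the end of the current day. SETEX requires a
    // strictly positive expiry, so clamp the result to at least 1 second.
    static int secondsUntilEndOfDay(Calendar now) {
        Calendar end = (Calendar) now.clone();
        end.set(Calendar.HOUR_OF_DAY, 23);
        end.set(Calendar.MINUTE, 59);
        end.set(Calendar.SECOND, 59);
        long diff = (end.getTimeInMillis() - now.getTimeInMillis()) / 1000;
        return (int) Math.max(diff, 1);
    }

    public static void main(String[] args) {
        Calendar noon = Calendar.getInstance();
        noon.set(2017, Calendar.JANUARY, 3, 12, 0, 0);
        System.out.println(secondsUntilEndOfDay(noon)); // 43199
    }
}
```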
StringBuffer sb = new StringBuffer(activityId);
sb.append("-").append(userId);
With activityId of type Integer, this resolves to the StringBuffer(int capacity) constructor,
so activityId never appears in the output of toString().
Fix: convert activityId to a String first.
Cause: the overloaded constructors behave differently:
public StringBuffer(int capacity) {
    super(capacity); // sets the initial capacity only; nothing is appended
}
public StringBuffer(String str) {
    super(str.length() + 16);
    append(str);
}
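The pitfall can be reproduced in a few lines (the values are made up):

```java
public class StringBufferPitfall {
    public static void main(String[] args) {
        Integer activityId = 42;
        String userId = "u1";

        // Wrong: Integer unboxes to int, selecting StringBuffer(int capacity)
        StringBuffer wrong = new StringBuffer(activityId);
        wrong.append("-").append(userId);
        System.out.println(wrong); // "-u1" — activityId is lost

        // Right: convert the id to a String first
        StringBuffer right = new StringBuffer(String.valueOf(activityId));
        right.append("-").append(userId);
        System.out.println(right); // "42-u1"
    }
}
```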
/**
 * Put an entry into the Google Guava cache.
 * @param key
 * @param value
 */
public void putGoogleGuavaCache(String key, Object value) {
    GoogleGuavaCacheUtil.invalidTime = GOOGLE_GUAVA_INVALID_TIME_NUM_DEFAULT;
    GoogleGuavaCacheUtil.timeUnit = GOOGLE_GUAVA_INVALID_TIME_TIME_UNIT_DEFAULT;
    // NOTE: as written this only configures the default expiry; the actual
    // store of (key, value) into the cache is missing.
}
Reading data from Kafka failed.
private transient KafkaStream<byte[], byte[]> kafkaStream;
The cause was the transient keyword: it excludes the field from serialization, and the Kafka data transfer works through serialization (my working understanding for now).
Removing the keyword made the program behave normally.
Data written to Kafka is persisted to disk, which also requires serialization.
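The effect of transient can be shown with plain Java serialization, the same mechanism Storm uses when it ships spout instances to workers: a transient field comes back null after a round trip.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    static class Holder implements Serializable {
        String kept = "kept";
        transient String dropped = "dropped"; // excluded from serialization
    }

    // Serialize and deserialize the object in memory
    static Holder roundTrip(Holder in) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(in);
            }
            try (ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Holder) is.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Holder copy = roundTrip(new Holder());
        System.out.println(copy.kept);    // kept
        System.out.println(copy.dropped); // null — the transient field was not serialized
    }
}
```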
kafka
topic and group
Two consumption modes: queue and publish/subscribe.
Queue: each message is consumed by exactly one consumer.
Subscribe: messages are broadcast; every consumer maintains its own read offset.
Messages are retained only for the configured retention time; once it expires they are removed whether or not they were read.
Consumers in the same group form a queue; consumers in different groups act as subscribers.
Spout
Multiple spouts emitting to the same bolt:
originally each spout had to declare its own distinct field names,
and the bolt had to receive them by each different field name.
Optimization:
add stream names, and declare the same field name in every spout.
Two changes:
1. In the spout,
declarer.declare(new Fields("name"))
becomes
declarer.declareStream(ACTIVITY_VISIT_PERSON_COUNT_WEB_STREAM_ID,new Fields("userId"));
2. In the topology,
where the bolt subscribes to its inputs,
builder.setBolt("ActivitySingleCumulateInvestRecordBolt", new ActivitySingleCumulateInvestRecordBolt(), parallelismSpout).shuffleGrouping("ActivitySingleCumulateInvestmentSpout");
becomes
builder.setBolt("ActivityInvestAmountBolt", new ActivityInvestAmountBolt(),parallelismSpout)
    .shuffleGrouping("ActivitySingleCumulateInvestRecordBolt",ActivitySingleCumulateInvestRecordBolt.MAIN_EVENT_ID_STREAM)
    .shuffleGrouping("ActiveDataStatisticsStoringBDBolt",ActiveDataStatisticsStoringBDBolt.MAIN_EVENT_ID_STREAM)
    .shuffleGrouping("ActiveDataStatisticsStoringHbBolt",ActiveDataStatisticsStoringHbBolt.MAIN_EVENT_ID_STREAM);
* 1. Receive the Android and iOS data from Kafka
* topic: the data label
* group: different from the group of AppDataResolvingAndStoringSpout
* With a different group, this spout acts as a subscriber.
* With the same group it would form a queue with a single consumer, competing with the
* AppDataResolvingAndStoringSpout consumer and sometimes reading no data at all.
* 2. Emit data
* streamId: ActivityVisitPersonCountAppSpout-StreamId
* fields: userId
* Note:
* Field names must not clash; but so the bolt can conveniently receive by field name, both spouts declare the same field name,
* distinguished by different streamIds.
----------------------------------------------------------------
Before restarting, kill the topology in the Storm UI and wait for the kill to complete.
Package with Maven: right-click → Maven build (clean, then run the build).
cd to the jar directory: /data/storm/jar
rz   # upload the packaged file
Rename the uploaded file to storm.jar:
mv storm... storm.jar
Start command: /usr/local/apache-storm-1.0.1/bin/storm jar /data/storm/jar/storm.jar com.jrd.dams.topology.ActivityEffectAnalysisTopology ActivityEffectAnalysisTopologyTest
----------------------------------------------------------------
https://www.junrongdai.com/invest/index/111497
index/(\d+)
https://m.junrongdai.com/#path=views/project/details/?pid=106548
\?pid=(\d+)
Combined: (index/|\?pid=)(\d+) — note that alternation needs (a|b); square brackets build a character class, so [(index/)|(\?pid=)?](\d)+ is wrong.
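A sketch of the corrected pattern in Java (the class and method names are made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProjectIdExtractor {
    // Alternation must use (...|...); a character class [...] matches single characters
    private static final Pattern PROJECT_ID = Pattern.compile("(?:index/|\\?pid=)(\\d+)");

    static String extractId(String url) {
        Matcher m = PROJECT_ID.matcher(url);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(extractId("https://www.junrongdai.com/invest/index/111497"));                    // 111497
        System.out.println(extractId("https://m.junrongdai.com/#path=views/project/details/?pid=106548")); // 106548
    }
}
```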
A cache only reduces the load of data access; always handle the cache-miss case by falling back to querying the table.
Reduce the number of queries by moving the query call to the places where it is actually needed.
Wrap code in try/catch even when it does not explicitly throw, so that an unexpected exception cannot terminate the program.
----------------------------------------------------------------
more +/str work.log
nf      scroll down n screens (n is a number)
nb      scroll up n screens
/pattern  search forward for the given string pattern
n       repeat the previous search
q       quit
Space   next screen
Enter   next line
To inspect exceptions:
more +/ERROR work.log
jumps to the first occurrence of an exception;
press n to jump to each subsequent one.
1. Locating log entries
The usual tail -f -n 100 xxx.log only shows the end of the log, which is inconvenient for finding a specific problem,
e.g. during routine inspection you only care whether any ERROR lines appear, or you already know something went wrong and want to jump to a specific time instead of reading everything.
To jump to an exception, or to any given string:
more +/ERROR xxx.log
Enter   next line
Space   next screen
nf      scroll down n screens (n is a number)
nb      scroll up n screens
n       repeat the search, i.e. jump to the next ERROR
q       quit more
To view the log around 15:30:
more +/15:30 xxx.log
This may also match minute or second fields elsewhere; press n for the next match.
2. Editing
vim:
i       enter insert mode
Esc     leave insert mode
:       enter the command line
wq      save and quit
2.编辑
vim 编辑
i insert 进入写模式
ESC 退出
: 进入命令行
wq 退出保存
-------------------------------------------------------------
Data access
JDBC
DbUtils
MyBatis
What are their strengths, and how do they differ?
----------------------------------------------------------
"Whoever you meet is the person who was meant to appear in your life; it is no accident, and they will surely teach you something."
So I also believe: wherever I go is where I am meant to be, experiencing what I am meant to experience, meeting the people I am meant to meet.
Yesterday is a landscape: once seen, it blurs.
Time is a passer-by: remembered, then forgotten. Life is a funnel: things gained, then lost. There are no unfair things in the world, only discontented hearts. Do not resent, do not hate; take everything lightly, and the past fades like smoke.
Life is a gust of wind: it rises, then it is gone.
An ideal is a lamp: lit, then out. Affection is a shower of rain: it falls, then dries. A friend is a layer of cloud: it gathers, then scatters. Idle sorrow is a pot of wine: drunk, then sober.
Loneliness is a star: it flickers, then fades. Solitude is a moon: it rises, then sets. Death is a dream: grown tired, we sleep.
------------------------------------------
I. Excel file upload
0.0 Implementation
Reference: http://blog.csdn.net/u013871100/article/details/52901996
1.0 Problem
While testing, an Excel file saved by Office 2007 was chosen for upload.
1.1 Exception
Exception in thread "main" org.apache.poi.poifs.filesystem.OfficeXmlFileException: The supplied data appears to be in the Office 2007+ XML. You are calling the part of POI that deals with OLE2 Office Documents. You need to call a different part of POI to process this data (eg XSSF instead of HSSF)
at org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:128)
at org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:112)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:302)
at org.apache.poi.poifs.filesystem.POIFSFileSystem.<init>(POIFSFileSystem.java:86)
1.2 Explanation
The code used HSSFSheet to obtain the sheet and the other HSSF classes to process rows and cells,
but the HSSF family only reads the old binary .xls format (Excel 97-2003), not the 2007+ .xlsx format.
1.3 Resolution
Reference: http://blog.csdn.net/mmm333zzz/article/details/7962377
The HSSF classes handle files from versions before 2007 (.xls);
the XSSF classes handle the 2007+ format (.xlsx).
2.0 Problem
While testing, the number of records read did not match the expected total, i.e. data was lost.
2.1 Resolution
Reference: http://blog.csdn.net/u013871100/article/details/52901996
row and cell indexes start at 0.
3.0 Problem
The uploaded file contains mobile phone numbers; validating their format failed even though the regex was correct. Stepping through the code showed the received value had turned into scientific notation.
3.1 Resolution
Reference: http://blog.csdn.net/cclovett/article/details/16343615
Convert the value with BigDecimal.
Reference: http://jingyan.baidu.com/article/0964eca27a39808285f5363c.html
Alternatively, set the phone-number cells in the template to the Number → Text format.
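A sketch of the BigDecimal conversion, detached from POI so it runs standalone (the phone number is made up; in real code the input would come from the numeric cell value):

```java
import java.math.BigDecimal;

public class ScientificNotationFix {
    // A numeric Excel cell holding 13812345678 comes back as the double
    // 1.3812345678E10; BigDecimal can render it without the exponent.
    static String toPlainString(double numericCellValue) {
        return BigDecimal.valueOf(numericCellValue).stripTrailingZeros().toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(toPlainString(1.3812345678E10)); // 13812345678
    }
}
```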
4.0 Problem
An exception occurred while reading a cell's text content.
4.1 Exception
java.lang.IllegalStateException: Cannot get a text value from a numeric cell
at org.apache.poi.xssf.usermodel.XSSFCell.typeMismatch(XSSFCell.java:994) ~[poi-ooxml-3.14.jar:3.14]
at org.apache.poi.xssf.usermodel.XSSFCell.getRichStringCellValue(XSSFCell.java:399) ~[poi-ooxml-3.14.jar:3.14]
at org.apache.poi.xssf.usermodel.XSSFCell.getStringCellValue(XSSFCell.java:351) ~[poi-ooxml-3.14.jar:3.14]
4.2 Explanation
The code tried to read a numeric cell's value as a String.
4.3 Resolution
Reference: http://blog.csdn.net/ysughw/article/details/9288307
Check the cell type and convert the value accordingly.
5.0 Problem
An exception occurred when the values of a Map were extracted and used as a query parameter.
5.1 Exception
java.util.HashMap$Values cannot be cast to java.util.Set
5.2 Resolution
Map.values() returns a Collection view, not a Set; iterate the map and assemble the values yourself, or copy them into a concrete collection.
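A minimal demonstration of the failure and the fix:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MapValuesDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k1", "v1");
        map.put("k2", "v2");

        // map.values() is a Collection view (HashMap$Values), not a Set:
        Collection<String> values = map.values();
        System.out.println(values instanceof Set); // false
        // so (Set<String>) map.values() throws ClassCastException at runtime.

        // Copy the view into a concrete collection instead:
        List<String> asList = new ArrayList<>(values);
        Set<String> asSet = new HashSet<>(values);
        System.out.println(asList.size() + " " + asSet.size()); // 2 2
    }
}
```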
II. File download
0.0 Implementation
References:
http://meigesir.iteye.com/blog/1539358
http://www.cnblogs.com/ungshow/archive/2009/01/12/1374491.html
1.0 Problem
The request path could not be mapped.
1.1 Code
@RequestMapping("excelModelDown")
public void excelModelDown(
        // @RequestParam(defaultValue="", required=false)
        // with this annotation commented out, the request maps normally
        HttpServletResponse response
) {
    return;
}
// Redefining the response inside the implementation did not help;
// removing the @RequestParam annotation above was enough (it does not belong on HttpServletResponse).
2.0 Problem
A file placed under src/main/resources could not be found at runtime.
2.1 Resolution
Files under resources are packaged into the classes directory. Use
this.getClass().getClassLoader().getResource("")
to resolve the path of files under src/main/resources (this refers to the current class).
3.0 Problem
The bundled file has an English name, and the downloaded file should carry a Chinese name,
but after renaming, the download arrived as a file of unknown type.
Note:
response.addHeader("Content-Disposition", "attachment;filename=" + new String(EXCEL_MODEL_CHINESE_NAME.getBytes("gb2312"), "ISO8859-1") + "." + ext.toLowerCase());
If the file name is unchanged, setting "fileName = " + new String(currentFileName.getBytes()) is enough;
if the name changes, the header must carry the fully qualified name: file name + "." + extension.
Garbled download file names:
http://lj830723.iteye.com/blog/1415479
----------------------------------------------------
Move one pawn forward each day; no effort is ever wasted.
Walk to where the water ends, then sit and watch the clouds rise.
-----------------------------------------------------
http://172.16.204.118:8080/jr-dams-admin/quotaDefine/operationQuotaExcelModelDown
http://172.16.204.118:8080/jr-dams-admin/quotaDefine/operationQuotaExcelUpload
{"resultCode":10009,"resultMsg":"上传Excel文件失败 文件扩展名不正确,请确认文件是Excel类型文件,扩展名为.xlsx","data":null,"success":false}
Exceptions are reported in the format above whenever one occurs.
1. Test the import failure cases
Refactor and format the code
Form submission:
<bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- encoding; the default is ISO-8859-1 -->
    <property name="defaultEncoding" value="utf-8"></property>
    <!-- maximum upload size in bytes -->
    <property name="maxUploadSize" value="10485760000"></property>
    <!-- maximum bytes held in memory before writing to disk -->
    <property name="maxInMemorySize" value="40960"></property>
</bean>
@RequestParam (value="file") MultipartFile file
2. Log-related commands
3. Install a virtual machine
4. Linux basics
III. Upload
Using MultipartFile
http://blog.csdn.net/swingpyzf/article/details/20230865
Caused by: java.lang.IllegalArgumentException: Expected MultipartHttpServletRequest: is a MultipartResolver configured?
at org.springframework.util.Assert.notNull(Assert.java:112) ~[spring-core-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.resolveName(RequestParamMethodArgumentResolver.java:168) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.annotation.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:88) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:78) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:162) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:129) ~[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:775) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:705) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:965) ~[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
... 33 common frames omitted
http://blog.csdn.net/jiangyu1013/article/details/60758582
Expected MultipartHttpServletRequest: is a MultipartResolver configured
http://blog.csdn.net/jiangyu1013/article/details/60758582
The MultipartFile parameter receiving the upload is null
<bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- request encoding -->
    <property name="defaultEncoding" value="utf-8"></property>
    <!-- maximum upload size (bytes) -->
    <property name="maxUploadSize" value="50000000"></property>
    <!-- maximum bytes kept in memory before spilling to disk -->
    <property name="maxInMemorySize" value="1024"></property>
</bean>
<context:component-scan base-package="com.jrd.dams.admin">
    <context:exclude-filter type="regex"
        expression="com.jrd.dams.admin.controller.*" />
</context:component-scan>
Placed above the component scan, this raised a database connection error at startup:
appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop
After moving it, startup was normal.
http://www.cnblogs.com/songyunxinQQ529616136/p/6646070.html
The receiving parameter name must match the name of the file input in the HTML form.
// includes the file name and extension
String fileName = excelFile.getOriginalFilename();
// returns the form field name only — note this differs from File.getName()
String fieldName = excelFile.getName();
wb = new XSSFWorkbook(excelFile.toString());
Loading a MultipartFile into XSSFWorkbook this way fails: toString() is not a file path.
http://jingyan.baidu.com/article/11c17a2c073e12f446e39d38.html
Pass the stream instead:
wb = new XSSFWorkbook(excelFile.getInputStream());
------------------------------------------------------------------
zeromq
---------------------------------
java.lang.ClassNotFoundException: org.springframework.web.context.ContextLoaderListener
When Eclipse deploys the project, the Spring JARs are not included.
Check whether this project's WEB-INF under the Tomcat webapps directory contains a lib folder;
if it is missing, the JARs cannot be found at load time.
Solution: http://blog.csdn.net/tfy1332/article/details/46047473
----------------------------------
httpClient request returns 400
Encode the parameters with URLEncoder: URLEncoder.encode(value, "UTF-8");
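A minimal sketch; the (String, Charset) overload used here needs Java 10+, on older JDKs use URLEncoder.encode(value, "UTF-8") and handle the checked UnsupportedEncodingException:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeParam {
    public static void main(String[] args) {
        // Unencoded non-ASCII or reserved characters in a query string can
        // trigger HTTP 400; percent-encode each parameter value first.
        String encoded = URLEncoder.encode("汉字", StandardCharsets.UTF_8);
        System.out.println(encoded); // %E6%B1%89%E5%AD%97
    }
}
```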
----------------------------------
Connection pool exception
http://blog.csdn.net/wo8553456/article/details/40396401
--------------------------------