I have recently been learning the Lucene open-source search library and reading a few books. "Lucene in Action", for instance, is very hands-on and covers most of the operations Lucene involves, but some topics are only touched on in passing. My own level is limited and I have only written one small project, so these are just a few impressions:
1. The algorithm that goes from tokens to the binary index files, which ties into the index structure below.
2. The index file structure. In the abstract there are two pieces, the term dictionary and the posting lists; as concrete files they together form the inverted index. The term dictionary stores every term (the smallest indexing unit) of a field, typically as (term, document frequency, pointer), where the pointer leads to that term's document entries in the posting lists; it is usually organized as a B+ tree. Each posting list stores doc IDs, the term's frequency within each document, and the positions where the term occurs in it, usually as triples:
term (你好, 2, *p) → (doc1, 5, (2,4,7,8,13)) → (doc2, 3, (5,9,12))
term (第, 2, *p) → (doc1, 4, (1,6,9,15)) → (doc2, 3, (51,81,121))
term (分布筋, 2, *p) → (doc1, 5, (3,14,35,36,37)) → (doc2, 3, (52,82,122))
{ i.e. the term 你好 appears in documents 1 and 2, with frequencies 5 and 3, at positions 2,4,7,8,13 and 5,9,12 respectively }
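The structure above can be sketched in plain Java. This is a toy in-memory model of the abstract idea only (all names are my own, not Lucene's); Lucene's real format is a compressed binary term dictionary plus separate posting files.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.TreeMap;

// Toy inverted index: term -> list of (docId, freq, positions) postings.
public class ToyInvertedIndex {
    public static class Posting {
        public final int docId;
        public final List<Integer> positions = new ArrayList<>();
        Posting(int docId) { this.docId = docId; }
        public int freq() { return positions.size(); }  // freq == number of positions
    }

    // Term dictionary: a sorted map plays the role of the B+ tree.
    private final TreeMap<String, List<Posting>> dict = new TreeMap<>();

    public void addDocument(int docId, String[] tokens) {
        for (int pos = 0; pos < tokens.length; pos++) {
            List<Posting> postings =
                    dict.computeIfAbsent(tokens[pos], t -> new ArrayList<>());
            Posting last = postings.isEmpty() ? null : postings.get(postings.size() - 1);
            if (last == null || last.docId != docId) {
                last = new Posting(docId);   // first occurrence in this doc
                postings.add(last);
            }
            last.positions.add(pos);
        }
    }

    public List<Posting> postings(String term) {
        return dict.getOrDefault(term, Collections.emptyList());
    }

    public static void main(String[] args) {
        ToyInvertedIndex idx = new ToyInvertedIndex();
        idx.addDocument(1, new String[]{"hello", "world", "hello"});
        idx.addDocument(2, new String[]{"hello", "lucene"});
        for (Posting p : idx.postings("hello")) {
            System.out.println(p.docId + ":" + p.freq() + ":" + p.positions);
        }
        // prints 1:2:[0, 2] then 2:1:[0]
    }
}
```

The document frequency (the "2" in the dictionary entries above) is simply `postings(term).size()` here.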
3. The route from IndexWriter to TokenStream, and how incrementToken is invoked as the Analyzer produces tokens. Skimming the source, I did not actually find an analyzer.tokenStream call in the high-level classes; tokens appear to be processed by calling incrementToken, and the low-level interfaces and abstract base classes carry a large number of Attribute types, which I assume are obtained by lookup and processed in place. This area involves quite a lot of algorithms and data structures, and even granting that the attributes are fetched reflectively, the algorithm implementation is still not clear to me.
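The incrementToken contract itself is simple to mimic: advance by one token, mutate shared attribute state, return false at the end, so the caller reads a reusable attribute object rather than receiving a new token object per call. A simplified plain-Java sketch of that pattern (my own names, not Lucene's code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Mimics Lucene's TokenStream contract: callers loop on incrementToken()
// and read the current token out of a reusable attribute.
public class WhitespaceTokenStream {
    // Stand-in for CharTermAttribute: mutable, reused across tokens.
    public final StringBuilder termAttribute = new StringBuilder();
    private final Matcher matcher;

    public WhitespaceTokenStream(String text) {
        this.matcher = Pattern.compile("\\S+").matcher(text);
    }

    // Returns true and updates termAttribute if another token exists.
    public boolean incrementToken() {
        if (!matcher.find()) return false;
        termAttribute.setLength(0);            // reuse the buffer, no new object
        termAttribute.append(matcher.group());
        return true;
    }

    public static void main(String[] args) {
        WhitespaceTokenStream ts = new WhitespaceTokenStream("hello lucene world");
        while (ts.incrementToken()) {
            System.out.println(ts.termAttribute);  // prints hello, lucene, world
        }
    }
}
```

Reusing one attribute object per stream is the point: in the real API the indexer can consume millions of tokens without allocating per token.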
4. For querying, the book only covers the use and construction of simple queries and filters, which are essentially the same thing underneath. The converters between the two, ConstantScoreQuery and QueryWrapperFilter, act like a cast between Filter and Query; I have not understood them deeply. The book barely analyzes the query process itself, and other Lucene books do not either, though search-engine texts cover it at length. For phrase search Lucene offers PhraseQuery, but there are several possible implementations: building a dedicated phrase index, extending the posting lists with positions, or biword (two-word) indexing.
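Of those strategies, the extended-posting-list approach works by intersecting position lists: a two-word phrase matches in a document exactly when some position of the first term is followed immediately by a position of the second. A toy sketch under that assumption (hypothetical names, not Lucene's PhraseQuery internals):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Phrase matching over positional postings: "a b" matches in a document
// iff some position p of a has p+1 among b's positions.
public class PhraseMatcher {
    // first/second: positions of each term within one document
    public static boolean phraseInDoc(List<Integer> first, List<Integer> second) {
        Set<Integer> next = new HashSet<>(second);
        for (int p : first) {
            if (next.contains(p + 1)) return true;  // adjacent occurrence found
        }
        return false;
    }

    public static void main(String[] args) {
        // doc: "new york is in new jersey" -> new:{0,4}, york:{1}, jersey:{5}
        List<Integer> newPos = Arrays.asList(0, 4);
        System.out.println(phraseInDoc(newPos, Arrays.asList(1)));  // "new york"  -> true
        System.out.println(phraseInDoc(newPos, Arrays.asList(5)));  // "new jersey" -> true
        System.out.println(phraseInDoc(Arrays.asList(1), newPos));  // "york new"  -> false
    }
}
```

Longer phrases generalize by chaining the check term to term, and PhraseQuery's slop relaxes the `p + 1` adjacency to a small window.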
5. The scoring mechanism. The book goes straight to payloads, raw bytes injected into the index that influence scoring; through PayloadHelper you can give particular terms extra weight.
6. Extended features that see real use: hot backup, hit highlighting, using the Berkeley DB engine as index storage, shared index access, and suggestion lists.
To sum up the introductory Lucene knowledge:
Mix the various query types, and bring in filters where they fit; merge indexes and take hot backups; combine with a database engine such as Berkeley DB; parse documents down to text with Tika, or, where its parsers do not fit, use an underlying parser such as PDFBox directly (simpler in one sense, but more tedious overall); and on the search side add user-facing touches such as highlighting and suggestions that infer the terms a user probably wants from what they typed.
Plus some knowledge of search in general.
Below is a class using Lucene that exercises most of the operations covered above:
package search;

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.StringReader;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Collection;
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.MapFieldSelector;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.SnapshotDeletionPolicy;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleSpanFragmenter;
import org.apache.lucene.search.spell.LuceneDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

// Analyzer switched from StandardAnalyzer to SmartChineseAnalyzer.
public class Ramwriter implements Runnable {
    static RAMDirectory ram = null;
    static IndexWriter writer = null;
    static Document doc = null;
    static IndexWriter logwriter = null;
    static Directory dir = null;
    static IndexReader reader = null;
    static IndexSearcher searcher = null;
    static IndexSearcher sou_searcher = null;
    static int sum = 0;
    static Directory sou_dir = null;
    static TermQuery query = null;
    static Term term = null;
    static BufferedInputStream inBuff = null;
    static BufferedOutputStream outBuff = null;
    static boolean backbool;
    static File filelog = null;
    static FileWriter w = null;
    String path = this.getClass().getResource("/").getPath(); // base path

    // Initialize the in-RAM writer (done once per rotation).
    public boolean init() throws CorruptIndexException, LockObtainFailedException, IOException {
        ram = new RAMDirectory();
        writer = new IndexWriter(ram, new SmartChineseAnalyzer(Version.LUCENE_30),
                true, IndexWriter.MaxFieldLength.UNLIMITED);
        return true;
    }

    public void getfilewriter() throws IOException {
        filelog = new File(path + "indexdir/log.txt");
        w = new FileWriter(filelog, true);
    }

    public void closefile() throws IOException {
        if (w != null) {
            w.close();
        }
    }

    public boolean ram_close(Directory directory) throws CorruptIndexException, IOException {
        directory.close();
        return true;
    }

    public boolean writer_close(IndexWriter indexwriter) throws CorruptIndexException, IOException {
        indexwriter.close();
        return true;
    }

    // Synchronized document add; logs progress to indexdir/log.txt.
    public synchronized boolean add_doc(String type, String theme, String sender,
            String information) throws CorruptIndexException, IOException {
        if (w == null) {
            getfilewriter();
        }
        boolean bool = true;
        try {
            if (ram == null) {
                init();
                w.write("no RAM index yet: writer restarted, " + (sum + 1) + " ok\n");
            }
            Field field = null;
            try {
                sum++;
                doc = new Document();
                synchronized (this) {
                    doc.add(new Field("id", String.valueOf(sum), Field.Store.YES, Field.Index.NOT_ANALYZED));
                    doc.add(new Field("type", type, Field.Store.YES, Field.Index.NOT_ANALYZED));
                    doc.add(new Field("sender", sender, Field.Store.YES, Field.Index.ANALYZED));
                    field = new Field("theme", theme, Field.Store.YES, Field.Index.ANALYZED);
                    field.setBoost(1.2f);
                    doc.add(field);
                    doc.add(new Field("information", information, Field.Store.YES, Field.Index.ANALYZED));
                    doc.add(new Field("date", (new SimpleDateFormat("y/MM/dd HH:mm:ss"))
                            .format(Calendar.getInstance().getTime()),
                            Field.Store.YES, Field.Index.NOT_ANALYZED));
                    writer.addDocument(doc);
                    // Rotation: every 100 docs, flush RAM to disk.
                    if ((sum % 100) == 0) {
                        w.write(sum + " documents created and written to disk\n");
                        logrotate(type);
                    }
                }
            } catch (Exception e) {
                bool = false;
                w.write("write error:" + e.toString() + "\n");
            }
        } catch (Exception e) {
            w.write("adddoc:" + e.toString() + "\n");
        } finally {
            w.flush();
        }
        if (sum > 10000000) {
            sum = 0;
        }
        return bool;
    }

    // Rotation: every 100 adds, merge the RAM index into the on-disk segments,
    // back up if due, and rebuild the suggestion index.
    public synchronized boolean logrotate(String type) throws IOException {
        if (w == null) {
            getfilewriter();
        }
        boolean bool = true;
        boolean bool_file = false;
        try {
            if (reader != null) {
                reader.close(); // close the near-real-time reader and its cache
            }
            writer.close(); // close the RAM writer before merging
            System.out.println("total " + sum); // cumulative doc count
            File file = new File(path + "indexdir/" + type); // per-type index directory
            if (!file.exists()) {
                file.mkdir();
                System.out.println(file.getName() + ":" + file.getAbsolutePath());
                bool_file = true;
            } else {
                System.out.println("already written to disk");
                bool_file = false;
            }
            // Merge the RAM index into the on-disk index.
            dir = FSDirectory.open(file);
            logwriter = new IndexWriter(dir, new SmartChineseAnalyzer(Version.LUCENE_30),
                    bool_file, IndexWriter.MaxFieldLength.UNLIMITED);
            logwriter.addIndexesNoOptimize(ram);
            w.write(sum + " documents have been written to disk\n");
            logwriter.setMergeFactor(3); // simple policy: merge down to few segments
            logwriter.optimize(5);
            logwriter.close();
            // Hot backup. Not strictly necessary; scheduled for late evening.
            SimpleDateFormat simple = new SimpleDateFormat("Hmmss");
            String backuptime = simple.format(Calendar.getInstance().getTime());
            int inttime = Integer.parseInt(backuptime);
            if (inttime > 220000 && inttime <= 230000) {
                backbool = true;
            } else {
                backbool = false;
            }
            if (inttime > 230000 && inttime < 240000 && backbool) {
                w.write("have backup before\n");
                backup(type);
                w.write("have backup after\n");
                backbool = false;
            } else {
                w.write("time is not 24:0:0\n");
            }
            ram.close();
            dir.close();
            // Build / refresh the suggestion index.
            w.write("points:create points index before\n");
            setspellindex(type);
            w.write("points:create points index ok\n");
            init();
        } catch (Exception e) {
            w.write("logrotate:" + e.toString() + "\n");
            System.out.println(e.toString());
        } finally {
            w.flush();
        }
        return bool;
    }

    // Delete the previous snapshot first, then copy.
    public void backup(String type) throws CorruptIndexException, LockObtainFailedException, IOException {
        if (w == null) {
            getfilewriter();
        }
        IndexDeletionPolicy policy = new KeepOnlyLastCommitDeletionPolicy();
        SnapshotDeletionPolicy snapshotter = new SnapshotDeletionPolicy(policy);
        File file_copy = new File(path + "indexdir/" + type + "copy"); // backup directory
        if (!file_copy.exists()) {
            file_copy.mkdir();
            System.out.println(file_copy.getName() + ":" + file_copy.getAbsolutePath());
            w.write("back:first backup\n");
        } else {
            delefiles(file_copy); // wipe the old copy completely
            file_copy.mkdir();
            w.write("back:not first time backup\n");
        }
        logwriter = new IndexWriter(dir, new SmartChineseAnalyzer(Version.LUCENE_30),
                snapshotter, IndexWriter.MaxFieldLength.UNLIMITED);
        File Source_file = null;
        File dis_file = null;
        try {
            IndexCommit commit = snapshotter.snapshot();
            Collection<String> fileNames = commit.getFileNames();
            Iterator iter = fileNames.iterator();
            String copypath = null;
            while (iter.hasNext()) {
                copypath = (String) iter.next();
                Source_file = new File(path + "indexdir/" + type + "/" + copypath); // source file
                dis_file = new File(path + "indexdir/" + type + "copy" + "/" + copypath); // target file
                copyFile(Source_file, dis_file);
                w.write("backup:is creating snapshot " + copypath + "\n");
            }
            w.write("backup:snapshot ok\n");
        } catch (Exception e) {
            if (w == null) {
                getfilewriter();
            } else {
                w.write("backup:" + e.toString() + "\n");
            }
        } finally {
            snapshotter.release();
            logwriter.optimize();
            logwriter.setMergeFactor(3);
            logwriter.close();
            closefile();
            getfilewriter();
        }
    }

    public static boolean delefiles(File file) {
        boolean bools = true;
        try {
            File files[] = file.listFiles();
            for (File filesingle : files) {
                filesingle.delete();
            }
        } catch (Exception e) {
            System.out.println(e.toString());
            bools = false;
        }
        return bools;
    }

    // Buffered file copy.
    public static void copyFile(File sourceFile, File targetFile) throws IOException {
        try {
            inBuff = new BufferedInputStream(new FileInputStream(sourceFile));
            outBuff = new BufferedOutputStream(new FileOutputStream(targetFile));
            byte[] b = new byte[1024 * 5];
            int len;
            while ((len = inBuff.read(b)) != -1) {
                outBuff.write(b, 0, len);
            }
            outBuff.flush();
        } finally {
            if (inBuff != null)
                inBuff.close();
            if (outBuff != null)
                outBuff.close();
        }
    }

    // Near-real-time read of the RAM index; the reader is left open.
    // Known weakness: when the writer is reopened, grabbing the reader can fail,
    // so this needs synchronization.
    public synchronized CopyOnWriteArrayList<String> get_theme_content(String type, String theme)
            throws IOException {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<String>();
        if (ram == null) {
            init();
        }
        if (sum % 5 != 0) {
            list = null;
        } else {
            int i = 0;
            if (w == null) {
                getfilewriter();
            }
            reader = writer.getReader();
            searcher = new IndexSearcher(reader);
            MapFieldSelector s = new MapFieldSelector("sender");
            MatchAllDocsQuery all_query = new MatchAllDocsQuery();
            TopDocs tops = searcher.search(all_query, null, 20,
                    new Sort(new SortField("date", SortField.STRING, true)));
            for (ScoreDoc score : tops.scoreDocs) {
                i++;
                doc = searcher.doc(score.doc);
                list.add(doc.get("information") + ":" + doc.get("date") + "\n"
                        + " " + doc.get("sender"));
            }
            w.write("content:write out " + i + " ok!\n");
            w.flush();
        }
        return list;
    }

    // Close only the near-real-time reader; optional.
    public boolean get_theme_content_close() throws IOException {
        searcher.close();
        reader.close();
        return true;
    }

    // Query older records for a single item.
    public synchronized CopyOnWriteArrayList<String> get_theme_history(String type, String theme)
            throws IOException {
        int i = 0;
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<String>();
        if (w == null) {
            getfilewriter();
        }
        File file = new File(path + "indexdir/" + type);
        if (file.exists()) {
            if (theme == null || theme.equals("")) {
                theme = "*:*";
            }
            sou_dir = FSDirectory.open(file);
            sou_searcher = new IndexSearcher(sou_dir);
            term = new Term("theme", theme);
            query = new TermQuery(term);
            TopDocs tops = sou_searcher.search(query, 1000);
            for (ScoreDoc score : tops.scoreDocs) {
                i++;
                doc = sou_searcher.doc(score.doc);
                list.add(String.valueOf(i) + ":" + doc.get("sender") + ":"
                        + doc.get("date") + ":" + doc.get("information"));
            }
            w.write("history:write out " + i + " ok!\n");
            w.flush();
        } else {
            w.write("no content\n");
        }
        return list;
    }

    // Search by content across type/theme/sender, with hit highlighting.
    public synchronized CopyOnWriteArrayList<String> searcher_interest(String type, String theme,
            String sender, String information) throws IOException, ParseException,
            java.text.ParseException, InvalidTokenOffsetsException {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<String>();
        int i = 0;
        if (information == null || information.equals("")) {
            information = "*:*";
        }
        if (w == null) {
            getfilewriter();
        }
        File file = new File(path + "indexdir/" + type);
        sou_dir = FSDirectory.open(file);
        sou_searcher = new IndexSearcher(sou_dir);
        QueryParser parser = new QueryParser(Version.LUCENE_30, "information",
                new SmartChineseAnalyzer(Version.LUCENE_30));
        Query query_infor = parser.parse(information); // parse the content query
        TermQuery theme_query = new TermQuery(new Term("theme", theme)); // item name
        FuzzyQuery sender_query = new FuzzyQuery(new Term("sender", sender)); // sender, fuzzy
        WildcardQuery wildquery = new WildcardQuery(new Term("sender", "?" + sender + "*"));
        BooleanQuery sender_qeruy = new BooleanQuery();
        sender_qeruy.add(sender_query, Occur.SHOULD);
        sender_qeruy.add(wildquery, Occur.SHOULD);
        BooleanQuery bool_query = new BooleanQuery();
        bool_query.add(theme_query, Occur.MUST);
        bool_query.add(query_infor, Occur.SHOULD);
        bool_query.add(sender_qeruy, Occur.SHOULD);
        TokenStream stream = null;
        // A time limit would fit here (TimeLimitingCollector), but is better handled client-side.
        TopDocs tops = sou_searcher.search(bool_query, null, 500,
                new Sort(new SortField("date", SortField.STRING, true)));
        String text = null;
        TermQuery infor_query = new TermQuery(new Term("infor", information));
        for (ScoreDoc score : tops.scoreDocs) {
            i++;
            doc = sou_searcher.doc(score.doc);
            // Highlight the matched fragment.
            text = doc.get("information");
            stream = (new SimpleAnalyzer()).tokenStream("infor", new StringReader(text));
            QueryScorer queryscore = new QueryScorer(infor_query, "infor");
            Highlighter lighter = new Highlighter(queryscore);
            SimpleSpanFragmenter fragmenter = new SimpleSpanFragmenter(queryscore);
            lighter.setTextFragmenter(fragmenter);
            list.add(String.valueOf(i) + ":" + doc.get("sender") + ":" + doc.get("date")
                    + ":" + lighter.getBestFragment(stream, text));
        }
        w.write("interest:write out " + i + " ok!\n");
        w.flush();
        return list;
    }

    // Build the spell/suggestion index from the "theme" field.
    public synchronized String setspellindex(String type) throws IOException {
        File file = new File(path + "points/" + type);
        Directory dir = FSDirectory.open(file); // target index
        SpellChecker spell = new SpellChecker(dir);
        Directory dir2 = FSDirectory.open(new File(path + "indexdir/" + type)); // source index
        IndexReader reader = IndexReader.open(dir2);
        LuceneDictionary lucene = new LuceneDictionary(reader, "theme");
        spell.indexDictionary(lucene);
        dir.close();
        dir2.close();
        return path;
    }

    /* Test data:
    public static void main(String args[]) throws CorruptIndexException, IOException,
            ParseException, java.text.ParseException, InvalidTokenOffsetsException {
        Ramwriter r = new Ramwriter();
        for (int i = 1; i <= 30; i++) {
            r.add_doc("Digital", "酱油" + i, "sharesss" + i, "sharefssssly" + i);
            r.add_doc("Digital", "麻辣烫", "sharesss", "sharefssssly");
            r.add_doc("Digital", "麻辣酱", "sharesss", "sharefssssly");
            r.add_doc("Digital", "麻辣火锅", "sharesss", "sharefssssly");
        }
        CopyOnWriteArrayList<String> list =
                r.searcher_interest("Digital", "share", "sharesss", "sharefssssly");
        Iterator iter = list.iterator();
        while (iter.hasNext()) {
            System.out.println(iter.next());
        }
        r.searcher_interest("Digital", "share", "", "");
    }
    */

    public void run() {
        // TODO Auto-generated method stub
    }
}