Applying Machine Learning to the Task of Generating Search Queries
Abstract
In this paper we investigate two modifications of recurrent neural networks, Long Short-Term Memory (LSTM) networks and networks with Gated Recurrent Units (GRU), each extended with an attention mechanism, as well as the Transformer model, applied to the task of generating search engine queries. OpenAI's GPT-2, trained on user queries, was used as the Transformer. Latent semantic analysis was carried out to identify semantic similarity between the corpus of user queries and the queries generated by the neural networks: the corpus was converted into a bag-of-words format, the TF-IDF model was applied to it, and a singular value decomposition was performed. Semantic similarity was calculated using the cosine measure. In addition, for a more complete evaluation of the applicability of the models to the task, an expert analysis was carried out to assess the coherence of words in the artificially created queries.
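To make the evaluation procedure concrete, the following is a minimal sketch of the pipeline described above: bag of words, TF-IDF weighting, singular value decomposition (latent semantic analysis), and the cosine measure. The abstract does not name the tooling, so scikit-learn is an assumption here, and the two query corpora are hypothetical examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpora: real user queries and model-generated queries.
user_queries = ["cheap flights to new york", "best pizza places near me"]
generated_queries = ["flights new york cheap tickets", "weather in london today"]

# Bag of words with TF-IDF weighting over a shared vocabulary,
# so both corpora are embedded in the same vector space.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(user_queries + generated_queries)

# Truncated SVD of the TF-IDF matrix yields the latent semantic space.
svd = TruncatedSVD(n_components=2, random_state=0)
latent = svd.fit_transform(tfidf)

# Cosine measure between each generated query and each user query.
n = len(user_queries)
print(cosine_similarity(latent[n:], latent[:n]))

A high cosine score for a generated query indicates that it lies close to some real user query in the latent space; this is the quantitative measure that the expert analysis of word coherence complements.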
By submitting an article for publication in the Russian Digital Libraries Journal (RDLJ), the authors automatically consent to grant Kazan (Volga) Federal University (KFU) a limited license to use the material (provided, of course, that the article is accepted for publication). This means that KFU has the right to publish the article in the next issue of the journal (on the website or in printed form), to reprint it in the RDLJ archives on CDs, and to include it in any information system or database produced by KFU.
All copyrighted material is published in RDLJ with the consent of the authors. If any author objects to the publication of their material on this site, the material can be removed upon written notification to the Editors.
Documents published in RDLJ are protected by copyright, and all rights are reserved by the authors. Authors independently monitor compliance with their rights to reproduce or translate their papers published in the journal. If material published in RDLJ is reprinted with permission by another publisher or translated into another language, a reference to the original publication must be given.
By submitting an article for publication in RDLJ, authors should take into account that publication on the Internet, on the one hand, provides unique opportunities for access to the content but, on the other hand, is a new form of information exchange in the global information society, in which authors and publishers are not always protected against unauthorized copying or other use of copyrighted materials.
RDLJ is protected by copyright. When using materials from the journal, the URL index.phtml?page=elbib/rus/journal must be indicated. No changes, additions, or edits to the author's text are allowed. Copying of individual fragments of articles from the journal is permitted: others may distribute, remix, adapt, and build upon an article, even commercially, as long as they credit the article as the original creation.
Requests for the right to reproduce or use any of the materials published in RDLJ should be addressed to the Editor-in-Chief A.M. Elizarov at the following address: amelizarov@gmail.com.
The publishers of RDLJ are not responsible for the views expressed in published articles.
We suggest that authors download the copyright agreement on the transfer of non-exclusive rights to use the work from this page, sign it, and send a scanned copy to the journal publisher's e-mail address.