To get the best results from your AI tool, think carefully about how you construct your prompt.
The information, sentences, or questions that you enter into a genAI tool (“prompts”) are a big influence on the quality of outputs you receive. After you enter a prompt, the AI model analyzes your input and generates a response based on the patterns it has learned through its training. More descriptive prompts can improve the quality of the outputs.
See the CLEAR Prompt framework for using prompts with AI at the end of this guide, or try Getting started with prompts for genAI.
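As a rough illustration of why more descriptive prompts tend to produce better outputs, the sketch below assembles a prompt from labelled components (role, task, context, format). The component names are illustrative only, not part of any official framework, and any genAI tool will accept the resulting text:

```python
def build_prompt(role, task, context, output_format):
    """Assemble a descriptive genAI prompt from labelled components."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {output_format}"
    )

# A vague prompt leaves the model guessing about audience and format...
vague = "Tell me about heat waves and health."

# ...while a descriptive prompt states role, task, context and format.
descriptive = build_prompt(
    role="a health sciences librarian",
    task="summarise recent evidence on heat waves and cardiovascular health",
    context="the summary is for first-year nursing students",
    output_format="five bullet points with plain-language definitions",
)
print(descriptive)
```

Pasting the descriptive version into a genAI tool will generally yield a more focused, appropriately pitched answer than the vague one.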
Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI. It recently announced Semantic Reader, an augmented reader that helps make research articles more accessible and contextual.
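Semantic Scholar also exposes a free public Graph API, which several of the mapping tools below draw on as a data source. A minimal sketch of building a paper-search query is shown here; the endpoint and parameter names reflect the API documentation at the time of writing, so check the current API reference before relying on them:

```python
from urllib.parse import urlencode

# Semantic Scholar Graph API paper-search endpoint (free, rate-limited).
BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_url(query, fields=("title", "year", "abstract"), limit=10):
    """Build a paper-search URL; no network request is sent here."""
    params = {"query": query, "fields": ",".join(fields), "limit": limit}
    return f"{BASE}?{urlencode(params)}"

url = search_url("large language models literature review", limit=5)
print(url)

# To actually fetch results, use any HTTP client, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```

The returned JSON includes a list of matching papers with the requested fields, which can then be screened manually or saved to a reference manager.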
Open Knowledge Maps: an AI-based visual interface that can help increase the visibility of research findings. It searches and visualizes content from PubMed and BASE, based on subject relevance. Open Knowledge Maps provides an instant overview of a topic by showing the main areas at a glance, with journal articles and documents related to each area, making it easy to identify useful, pertinent information. Free of charge.
Connected Papers: Connected Papers is a unique, visual tool to help researchers and applied scientists find and explore papers relevant to their field of work.
Keenious: This tool analyses the text you have already written and suggests relevant articles.
Research Rabbit: a free, online “citation-based literature mapping tool”. Start with one or more papers (called seed papers), and the app will find further papers relevant to the topic or authors represented by those seeds. The software scans publicly available (open access) sources online and selects papers based on their similarity. Data sources include Semantic Scholar and PubMed. Results can be saved and exported to Zotero.
Remember: these are all evolving tools. They tend to scrape data from similar open research sources, and they are subject to hallucinations and to issues around security, copyright and sustainability. Always evaluate your results and use these tools in conjunction with a peer-reviewed database such as Medline, Scopus or Web of Science.
LitMaps: recommends papers based on an initial starting or 'seed' article. Search for your topic, select an article, and LitMaps will find the top 100 relevant articles based on connections and citations. Data sources include Semantic Scholar, OpenAlex, Crossref, PubMed and the preprint server medRxiv. Results can be saved and used with the Zotero reference manager. Note: a subscription is required to access advanced features.
Elicit is a secondary research assistant that uses language models like GPT to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is the literature review. If you ask a question, or upload a paper found using other means, Elicit will show other relevant or related papers along with summaries of key information about those papers in an easy-to-use table. Results can be saved and exported to Zotero. The free account comes with only 5,000 "credits" for a short trial period.
SciSpace is a similar research assistant for searching, reading and summarizing papers.
Selecting the right AI tools for academic and research writing is a crucial decision that can significantly impact the effectiveness and quality of your work. With a myriad of options available, it’s important to consider several key factors to ensure that the tool you choose aligns well with your specific needs and academic standards. Think about:
Information Search:
Google Scholar: a specialized search engine for scholarly articles, conference papers and patents. It provides an overview of research material reaching back to the mid-20th century. It relies on keyword search and matching, presenting users with a list of relevant academic content.
ChatGPT and other genAI tools employ a conversation-based approach built on probabilistic large language models. Users can pose queries in natural language, and the tool aims to understand user intent and provide organized responses in complete sentences.
User Experience:
Google Scholar: While Google Scholar provides comprehensive results, users must filter through them individually, which can be time-consuming. Some self-citing occurs, and citations and impact metrics can be inflated. It is not a 'one-stop-shop' solution and does not index every published peer-reviewed journal. Full-text access can also be an issue.
ChatGPT: offers a user-friendly and intuitive search experience. It understands context and provides human-like answers, saving users time. However, it can hallucinate in its answers, and it does not always draw on academically appropriate resources. Prompt engineering can help elicit more detailed answers from an LLM.
Fact-Checking and Trust:
Google Scholar: Google Scholar focuses on academic content but does not fact-check information. Users need to evaluate their results and be aware of retracted papers and predatory publishers, as well as issues around currency, accuracy and author expertise.
ChatGPT: handles straightforward questions and general queries but falls short in fact-checking tasks. Again, users need to evaluate their results and be aware of retractions and predatory publishers, as well as issues around currency, accuracy and author expertise.
In summary, Google Scholar is a specialized source for scholarly content, while ChatGPT offers a conversational search experience useful for summarizing non-specialised general information.
This work is licensed under CC BY-NC-SA 4.0