AI Tools and Resources for Biomedical Research

Definitions

  • Agentic AI: An emerging paradigm in artificial intelligence referring to autonomous systems designed to pursue complex goals with minimal human intervention. Unlike traditional AI, which depends on structured instructions and close oversight, agentic AI demonstrates adaptability, advanced decision-making capabilities, and self-sufficiency, enabling it to operate dynamically in evolving environments. D. B. Acharya et al., "Agentic AI: Autonomous Intelligence for Complex Goals—A Comprehensive Survey," IEEE Access, vol. 13, pp. 18912–18936, 2025.
  • Artificial Intelligence: A term coined by Stanford professor emeritus John McCarthy in 1955, who defined it as “the science and engineering of making intelligent machines”. Much research has had humans program machines to behave in a clever way, like playing chess, but today the emphasis is on machines that can learn, at least somewhat like human beings do. Stanford University Human-Centered Artificial Intelligence
  • Chatbot: A software application or web interface that mimics human conversation through text or voice interactions. Stanford Medicine Magazine
  • Deep Learning: The use of large multi-layer (artificial) neural networks that compute with continuous (real number) representations, a little like the hierarchically organized neurons in human brains. It is currently the most successful ML approach, usable for all types of ML, with better generalization from small data and better scaling to big data and compute budgets. Stanford University Human-Centered Artificial Intelligence
  • Generative AI: AI models that learn the patterns and structure of their input training data (text, images or other media) and then generate new data having similar characteristics or perform tasks they were never trained to do. Stanford Medicine Magazine
  • Large Language Models (LLMs): A type of AI model that’s trained on massive amounts of data and can be easily adapted to perform a wide range of tasks. Some examples are the models that power chatbots like OpenAI’s ChatGPT and Google’s Bard. Stanford Medicine Magazine
  • Natural Language Processing: A branch of AI that uses machine learning to process and interpret text and data. It represents the ability of a program to understand human language as it is spoken and written. Stanford Medicine Magazine
  • Predictive AI: Artificial intelligence systems that utilize statistical analysis and machine learning algorithms to make predictions about potential future outcomes, causation, risk exposure, and more. Carnegie Council for Ethics in International Affairs
  • Training Data: Labeled data used in the training process to "teach" an AI model or algorithm to make a decision. For example, with an AI model for self-driving vehicles, training data may include images and videos in which traffic signs, pedestrians, bicyclists, vehicles, and so on are labeled. Stanford Teaching Commons
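
To make the idea of labeled training data concrete, the following is a minimal, hypothetical sketch in Python (it assumes the scikit-learn library is installed). The feature values, labels, and “risk” framing are invented purely for illustration and do not come from any of the sources cited above.

```python
# Toy illustration of labeled training data (illustrative only; assumes
# scikit-learn is installed). Each training example pairs input features
# with a label, and the model learns the mapping from those labeled pairs.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age, resting heart rate]; labels: 0 = low risk, 1 = high risk.
X_train = [[34, 62], [41, 70], [58, 88], [63, 95], [29, 58], [71, 102]]
y_train = [0, 0, 1, 1, 0, 1]  # the labels that "teach" the model

model = LogisticRegression()
model.fit(X_train, y_train)   # learn from the labeled examples

# Ask the trained model to label a new, unseen example.
print(model.predict([[52, 84]]))  # likely [1], i.e., predicted high risk
```

Each row of X_train is one example and each entry of y_train is its label; the model learns only from these labeled pairs, just as a self-driving model learns from images in which signs and pedestrians have been labeled.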

History

  • The Council of Europe’s AI web page provides a concise history of AI. 
  • This AI site from Harvard University is also worth a look.
  • Forbes has a history of “AI” that begins in 1308! 
  • In the history of AI from the 1940s to the present there have been many significant contributors; we will mention three:
    • Alan Turing: The English polymath, famous for breaking the German Enigma code in WWII, later wrote “Computing Machinery and Intelligence,” in which he proposed the Imitation Game, later called the Turing Test: a machine is judged intelligent if an observer cannot determine which of two players is the machine based on text communication with them.
    • John McCarthy came to Stanford in 1962 after co-coining the term artificial intelligence in a 1955 proposal for the seminal meeting on AI held at Dartmouth in the summer of 1956. He retired from Stanford in 2000 and, during his career, made major contributions to AI.
    • Arthur Samuel coined the term machine learning in 1959 and, after a career in academia and industry, retired from IBM and joined the Stanford faculty in 1966. He was an innovator in computer gaming and developed a self-learning checkers program.
  • This 13-minute video reviews 60 years of AI research at Stanford.    
  • From the Association for Computing Machinery website:
    • “The A.M. Turing Award, the Association for Computing Machinery’s (ACM) most prestigious technical award, is given for major contributions of lasting importance to computing.
    • The list of winners from the award's creation in 1966 to present is found at the link below. It contains biographical information, a description of their accomplishments, straightforward explanations of their fields of specialization, and text or video of their A. M. Turing Award Lecture.” 

Classification

There are several widely cited schemes for classifying AI. They are based on subsets of the larger term “artificial intelligence,” vary a bit, and contain more or fewer categories; four to seven subdivisions is the typical range.

The following list is synthesized from several sources:

  • Artificial Intelligence: the overarching concept in which a machine performs a task that normally would require human intelligence.
  • Machine learning: a subset of artificial intelligence in which the machine learns from experience or training data to improve its performance without specifically being programmed to do so.
  • Deep learning: a subset of machine learning in which the structures of the neural network comprising the artificial intelligence are more complex (the network has more layers).
  • Neural networks: circuit structures inspired by brain anatomy, built from layers of interconnected artificial neurons that loosely mimic how biological neurons connect.
  • Natural language processing: Using machine learning and deep learning, an artificial intelligence can understand, manipulate, and respond to human language.
  • Robotics: the subset of artificial intelligence systems deployed to interact with the physical world.
  • Genetic algorithms: optimization methods based on “survival of the fittest.” Among the many candidate ways to solve a problem, some work and others fail; the most “fit” candidates survive into the next generation (see the brief sketch after this list).
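
To illustrate the “survival of the fittest” idea mentioned above, here is a minimal, hypothetical genetic algorithm sketched in Python. The toy goal (evolving a bit string toward all ones), the population size, the mutation rate, and the fitness function are all illustrative assumptions rather than a definitive implementation.

```python
# Minimal genetic algorithm sketch (illustrative only).
# Toy goal: evolve a bit string toward all ones ("OneMax"), a stand-in for
# any problem where candidate solutions can be scored by a fitness function.
import random

GENOME_LENGTH = 20     # assumed length of each candidate solution
POPULATION_SIZE = 30   # assumed number of candidates per generation
MUTATION_RATE = 0.02   # assumed probability of flipping each bit
GENERATIONS = 50

def fitness(genome):
    """Score a candidate; here, simply the number of ones it contains."""
    return sum(genome)

def select(population):
    """Tournament selection: the fitter of two random candidates survives."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    """Single-point crossover: splice two parents into one child."""
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent1[:point] + parent2[point:]

def mutate(genome):
    """Randomly flip bits so the population keeps exploring new solutions."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def run():
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for generation in range(1, GENERATIONS + 1):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POPULATION_SIZE)]
        if generation % 10 == 0:
            best = max(fitness(genome) for genome in population)
            print(f"generation {generation}: best fitness = {best} / {GENOME_LENGTH}")

if __name__ == "__main__":
    run()
```

Selection keeps the fitter of two randomly chosen candidates, crossover recombines two parents into a child, and mutation occasionally flips bits; over successive generations the surviving candidates tend to score better on the fitness function.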