Google Develops Med-Gemini AI Models That Specialize in Medicine

Google has recently introduced a new family of artificial intelligence models focused on medicine. Named Med-Gemini, these AI models are not available to the public, but Google has published a pre-print version of the research paper describing their capabilities and methodologies. Google claims the new models surpass GPT-4 in benchmark testing, and their most notable feature is a long-context ability to process and analyze health records and research papers.

Med-Gemini Research Paper

The research paper is currently in the pre-print stage and has been published on an open-access online repository of scholarly papers. Jeff Dean, Chief Scientist at Google DeepMind and Google Research, shared a post on the microblogging platform X, saying, “I’m very excited about the possibilities of these models to help clinicians deliver better care, as well as to help patients better understand their medical conditions. AI for healthcare is going to be one of the most impactful application domains for AI, in my opinion.”

The Med-Gemini research models have been built on top of the Gemini 1.0 and Gemini 1.5 LLMs. There are four models in total: Med-Gemini-S 1.0, Med-Gemini-M 1.0, Med-Gemini-L 1.0, and Med-Gemini-M 1.5. All four support text, video, and image capabilities and are integrated with web search, which Google says has been improved through self-training to make the models “more factually accurate, reliable and nuanced” when answering complex clinical reasoning tasks.

Google claims the AI models have been fine-tuned for better performance on long-context processing. This should translate into more accurate, pinpointed answers from the chatbot even when a question is not perfectly framed or requires working through a lengthy documented medical record.

According to Google’s data, the Med-Gemini AI models have outperformed OpenAI’s GPT-4 models on text-based reasoning tasks from the GeneTuring dataset. Med-Gemini-L 1.0 scored 91.1% accuracy on MedQA, beating its predecessor Med-PaLM 2 by 4.5%.

These new AI models are not available for public use or any kind of beta testing as of now. Google is expected to refine them further before releasing them to the public.
