Google I/O 2023: PaLM 2 Announced With Gecko, Otter, Bison, Unicorn & Med-PaLM 2, Gemini & DeepMind #256
FurkanGozukara
announced in
Tutorials
Full tutorial: https://www.youtube.com/watch?v=1UvUjTaJRz0
In today's #GoogleIO2023 keynote, we got a sneak peek into the future of AI technology. Google unveiled its latest AI model, PaLM 2, and showcased how it improves knowledge and learning, boosts creativity and productivity, enables developers and businesses to build transformative products and services, and, importantly, how Google is working to ensure the responsible use of AI.
PaLM 2 is an advanced model that offers excellent foundational capabilities and is designed to be highly efficient. Google is launching over 25 products and features powered by PaLM 2. The new model is stronger in logic and reasoning, trained on a broad range of scientific and mathematical topics, and understands and generates nuanced results in over 100 languages.
Special mention goes to the specialized versions of PaLM 2, such as Sec-PaLM for security use cases and Med-PaLM 2 for medical applications. Med-PaLM 2, for instance, has been demonstrated to perform at expert level on medical licensing exam-style questions.
Google also announced its next-generation foundation model, Gemini, which is currently in training. The company is investing heavily in AI responsibility, including tools to identify synthetically generated content.
Finally, Google showcased Bard, its conversational AI experiment, now fully running on PaLM 2.
Stay tuned for more updates and let's delve into the future of AI together.
Source Google. Full Event (10 May 2023)⤵️
https://www.youtube.com/watch?v=cNfINi5CNbY
Our Discord server⤵️
https://bit.ly/SECoursesDiscord
If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on Patreon 🥰⤵️
https://www.patreon.com/SECourses
Technology & Science: News, Tips, Tutorials, Tricks, Best Applications, Guides, Reviews⤵️
https://www.youtube.com/playlist?list=PL_pbwdIyffsnkay6X91BWb9rrfLATUMr3
Playlist of StableDiffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img⤵️
https://www.youtube.com/playlist?list=PL_pbwdIyffsmclLl0O144nQRnezKlNdx3
#Google #GoogleIO #ArtificialIntelligence #Palm2 #AI #DeepMind #MachineLearning #AIResponsibility #SecPalm #MedPalm #Bard #Gemini
Video Transcription
00:00:00 Looking ahead, making AI helpful for everyone is the most profound way we will advance our mission.
00:00:08 We are doing this in four important ways.
00:00:11 First, by improving your knowledge and learning and deepening your understanding of the world.
00:00:17 Second, by boosting creativity and productivity so you can express yourself and get things done.
00:00:24 Third, by enabling developers and businesses to build their
00:00:27 own transformative products and services. And finally, by building and deploying AI
00:00:33 responsibly so that everyone can benefit equally. We are so excited by the opportunities ahead.
00:00:40 Our ability to make AI helpful for everyone relies on continuously advancing our foundation models.
00:00:49 So I want to take a moment to share how we are approaching them.
00:00:54 Last year, you heard us talk about Palm, which led to many improvements across our products.
00:01:00 Today, we are ready to announce our latest Palm model in production,
00:01:05 Palm 2. Palm
00:01:13 2 builds on our fundamental research and our latest infrastructure.
00:01:18 It's highly capable at a wide range of tasks and easy to deploy.
00:01:24 We are announcing over 25 products and features powered by Palm 2 today.
00:01:29 Palm 2 models deliver excellent foundational capabilities across a wide range of sizes.
00:01:35 We have affectionately named them Gecko, Otter, Bison and Unicorn.
00:01:42 Gecko is so lightweight that it can work on mobile devices.
00:01:47 Fast enough for great interactive applications on device, even when offline.
00:01:53 Palm 2 models are stronger in logic and reasoning,
00:01:56 thanks to broad training on scientific and mathematical topics.
00:02:01 It's also trained on multilingual text spanning over 100 languages,
00:02:06 so it understands and generates nuanced results. Combined with powerful coding capabilities,
00:02:13 Palm 2 can also help developers collaborating around the world.
00:02:17 Let's look at this example. Let's say you're working with a
00:02:21 colleague in Seoul and you're debugging code. You can ask it to fix a bug and help out your
00:02:28 teammate by adding comments in Korean to the code. It first recognizes the code is recursive,
00:02:35 suggests a fix, and even explains the reasoning behind the fix.
00:02:40 And as you can see, it added comments in Korean, just like you asked.
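The demo's actual code was not published, but the kind of fix described above can be sketched with a small, hypothetical example: a recursive function with a broken base case, corrected and annotated with Korean comments for a teammate.

```python
# Hypothetical illustration of the demo scenario (not the code shown at I/O):
# a recursive function whose base case caused infinite recursion for n == 0.

def sum_to(n: int) -> int:
    """1부터 n까지의 합을 재귀적으로 계산합니다."""  # recursively sums 1..n
    # 버그 수정: 기저 조건이 n == 1이면 n == 0 입력 시 무한 재귀가 발생합니다.
    # (Bug fix: a base case of n == 1 recurses forever when n == 0.)
    if n <= 0:  # 기저 조건 (base case)
        return 0
    return n + sum_to(n - 1)  # 재귀 단계 (recursive step)

print(sum_to(5))  # → 15
```

The function name and comments here are invented for illustration; the point is the workflow the transcript describes, i.e. identifying the recursion, fixing the base case, and explaining it in the collaborator's language.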
00:02:51 While Palm 2 is highly capable, it really shines when fine-tuned on domain-specific knowledge.
00:02:59 We recently released SecPalm, a version of Palm 2 fine-tuned for security use cases.
00:03:05 It uses AI to better detect malicious scripts and can help security experts
00:03:11 understand and resolve threats. Another example is MedPalm 2.
00:03:16 In this case, it's fine-tuned on medical knowledge.
00:03:19 This fine-tuning achieved a 9x reduction in inaccurate reasoning compared to the base model,
00:03:26 approaching the performance of clinician experts who answered the same set of questions.
00:03:32 In fact, MedPalm 2 was the first language model to perform at expert level on medical
00:03:38 licensing exam-style questions and is currently the state of the art.
00:03:43 We are also working to add capabilities to MedPalm 2 so that it can synthesize information from
00:03:49 medical imaging like plain films and mammograms. You can imagine an AI collaborator that helps
00:03:56 radiologists interpret images and communicate the results.
00:04:01 These are some examples of Palm 2 being used in specialized domains.
00:04:05 We can't wait to see it used in more. That's why I'm pleased to announce that
00:04:10 it is now available in preview and I'll let Thomas share more.
00:04:22 Palm 2 is the latest step in our decade-long journey to bring AI
00:04:26 in responsible ways to billions of people. It builds on progress made by two world-class
00:04:32 teams, the Brain team and DeepMind. Looking back at the defining AI
00:04:37 breakthroughs over the last decade, these teams have contributed to a significant number of them.
00:04:42 AlphaGo, transformers, sequence-to-sequence models, and so on.
00:04:48 All this helps set the stage for the inflection point we are at today.
00:04:52 We recently brought these two teams together into a single unit: Google DeepMind.
00:04:58 Using the computational resources of Google,
00:05:01 they have focused on building more capable systems safely and responsibly.
00:05:07 This includes our next generation foundation model, Gemini, which is still in training.
00:05:13 Gemini was created from the ground up to be multimodal,
00:05:18 highly efficient at tool and API integrations, and built to enable future innovations like
00:05:25 memory and planning. While still early,
00:05:28 we are already seeing impressive multimodal capabilities not seen in prior models.
00:05:34 Once fine-tuned and rigorously tested for safety, Gemini will be available at various
00:05:40 sizes and capabilities, just like Palm 2. As we invest in more advanced models,
00:05:47 we are also deeply investing in AI responsibility. This includes having the tools to identify
00:05:54 synthetically generated content whenever you encounter it.
00:05:59 Two important approaches are watermarking and metadata.
00:06:04 Watermarking embeds information directly into content in ways that are maintained
00:06:10 even through modest image editing. Moving forward, we are building our
00:06:15 models to include watermarking and other techniques from the start.
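Google does not describe its watermarking technique here. As a rough illustration of the general idea of embedding information directly into content, the naive sketch below hides a message in the least-significant bit of each pixel value; real systems are far more sophisticated and, as the talk notes, designed to survive modest image editing, which this toy version would not.

```python
# Naive LSB watermark: hide message bits in the lowest bit of pixel values.
# Illustrative only — not Google's method, and not robust to editing.

def embed(pixels, message):
    # Expand the message into bits, least significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite the lowest bit
    return out

def extract(pixels, length):
    # Reassemble `length` bytes from the pixels' lowest bits.
    data = bytearray()
    for byte_idx in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pixels = [128] * 64          # a tiny 8x8 grayscale "image"
marked = embed(pixels, b"AI")
print(extract(marked, 2))    # → b'AI'
```

Changing only the lowest bit keeps the watermark visually imperceptible (each pixel shifts by at most 1 out of 255), which is the property production watermarks preserve while adding robustness.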
00:06:21 If you look at the synthetic image, it's impressive how real it looks, so you can imagine
00:06:26 how important this is going to be in the future. Metadata allows content creators to associate
00:06:33 additional context with original files, giving you more information whenever you encounter an image.
00:06:41 We'll ensure every one of our AI-generated images has that metadata.
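The talk does not specify Google's metadata format. As a minimal sketch of the concept, assuming nothing about their implementation, provenance metadata can be thought of as a structured record tied to the file's content; real-world approaches use standards such as IPTC fields or C2PA manifests rather than an ad-hoc JSON sidecar like this.

```python
import hashlib
import json

# Illustrative sketch only: associate an AI-generation provenance record
# with an image by hashing its bytes. Real systems use standards such as
# IPTC metadata or C2PA manifests, not this ad-hoc format.

def make_provenance(image_bytes: bytes) -> dict:
    return {
        "generator": "example-image-model",  # hypothetical tool name
        "ai_generated": True,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

record = make_provenance(b"\x89PNG fake image bytes")
print(json.dumps(record, indent=2))
```

Hashing the content lets a viewer check that the metadata still describes the exact file it encounters, which is the "more information whenever you encounter an image" property the transcript describes.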
00:06:47 James will talk about our responsible approach to AI later.
00:06:53 As models get better and more capable, one of the most exciting opportunities
00:06:59 is making them available for people to engage with directly.
00:07:04 That's the opportunity we have at Bard, our experiment for conversational AI.
00:07:10 We are rapidly evolving Bard. It now supports a wide range of
00:07:14 programming capabilities, and it's gotten much smarter at reasoning and math problems.
00:07:21 And as of today, it is now fully running on Palm 2.
00:07:26 To share more about what's coming, let me turn it over to Sissy.