
πŸš€ Build with AI on Google Cloud - Session #2 - GenAI Deep Dive πŸš€

This video marked the second session in a five-part study series focused on building with AI on Google Cloud, with an emphasis on generative AI. It was a collaborative event hosted by GDG Seattle, GDG Sur, GDG Vancouver, and GDG PAB.


🀝 Introductions and Collaborations

Organizers from GDG Seattle, GDG Sur, and GDG Vancouver introduced their respective groups. GDG Seattle, founded in 2010 and now nearly 7,000 members strong, highlighted its history of hosting events such as I/O Extended and DevFest.


πŸ“š Study Series Overview

The series utilized Google Cloud Skills Boost for self-paced learning, complemented by talks and Q&A sessions. This particular session covered the “Generate Smarter GenAI Output” learning path. Attendees learned that Google Cloud Skills Boost offered a range of resources including courses, videos, quizzes, and labs, with free access provided for those who RSVPed.


πŸ–ΌοΈ Talk 1: Imagine 3 Deep Dive

Margaret provided an in-depth discussion of Google’s Imagen 3, a text-to-image model. She detailed its capabilities, including image generation, editing (in-painting, out-painting), and customization. The model was noted for its photorealism and strong text rendering.

Margaret explained that Imagen 3 was accessible through platforms like Google Labs (ImageFX), the Google Gemini app, and Google Cloud Vertex AI. She demonstrated several uses:

  • Fabric design generation.
  • Image editing using masks.
  • Background changes through prompts.
  • Generating images using reference images.

She also briefly touched on Veo 2 and VideoFX.


🧠 Talk 2: AI Foundations - From Embeddings to RAG

Annie from Google delivered a presentation on the fundamental concepts of embeddings and Retrieval Augmented Generation (RAG).

She described embeddings as low-dimensional numeric representations of information, emphasizing their role in capturing meaning and placing semantically similar items close together in the embedding space.
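The intuition can be illustrated with toy vectors (the numbers below are invented for illustration, and real models produce hundreds of dimensions): semantically similar items get nearby vectors, so their cosine similarity is high.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" -- made-up values for illustration only.
cat    = np.array([0.90, 0.80, 0.10, 0.00])
kitten = np.array([0.85, 0.75, 0.20, 0.05])
car    = np.array([0.10, 0.00, 0.90, 0.80])

print(cosine_similarity(cat, kitten))  # high: semantically similar
print(cosine_similarity(cat, car))     # low: unrelated
```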

Vector Search was explained as a method to find the top β€œk” most similar results based on their proximity within the embedding space, with mention of multimodal embedding for cross-modal search.
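A minimal sketch of that top-k lookup, using brute-force cosine similarity over a tiny made-up index (production systems like Vertex AI Vector Search use approximate nearest-neighbor indexes instead of scanning every vector):

```python
import numpy as np

def top_k(query, index, k=2):
    """Brute-force vector search: rank index rows by cosine similarity to query."""
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = index_norm @ q                       # cosine similarity per row
    order = np.argsort(scores)[::-1][:k]          # indices of the k best scores
    return [(int(i), float(scores[i])) for i in order]

# A tiny "index" of three document embeddings (values invented for illustration).
docs = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.8, 0.2, 0.1],   # doc 1
    [0.0, 0.1, 0.9],   # doc 2
])
query = np.array([1.0, 0.0, 0.0])
print(top_k(query, docs, k=2))  # docs 0 and 1 are closest to the query
```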

Annie thoroughly explained RAG as a technique to mitigate LLM hallucination by incorporating relevant external data into the generation process, thereby enhancing accuracy with context-specific information.
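The RAG flow described above can be sketched end to end. Everything here is a stand-in: `embed()` fakes an embedding model, the two-document `corpus` fakes a vector database, and the final prompt would be sent to an LLM.

```python
import numpy as np

# Stand-in document store: text mapped to invented 2-d embedding vectors.
corpus = {
    "Store hours are 9am-5pm on weekdays.": np.array([0.9, 0.1]),
    "Returns are accepted within 30 days.": np.array([0.1, 0.9]),
}

def embed(text):
    # Toy "embedding model": crudely routes hours questions vs. everything else.
    return np.array([1.0, 0.0]) if "hours" in text else np.array([0.0, 1.0])

def retrieve(question, k=1):
    """Return the k corpus documents most similar to the question."""
    q = embed(question)
    ranked = sorted(corpus, key=lambda doc: -float(np.dot(corpus[doc], q)))
    return ranked[:k]

def build_prompt(question):
    # Grounding the model in retrieved context is what mitigates hallucination.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What are your store hours?"))
```

The augmented prompt contains only the retrieved, context-specific passage, so the model answers from that data rather than from its parametric memory.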


❓ Q&A Session

Pretty moderated the question and answer period. Topics discussed included:

  • Sharing Imagen 3 prompts and presentation slides.
  • Issues related to Cloud Skills Boost credits.
  • The accuracy of embedding vector search.
  • The process of setting up Vector databases for RAG.
  • An explanation of CLIP.

πŸ“£ Call to Action

Attendees were encouraged to join the Discord channel for ongoing support. Participants were reminded to continue their studies at their own pace, and an invitation was extended to RSVP for the upcoming session in the series.


Watch the full video here: https://www.youtube.com/watch?v=gOMoT2NmQOY

Published Feb 5, 2025