As a developer constantly exploring new tools, I recently put two AI-powered code generation platforms to the test: Google AI Studio’s new “Build” feature and “bolt.new.” My challenge for both? Create a spinning 3D dice application that lands on a random face on a green tabletop. And let me tell you, Google AI Studio came out on top, delivering a surprisingly polished and effective result.
My goal was clear: I wanted more than a flat animation. I envisioned a dice that genuinely tumbles in 3D, settles on a random face, and lands on a green tabletop, a truly immersive little experience.
When I tasked bolt.new with the challenge, its initial thought process was promising. It quickly laid out a plan for a React application with realistic physics and animations, and it even considered using react-three-fiber for the 3D rendering, which showed a good understanding of the complexities involved.
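Bolt.new's generated code isn't reproduced here, but for context, a react-three-fiber scene for this kind of brief typically looks something like the sketch below. The component structure, colors, and spin logic are my own illustration, not bolt.new's output.

```tsx
// A minimal react-three-fiber sketch (my own illustration, not bolt.new's code):
// a spinning cube above a green plane standing in for the felt table.
import { Canvas, useFrame } from '@react-three/fiber';
import { useRef } from 'react';
import type { Mesh } from 'three';

function Dice() {
  const ref = useRef<Mesh>(null);
  // Spin the cube a little every frame; a real roll would ease toward a target rotation.
  useFrame((_, delta) => {
    if (ref.current) {
      ref.current.rotation.x += delta * 4;
      ref.current.rotation.y += delta * 3;
    }
  });
  return (
    <mesh ref={ref} position={[0, 1, 0]}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="ivory" />
    </mesh>
  );
}

export default function Scene() {
  return (
    <Canvas camera={{ position: [0, 3, 5] }}>
      <ambientLight intensity={0.6} />
      <directionalLight position={[5, 10, 5]} />
      {/* Green "felt" tabletop */}
      <mesh rotation={[-Math.PI / 2, 0, 0]}>
        <planeGeometry args={[10, 10]} />
        <meshStandardMaterial color="seagreen" />
      </mesh>
      <Dice />
    </Canvas>
  );
}
```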
The initial build was impressive: a working React app with a beautiful green felt table and a dice that spun. There was one hiccup, though: the dice always landed on a ‘1’. I quickly prompted it to “have the dice land on a random face not just 1. Also lower the table in the view.”
Bolt.new’s follow-up fell short: it fixed neither the random-face behavior nor the table’s position in the view. It did, however, provide a Netlify deployment link right away, which was super convenient!
Here’s the result from bolt.new: https://effulgent-stroopwafel-1ced32.netlify.app/
While functional, the animation felt a bit less dynamic, and the overall “throw” and “spin” lacked the fluid realism I was hoping for. It was a good effort, but it didn’t quite capture the visual flair I imagined.
Next, it was Google AI Studio’s “Build” feature’s turn. From the get-go, its “Thought” process was incredibly detailed and thorough. It considered everything from component structure (Dice, Table, custom hooks) to animation logic using CSS keyframes and state management. What really stood out was its focus on “Refining Animation Mechanics” and “Designing Realistic Roll Dynamics.” It went through multiple iterations of planning and refining the animation, demonstrating a deeper understanding of visual fidelity.
Instead of jumping straight into a 3D library like react-three-fiber, Google AI Studio explored achieving the effect primarily with CSS transforms, which can often lead to lighter, more performant applications. Its detailed thought process on “Focusing Animation Implementation” and “Defining Detailed Animation” showed a true commitment to crafting a compelling visual experience.
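Google AI Studio’s actual code lives in its build notes (linked below), so the sketch here is only my own illustration of the CSS-transform idea: map each face to a target X/Y rotation, add extra full turns so the cube visibly tumbles, and let a CSS transition on `transform` ease it onto the chosen face. The angle mapping and timing values are assumptions, not the app’s real values.

```tsx
// My own sketch of a CSS-transform dice roll (not Google AI Studio's output):
// a random face maps to target angles, whole extra turns are added so the cube
// keeps spinning forward, and a transition on `transform` animates the roll.
import { useState } from 'react';

// Angles (degrees) that bring each face to the front; the exact mapping depends
// on how the six faces are laid out, so treat these values as illustrative.
const FACE_ANGLES = [
  { x: 0, y: 0 },    // face 1
  { x: -90, y: 0 },  // face 2
  { x: 0, y: 90 },   // face 3
  { x: 0, y: -90 },  // face 4
  { x: 90, y: 0 },   // face 5
  { x: 180, y: 0 },  // face 6
];

export default function CssDice() {
  const [angles, setAngles] = useState({ x: 0, y: 0 });

  const roll = () => {
    const face = Math.floor(Math.random() * 6); // uniform 0..5: any face, not just "1"
    // Add two whole turns on top of the previous rotation so each roll tumbles before settling.
    setAngles(prev => ({
      x: (Math.floor(prev.x / 360) + 2) * 360 + FACE_ANGLES[face].x,
      y: (Math.floor(prev.y / 360) + 2) * 360 + FACE_ANGLES[face].y,
    }));
  };

  return (
    <div style={{ perspective: 600 }}>
      <div
        onClick={roll}
        style={{
          width: 100,
          height: 100,
          transformStyle: 'preserve-3d',
          transition: 'transform 1.2s cubic-bezier(0.2, 0.7, 0.2, 1)',
          transform: `rotateX(${angles.x}deg) rotateY(${angles.y}deg)`,
        }}
      >
        {/* The six numbered faces would be absolutely positioned children,
            each rotated into place and pushed outward with translateZ(50px). */}
      </div>
    </div>
  );
}
```

Keeping the transform as a fixed rotateX/rotateY pair is what lets the browser interpolate the angles directly, so the extra full turns actually show up as spin instead of being collapsed away.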
The final build from Google AI Studio was a testament to its meticulous planning. The dice not only spun but felt like it was genuinely “thrown” before landing with a satisfying thud on the green table. The random face generation was seamless, and the overall animation was incredibly smooth and engaging.
You can explore the Google AI Studio app and its build notes here: https://aistudio.google.com/apps/drive/1D0G3u4pwglArnwUhArhXurAFOB87lfiU?showPreview=true
For this specific task, Google AI Studio’s “Build” feature excelled for a few key reasons: its planning was far more thorough, it treated the roll animation as a design problem rather than an afterthought, its CSS-transform approach kept the app light, and its build actually delivered the random-face behavior and framing I asked for.
While bolt.new is a capable tool, Google AI Studio’s “Build” feature demonstrated a more advanced capability in understanding complex animation requirements and translating them into high-quality code, along with offering a more integrated deployment experience. It truly felt like it was “building” with an artistic eye, not just compiling instructions.
I’m genuinely impressed with the new “Build” feature in Google AI Studio. For tasks requiring nuanced visual effects and detailed animation, it seems to have a significant edge. It’s an exciting development for developers looking to accelerate their creative coding processes.
Have you tried Google AI Studio’s new “Build” feature? What are your thoughts? I’d love to hear about your experiences in the comments below! 👇