I'm currently teaching at the Strasbourg Chamber of Commerce on a topic that isn't directly related to AI, at least not on the surface: deploying applications in DevOps mode on the Google Cloud Platform... I say "on the surface" because the Google Cloud Platform (GCP for those in the know) is packed with generative AI APIs, machine learning tools, and more.
Recently, a new product appeared on GCP: Duet AI. The students I'm currently training will later be tasked with sysadmin work, application deployment, IAM management, VPCs, LDAP, the whole program! Something became very clear to me as I watched them learn new concepts like Dockerizing application stacks, CI/CD pipelines, and cloud permission management: modern distributed systems are complex for newcomers, and even for seasoned DevOps practitioners. Could Duet AI help them?
Duet AI is Google's version of Copilot. It differs from its GitHub competitor in that, true to Google's long-standing strategy, it's natively integrated into the Google Cloud Platform: you can invoke Duet AI on any page of the GCP UI to ask a question or perform a task.
The tool can also generate code in an IDE, similar to Copilot, making it a multi-faceted tool that serves developers in their editor as well as the "button-clickers" of cloud platforms, as one of my colleagues (whose identity shall remain secret) likes to call them.
Today, I'm more comfortable with certain concepts, which means I can manage across several platforms (Azure, AWS, etc.) without being an expert. I won't pretend this didn't come at the cost of countless hours studying seemingly endless nested concepts. Back then, ChatGPT either didn't exist or hadn't reached its current level of performance; it would have been a tremendous help for my learning!
We recently had a discussion at Reboot-Conseil about providing AI-powered code assistance tools like Copilot. Some of us think that developers need to go through those long nights reading obscure documentation and making mistakes to build a certain maturity and the necessary reflexes; the argument holds, because the process is indeed very formative. Others (namely me) think that new professionals should use these tools as early and as often as possible to maximize their productivity, be profitable for their company and, above all, simply have fun! So I'm thinking about how to familiarize students with prompt engineering for problem-solving, documentation, and using LLMs as a thinking aid (a rubber duck on steroids, if you will). I'm going to test Duet AI to form an opinion before telling them about it.
This article will cover my first hands-on experience.
Staying in the spirit of my training, I'll go straight to the use-cases tab in the documentation and try to deploy something!
The Duet AI tutorials available cover:
Data analysis
Developing a Python app
Creating a private build environment
Developing a Node.js app
Architecting a multi-layer application
Creating and modifying a VPC network
Creating and modifying a Kubernetes cluster
Not bad at all, but as part of the AI squad, what I want is to:
Deploy a VM with a GPU
Clone a git repo inside it
Retrieve data from somewhere
Train an AI model on it
Produce a compiled model and store it elsewhere
Kill the VM so it doesn't cost me money
And I want to do all this in scripted mode, please
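For reference, the wishlist above could in principle be scripted with nothing but the gcloud CLI. Here's a hedged sketch of what I have in mind, not something Duet AI produced: the VM name, zone, GPU type, repo URL, and bucket names are placeholders, and GPU driver installation is omitted for brevity:

```shell
#!/usr/bin/env bash
set -euo pipefail

# --- Placeholders: adjust to your project ---
VM=gpu-trainer
ZONE=us-central1-a
BUCKET=gs://my-models-bucket

# 1. Deploy a VM with a GPU (GPU VMs can't live-migrate,
#    so --maintenance-policy=TERMINATE is required)
gcloud compute instances create "$VM" \
  --zone="$ZONE" \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud

# 2-5. Clone a repo, fetch data, train, push the model -- all over SSH
gcloud compute ssh "$VM" --zone="$ZONE" --command='
  git clone https://github.com/example/my-training-repo.git &&
  cd my-training-repo &&
  gsutil cp gs://my-data-bucket/train.csv . &&
  python3 train.py &&
  gsutil cp model.joblib '"$BUCKET"'/
'

# 6. Kill the VM so it stops costing money
gcloud compute instances delete "$VM" --zone="$ZONE" --quiet
```

That's roughly the bar I'd like Duet AI to clear: a response I can paste into Cloud Shell, not a tour of console buttons.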
Let's ask! I don't know if we'll get through all of this in this article (or if we'll split it across several), but I really want to try the product with something I'd actually need to do and see where it takes me with this kind of task list.
To use Duet AI, you need to pay for a license, which can be monthly or annual. I'll try the monthly license:
... $22.80 for a one-month license is a bit steep, but maybe it's worth it. Let's continue. For what it's worth, the GitHub Copilot Business offering has comparable pricing.
In the GCP console (I'm staying there for now, not installing the VSCode plugin), there's a button in the top right to invoke the LLM, to the left of the Cloud Shell shortcut.
... a chat window pops up. Let's start with my project:
Here's my prompt:
I'd like to:
I want to do all this in a CI/CD pipeline
... and, at first glance, it looks pretty cool! I'm pasting Duet AI's very detailed response here; small caveat: it doesn't mention CI/CD pipelines, so that will be the subject of a follow-up prompt:
To deploy a VM with access to a GPU, you can use the following steps:
1. Go to the Google Cloud console.
2. Click the Compute Engine menu.
3. Click the Create Instance button.
4. In the Name field, enter a name for your instance.
5. In the Region field, select a region that supports GPUs.
6. In the Machine type field, select a machine type that includes a GPU.
7. In the Boot disk section, click the Change button.
8. In the Public images tab, search for "Ubuntu".
9. Select the latest Ubuntu image.
10. Click the Select button.
11. In the Advanced options section, click the Networking tab.
12. In the Network tags field, enter allow-ssh.
13. Click the Create button.

Once your instance has been created, you can connect to it using SSH. To do this, you can use the following steps:
1. Go to the Google Cloud console.
2. Click the Compute Engine menu.
3. Click the VM instances tab.
4. Find your instance in the list and click the SSH button.

Once you have connected to your instance, you can clone a repository from GitHub. To do this, you can use the following steps:
1. Open a terminal window.
2. Type the following command: git clone https://github.com/GoogleCloudPlatform/cloud-samples-data.git
3. Press Enter.

The repository has been cloned to your instance. You can now get some training data from the repository. To do this, you can use the following steps:
1. Open a terminal window.
2. Navigate to the cloud-samples-data/ai-platform/census directory.
3. Type the following command: gsutil cp gs://cloud-samples-data/ai-platform/census/adult.data .
4. Press Enter.

The training data has been downloaded to your instance. You can now train a regression or classification model based on this data. To do this, you can use the following steps:
1. Open a terminal window.
2. Navigate to the cloud-samples-data/ai-platform/census directory.
3. Type the following command: python3 train.py
4. Press Enter.

The model will be trained and saved to a file called model.joblib. You can now store the compiled model in a bucket. To do this, you can use the following steps:
1. Go to the Google Cloud console.
2. Click the Cloud Storage menu.
3. Click the Create bucket button.
4. In the Name field, enter a name for your bucket.
5. In the Location field, select a location for your bucket.
6. Click the Create button.

Once your bucket has been created, you can upload the compiled model to it. To do this, you can use the following steps:
1. Go to the Google Cloud console.
2. Click the Cloud Storage menu.
3. Click the name of your bucket.
4. Click the Upload files button.
5. Select the model.joblib file.
6. Click the Open button.

The compiled model has been uploaded to your bucket. You can now kill the VM. To do this, you can use the following steps:
1. Go to the Google Cloud console.
2. Click the Compute Engine menu.
3. Click the VM instances tab.
4. Find your instance in the list and click the Stop button.

The VM has been killed. I hope this helps!
Duet AI even gives me a sample project to test the full process! I carefully read through the steps, and everything looks relevant at first glance... but is the linked repo in the response actually valid?
There is indeed a GoogleCloudPlatform account on GitHub with repos containing training data, but here the LLM either hallucinated a repo that doesn't exist or linked to one that no longer exists.
What about the gsutil command gsutil cp gs://cloud-samples-data/ai-platform/census/adult.data . to download data from a bucket?
... same thing, the link is invalid.
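As an aside, this kind of link takes seconds to fact-check from Cloud Shell, since gsutil ls returns a non-zero exit code when an object doesn't exist. A quick check (the path is the one from Duet AI's answer; in my test it came back empty):

```shell
# Does the object Duet AI pointed to actually exist?
gsutil ls gs://cloud-samples-data/ai-platform/census/adult.data \
  && echo "object exists" \
  || echo "object not found"
```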
Ok, let's continue our fact-checking: can I create a VM and clone a repo without any intermediate steps, as the response seems to suggest?
... yes, git is installed by default and I can clone repos.
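This claim is also easy to verify non-interactively, since gcloud compute ssh accepts a --command flag. A quick check (the instance name and zone are placeholders):

```shell
# Check that git ships with the default image, without opening a session
gcloud compute ssh my-instance --zone=us-central1-a --command='git --version'
```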
Overall, the other steps provided in Duet AI's response are relevant, but this isn't "actionable" information since it's ultimately a detailed response repeating the sequence of steps I already had in mind from the start...
So I'm left wanting more. When I give the same prompt to GPT-4, I get detailed snippets and can boost my productivity by not having to spend too much time in the docs:
... and when I ask the same thing to Gemini, the LLM points me toward a better solution: Vertex AI ML pipelines.
Conclusion on the GCP Console side: my colleagues were right! A beginner might actually lose more time using Duet AI than by reading the documentation themselves and trying to understand what they're doing.
At this point, the Duet AI product isn't satisfying after this quick test drive... I was expecting high-quality code generation, maybe even semi-assisted resource creation to get these tasks done and move faster. In the end, I could spend less time reading the (admittedly quite clear) GCP docs and/or prompting GPT-4/Gemini with more targeted prompts for a specific sub-step X or Y.
Now let's test the IDE integration (VSCode). Maybe we'll get a pleasant surprise?
I'll ask it to write the GitHub Actions pipeline code matching my previous prompt:
Write the GitHub actions pipeline to:
Here's what I got:
```yaml
- name: Deploy VM
  uses: GoogleCloudPlatform/cloud-sdk-deploy-action@v1
  with:
    project_id: <walkthrough-project-id/>
    region: us-central1
    machine_type: n1-standard-4
    image_family: tensorflow-2.12.0-gpu
    image_project: tensorflow-cloud
    service_account: <walkthrough-project-number/>-compute@developer.gserviceaccount.com

- name: Clone repo
  uses: actions/checkout@v3

- name: Get training data
  run: |
    git clone https://github.com/GoogleCloudPlatform/cloud-samples-data.git
    cd cloud-samples-data/ai-platform/mnist
    gsutil cp gs://cloud-samples-data/ai-platform/mnist/train.csv .

- name: Train model
  run: |
    python3 mnist_train.py

- name: Store compiled model
  uses: GoogleCloudPlatform/cloud-sdk-deploy-action@v1
  with:
    project_id: <walkthrough-project-id/>
    region: us-central1
    location: gs://<walkthrough-project-id/>-bucket/
    source: .

- name: Kill VM
  uses: GoogleCloudPlatform/cloud-sdk-deploy-action@v1
  with:
    project_id: <walkthrough-project-id/>
    region: us-central1
    machine_type: n1-standard-4
    image_family: tensorflow-2.12.0-gpu
    image_project: tensorflow-cloud
    service_account: <walkthrough-project-number/>-compute@developer.gserviceaccount.com
    delete: true
```
... the generation is super fast, but this is even worse: the mentioned GitHub Actions don't exist, the syntax is incorrect, and it's really just a vague sketch of what a pipeline might look like on this platform!
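For contrast, a working skeleton would more likely lean on actions that actually exist, google-github-actions/auth and google-github-actions/setup-gcloud, and drive everything through plain gcloud run steps. Here's a hedged sketch of mine, not Duet AI's output; the secret name, zone, VM name, repo, and bucket are all placeholders I'm assuming:

```yaml
name: train-on-gpu-vm
on: workflow_dispatch

jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate with a service-account key stored as a repo secret
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}

      - uses: google-github-actions/setup-gcloud@v2

      - name: Deploy GPU VM
        run: |
          gcloud compute instances create gpu-trainer \
            --zone=us-central1-a \
            --machine-type=n1-standard-4 \
            --accelerator=type=nvidia-tesla-t4,count=1 \
            --maintenance-policy=TERMINATE \
            --image-family=ubuntu-2204-lts \
            --image-project=ubuntu-os-cloud

      - name: Train and store model
        run: |
          gcloud compute ssh gpu-trainer --zone=us-central1-a --command='
            git clone https://github.com/example/my-training-repo.git &&
            cd my-training-repo &&
            python3 train.py &&
            gsutil cp model.joblib gs://my-models-bucket/'

      - name: Kill VM
        if: always()
        run: gcloud compute instances delete gpu-trainer --zone=us-central1-a --quiet
```

Note the if: always() on the last step: it deletes the VM even when training fails, which is exactly the "don't cost me money" requirement from my task list.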
I've come to the conclusion that this product isn't mature yet, and it will certainly be worth revisiting later. I'm sticking with my GPT-4/Gemini duo, and I'll add this: relying on a single LLM, however deeply integrated it may be (like Copilot in the Microsoft suite), is not the future in my opinion. We'll always be limited by the inherent weaknesses of LLMs, even if they diminish over time. In my view, the most promising solutions are those that make LLMs collaborate with each other (multi-agent frameworks like LangChain), give them access to external tools, or let an LLM introspect and self-correct (like self-corrective RAG)... These agentic applications will, in my opinion, yield far more impressive results than integrating a single LLM into a software suite or cloud platform.
Thanks for reading, and see you soon!
CTO of the scale-up LAMALO, Yacine is a fullstack developer who can't sit still: JavaScript, Node.js, Python, LLMs, voice UX... Always on the lookout, he turns the latest innovations into concrete solutions!