A Sweet Strategy: Selling Cakes in Wealthy Residential Areas!

Have you ever thought about starting a cake shop? As a cake lover myself, I often find myself wondering, "What kind of cake would be perfect?" However, developing a concrete business strategy is a real challenge. That's why this time, I'd like to conduct a case study with the support of an "AI marketing agency." Let's get started.


1. Selling Cakes in an Upscale Kansai Neighborhood

The business scenario I've prepared for this case is a simple one:

Goal: To sell premium fruit cakes in the Kansai region.

  • Cake Features: Premium shortcakes featuring strawberries, peaches, and muscat grapes.

  • Target Audience: Women in their 20s to 40s living in upscale residential areas.

  • Stores: 3 cafes near Yamate Dentetsu Ashiya Station, 1 cafe near Kaigan Dentetsu Ashiya Station.

  • Direct Sales Outlet: 1 store inside the Yamate Dentetsu Ashiya Station premises.

  • Branding: The brand's primary color will be blue, with the website and logo also unified in blue.

  • Current Plan: In the process of planning a sales promotion for the autumn season.

From here, what kind of concrete business strategy can we derive? First, I'll input the business scenario into the AI marketing agency.

The first thing it does is automatically generate 10 cool domain names.

It's hard to choose, but for now, I'll proceed with branding using "PremiumAshiyaCake.com".

 

2. A Practical Business Strategy

Now, let's ask the AI marketing agency to formulate a business strategy for selling our premium fruit cakes in Kansai. When prompted to input the necessary information, I re-entered the business scenario, and the following business strategy was generated in about two minutes. Amazing!

It's a long document, over five pages, so I can't share it all, but here is the "Core of the Marketing Strategy."

  • Overall Approach: Direct Response that Inspires Aspiration

    • We will build an aspirational, luxury brand image through beautiful content, and then convert that desire into immediate store visits using precisely targeted calls-to-action (CTAs).

  • Core Message and Positioning:

    • Positioning Statement: For the discerning women of Kansai, Premium Ashiya Cake is the patisserie that transforms a moment into a cherished memory with its exquisitely crafted seasonal shortcakes.

    • Tagline / Core Message: "Premium Ashiya Cake: An exquisite moment, crafted for you."

  • Key Pillars of the Strategy:

    • Visual Elegance and a "Blue" Signature: All visuals must be of professional, magazine quality. The brand color "blue" will be used as a sophisticated accent in styling (such as on blue ribbons, parts of the tableware, or as background elements) to create a recognizable and unique visual signature.

    • Hyper-local Exclusivity: Marketing efforts will be geographically and demographically laser-focused on the target audience residing in Ashiya and its surrounding affluent areas. This creates an "in-the-know" allure for locals.

    • Seasonal Storytelling: Treat each season's campaign as a major event. We will build a narrative around the star ingredients, such as Shine Muscat grapes from a specific partner farm, to build anticipation and justify the premium price point.

This is wonderfully practical content. The keywords I provided—"blue," "Ashiya," and "muscat"—have been skillfully integrated into the strategy.

 

3. The Logo is Excellent, Too—This is Usable!

Because I specified in the initial business scenario that I wanted to "unify the color scheme based on blue," it created this cool logo for me. It really looks like something I could use right away. Google's image-generation AI, Imagen 3.0, is used here. This model is consistently highly rated, so it's no surprise that the logo it generated is also of outstanding quality.

 

So, what did you think of the AI marketing agency? The business strategy is professional, and it's impressive how it automatically created the domain names and logo with such excellent results. Although I couldn't cover it this time, it also includes a website-creation feature. It's surprising that a tool this capable is available for free. A development kit called the Google ADK is provided as open source, and the AI marketing agency from this article can be downloaded and used for free as a sample (1). For those who can use Python, I think you'll get the hang of it with a little practice. The only operational cost is the usage fee for Google Gemini 2.5 Pro, so the cost-effectiveness is outstanding. I encourage you all to give it a try.

Please note that this story is a work of fiction and does not represent anything that actually exists. That's all for today, stay tuned!

 

You can enjoy our video news ToshiStats-AI from this link, too!

1) Marketing Agency, Google, May 2025



Copyright © 2025 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Unlocking Sales Forecasts: Can GPT-5 Reveal the Most Important Data?

Have you ever worked in marketing, wanted to predict sales, and found yourself gathering a ton of data? For example, let's say you have sticker sales data (1) like the set below. The num_sold column represents the number of units sold. This is a large dataset with over 200,000 entries. So, among these data columns (which we call "features"), which one is the most important for predicting sales? They all seem important, and it's impossible to check all 200,000 records one by one. So, let's try asking the generative AI, GPT-5.

                         Sticker sales data

 

1. Asking GPT-5 with a Prompt

To identify the important features for a prediction, you first have to create a predictive model. This is a task that data scientists perform all the time. However, they usually create these models by coding in Python, which can be a high barrier for the average business person. So, isn't there an easier way? Yes, and this is where prompts come in handy. If you can give instructions to GPT-5 with a prompt, no coding is necessary. Here is the prompt I created for this task.

     data & prompt

Key points of the prompt:

  • Use HistGradientBoostingRegressor from sklearn.

  • Evaluate the error using mean_absolute_percentage_error.

  • Split the data into training and test sets at an 80:20 ratio.

  • Display the top 10 feature importances with their original variable names.

  • Print the results as numerical output.

By getting the top 10 feature importances, we can understand which data column is the most significant. I won't explain the predictive model itself this time, so for those who want to dive deeper, please refer to a machine learning textbook.

 

2. The Code Actually Being Executed

Based on the prompt above, GPT-5 generated the following Python code on its own. It might look complicated to non-specialists, but rest assured, we don't have to touch Python at all. However, we can review this code to see how the calculation is being done, so it's by no means a black box. I believe this transparency is very important when using GPT-5 in a business context.

                 GPT-5's code for building the prediction model

 

3. "Product" Was the Most Important!

Ultimately, we got the following result.

Feature Importance Ranking

A higher "importance" value in the table above means the feature is more significant. This analysis revealed that "product" was overwhelmingly important. It seems that thinking about "what is selling" is essential. This is followed by "store" and "country". This suggests that considering "in what kind of store" and "in which country" is also crucial.


 

So, what did you think? This time, we instructed GPT-5 with a prompt to calculate which features are most important for predicting sales. You may run into errors along the way that GPT-5 has to correct itself, so I felt that some basic knowledge of machine learning is beneficial. However, we got the result without writing any Python ourselves, which means marketing professionals can start trying this out today. I hope you can use the method introduced today in your own marketing work. That's all for now. Stay tuned!

 




1) Forecasting Sticker Sales, Kaggle, January 1, 2025



Copyright © 2025 Toshifumi Kuga. All rights reserved


How to Turn GPT-5 into a Pro Marketing Analyst with AI Agents!

A while back, I introduced a guide to prompting GPT-5, but it can be quite a challenge to write a perfect prompt from scratch. Not to worry! You can actually have GPT-5 write prompts for GPT-5. Pretty cool, right? Let's take a look at how.

 

1. Using GPT-5 to Do a Marketer's Job

I have some global sales data for stickers (1). Based on this data, I want to develop a sales strategy.

                 Global Sticker Sales Records

In a typical company, a data scientist would analyze the data, and a marketing manager would then create an action plan based on the results. We're going to see if we can get GPT-5 to handle this entire process. Of course, this requires a good prompt, but what kind of prompt is best? This is where it gets tricky. The principle I always adhere to is this: "Data analysis is a means, not an end." There are many data analysis methods, so the same data can be analyzed in various ways. However, what we really want is a sales strategy that boosts revenue. With this in mind, let's reconsider what makes a good prompt.

It's a bit of a puzzle, but I've managed to draft a preliminary version.

 

2. Using Metaprompting to Improve the Prompt with GPT-5

Now, let's have GPT-5 improve the prompt I quickly drafted. The image below shows the process. The first red box is my draft prompt.

                    Metaprompt

The second red box explicitly states the principle: "Perform data analysis with the goal of creating a Marketing strategy." When you provide the data and run this prompt, GPT-5 creates the improvement suggestions you see below, which are very detailed. I actually ran this process twice to get a better result.
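A minimal sketch of this two-round metaprompting loop might look like the following. The wording of the metaprompt and the model name "gpt-5" are my own assumptions for illustration, not the exact prompt from the article.

```python
# Hypothetical metaprompting sketch: wrap a draft prompt in improvement
# instructions and feed it back through the model a few times.
def build_metaprompt(draft_prompt: str) -> str:
    """Wrap a draft prompt in instructions asking the model to improve it."""
    return (
        "You are a prompt engineer. Improve the prompt below.\n"
        "Principle: perform data analysis with the goal of creating a "
        "marketing strategy, not analysis for its own sake.\n"
        "Return only the improved prompt.\n\n"
        f"--- DRAFT PROMPT ---\n{draft_prompt}"
    )

def improve_prompt(draft_prompt: str, rounds: int = 2) -> str:
    """Run the metaprompt through the model (two rounds, as in the article)."""
    from openai import OpenAI  # requires the openai package and an API key
    client = OpenAI()
    prompt = draft_prompt
    for _ in range(rounds):
        resp = client.chat.completions.create(
            model="gpt-5",  # assumed model name
            messages=[{"role": "user", "content": build_metaprompt(prompt)}],
        )
        prompt = resp.choices[0].message.content
    return prompt

print(build_metaprompt("Analyze the sticker sales data.")[:60])
```

The key design point is that each round sees the previous round's output, so the prompt is refined iteratively rather than rewritten from scratch.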

                   Final Prompt

 

3. The Result: GPT-5 Generates a Marketing Strategy!

Running the final prompt took about a minute and produced the following output. The detailed analysis and resulting insights are directly connected to marketing actions, staying true to our initial principle. It's fantastic.

The output is concise and perfect for busy executives. Creating this content on my own would likely take an entire day, but with GPT-5 the whole process (including the time it took to draft the initial prompt myself) takes only about 30 minutes. This really shows how powerful GPT-5 is.

 

What do you think? This time, we explored a method for getting GPT-5 to improve its own prompts. This technique is called Metaprompting, and it's described in the OpenAI GPT-5 Prompting Guide (2).

I encourage you to try Metaprompting starting today and take your AI agent to the next level. That's all for now! Stay tuned!

 




 

Copyright © 2025 Toshifumi Kuga. All rights reserved

1) Forecasting Sticker Sales, Kaggle, January 1, 2025

2) GPT-5 Prompting Guide, OpenAI, August 7, 2025



Let's Explore the Best Practices for Crafting GPT-5 Prompts!

We are already hearing from many in the field that with the arrival of GPT-5, "the writing style is different from GPT-4o and earlier" and "its performance as an agent is on another level." Here, we will build upon the key points from OpenAI's "GPT-5 Prompt Guide (1)" and organize, from a practical perspective, "how to write prompts to stably reproduce desired behaviors." The following three keywords are key:

  1. GPT-5 acts very proactively as an AI agent.

  2. Self-reflection and guiding principles.

  3. Instruction following with "surgical precision."

Let's delve into each of these.

 




 

1. GPT-5 acts very proactively as an AI agent.

GPT-5's enhanced capabilities in tool-calling, understanding long contexts, and planning allow it to proceed autonomously even with ambiguous tasks. Whether you "harness" or "suppress" this capability depends on how you design the agent's "eagerness."


1-1. Controlling Eagerness with Prompts

To suppress eagerness, intentionally limit the depth of exploration and explicitly set caps on parallel searches or additional tool calls. This is effective in situations where processing time and cost are priorities, or when requirements are clear and exploration needs to be minimized.

To enhance eagerness, explicitly state rules for persistence, such as "Do not end the turn until the problem is fully resolved" and "Even with uncertainty, proceed with the best possible plan." This is suitable for long-duration tasks where you want the agent to see them through to completion with minimal check-ins with the user.

Practical Snippet (To suppress eagerness):

<context_gathering>
Goal: Reach a conclusion quickly with minimal information gathering.
Method: A single-batch search, starting broad and then narrowing down. Avoid duplicate searches.
Budget: A maximum of 2 tool calls.
Escape: If a conclusion is reasonably certain, accept minor incompleteness to provide an early answer.
</context_gathering>

Practical Snippet (To encourage eagerness):

<persistence>
Do not end the turn until the problem is completely resolved.
Reason through uncertainty and continue with the best possible plan.
Minimize clarifying questions. Adopt reasonable assumptions and state them later.
</persistence>

1-2. Visualize with a "Tool Preamble"

When the agent outputs a long rollout during execution, having it first provide a brief summary—explaining the objective, outlining the plan, noting progress, and confirming completion—makes it easier for the user to follow along and creates a better user experience.

Recommended Snippet:

<tool_preambles>
First, restate the user's goal in a single sentence. Follow with a bulleted list of the planned steps.
During execution, add concise progress logs sequentially.
Finally, provide a summary that clearly distinguishes between the "Plan" and the "Actual Results."
</tool_preambles>
 
 

2. Self-reflection and Guiding Principles

GPT-5 excels at "internally refining" the quality of its output through self-reflection. However, if the criteria for judging quality are not established beforehand, this reflection can become unproductive. This is where guiding principles and a private rubric are effective.


2-1. Provide a "Self-Grading Scorecard" with a Private Rubric

For zero-to-one generation tasks (e.g., creating a new web app, drafting specifications), have the model internally create a scorecard with 5-7 evaluation criteria. Then, have it repeatedly rewrite and re-evaluate its output based on these criteria.

Rubric Generation Snippet:

<self_reflection>
Define the conditions that a world-class deliverable should meet across 5-7 categories (e.g., UI quality, readability, robustness, extensibility, accessibility, accountability). Score your own proposal against these criteria, identify shortcomings, and redesign. The rubric itself should not be shown to the user.
</self_reflection>

2-2. Reduce Inconsistency with Guiding Principles

For ongoing development or modifying existing code, first provide the project's conventions by clearly stating its design principles, directory structure, and UI standards. This ensures that the model's suggested improvements and changes integrate naturally with the existing culture.

Guiding Principles Snippet (Example):

<guiding_principles>
Clarity and Reusability: Keep components small and reusable. Group them and avoid duplication.
Consistency: Unify tokens, typography, and spacing.
Simplicity: Avoid unnecessary complexity in styling and logic.
</guiding_principles>

2-3. Separately Control Verbosity and Reasoning Effort

GPT-5 can control its verbosity (the length of the final answer) and its reasoning_effort (the depth of thought) independently. This allows for context-specific overrides, such as "be concise in prose, but provide detailed explanations in code." The guide introduces a practical example of prompt tuning by Cursor, which is worth checking out. A useful tip for fast mode (minimal reasoning) is to require a brief summary of its thinking or plan at the beginning to assist its process.
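As a hypothetical illustration, separating the two controls might look like this. The parameter names follow OpenAI's GPT-5 announcement, but treat the exact request shape as an assumption and check the current API reference before relying on it.

```python
# Hedged sketch: verbosity and reasoning effort are set as separate
# request parameters, so they can be tuned independently per call.
def build_request(prompt: str, verbosity: str = "low",
                  effort: str = "minimal") -> dict:
    return {
        "model": "gpt-5",                  # assumed model name
        "input": prompt,
        "text": {"verbosity": verbosity},  # length of the final answer
        "reasoning": {"effort": effort},   # depth of thought
    }

def run(prompt: str) -> str:
    from openai import OpenAI  # network call, not executed in this sketch
    client = OpenAI()
    return client.responses.create(**build_request(prompt)).output_text

# "Be concise in prose, but think minimally and answer fast"
req = build_request("Summarize the plan first, then answer.",
                    verbosity="high", effort="minimal")
print(req["reasoning"])
```

This is the context-specific override pattern the guide describes: a terse final answer does not require shallow reasoning, and vice versa.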

 
 


3. GPT-5's Instruction Following has "Surgical Precision"

GPT-5 is extremely sensitive to the accuracy and consistency of instructions. Contradictory requests or ambiguous prompts waste reasoning resources and degrade output quality. Therefore, it is crucial to "structure" your instruction hierarchy to prevent contradictions before they occur.



3-1. Design to Avoid Contradictions

Take the example of a healthcare administrator scheduling a patient appointment based on symptoms. "Exceptions," such as altering preceding steps only in emergencies, must be clearly stated so they do not conflict with standard procedures.

  • Bad Example: The instructions "Do not schedule without consent" and "First, automatically secure the fastest same-day slot" coexist.

  • Correct Example: When "Always check the profile" and "In an emergency, immediately direct to 911" coexist, the exception rule is declared first.
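Following the snippet style used above, one way to declare the exception rule first might look like this (an illustrative example, not taken from the guide):

<instruction_hierarchy>
Exception (highest priority): If the patient reports emergency symptoms, direct them to call 911 immediately and skip all steps below.
Standard procedure: Always check the patient's profile before proposing appointment slots.
Consent: Never finalize a booking without explicit patient consent.
</instruction_hierarchy>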

OpenAI offers the following warning:

We understand that the process of building prompts is an iterative one, and that many prompts are living documents, constantly being updated by different stakeholders. But that’s why it is even more important to thoroughly review for instructions that are phrased improperly. We have already seen multiple early users discover ambiguities and contradictions within their core prompt libraries when they did such a review. Removing them dramatically streamlined and improved GPT-5's performance. We encourage you to test your prompts with our Prompt Optimizer tool to identify these kinds of issues.

 
 

How was that? In this article, we explored key points for prompt design from OpenAI's GPT-5 Prompt Guide (1). GPT-5 is a "partner in practice," combining powerful autonomy with precise instruction following. Try incorporating the points discussed today into your prompts and take your AI agents to the next level. That's all for today. Stay tuned!

 
 

Copyright © 2025 Toshifumi Kuga. All rights reserved

1) GPT-5 Prompting Guide, OpenAI, August 7, 2025


 


Unexpected Weakness Revealed! What Happened When I Tried Image Analysis with the New "GPT-5" Generative AI from OpenAI

OpenAI's New Generative AI "GPT-5" Has Arrived. I Tried Image Analysis and Discovered a Surprising Weakness! (1)

The long-awaited new generative AI, "GPT-5," has been released by OpenAI. I believe its multimodal capabilities have also improved, so I decided to upload a few images and run some simple tests. Let's get started.

 

1. The car is stopped, but why?

The image shows a Mazda passenger car on display inside a train station (Hiroshima Station). This is just an exhibit car, but I thought GPT-5 could answer if it understood the background. It seems to have correctly recognized that this is an indoor space and not a public road. The answer was correct.

 

2. How many minutes until departure?

This is a common scenario when traveling. I asked how many minutes until the train I was planning to board, "Nozomi 104," would depart. The key was whether GPT-5 could understand that the large displayed time was the current time. This time, it also worked out well.

 

3. Which way should I go for car number 4?

This is another common travel situation. At a Shinkansen platform at Tokyo Station, I wanted to go to car number 4, and I asked which way to go, left or right, based on the sign above. The result was correct.

 

4. I want to go to Shin-Osaka Station. How many trains can I take?

The last one is a difficult question. This is a Shinkansen information board at Tokyo Station, and it shows 16 trains in total. When I asked, "I want to go to Shin-Osaka Station," it replied with 8 trains. This is the number of trains with Shin-Osaka as the destination, which is a bit of a simplistic answer. For example, a Shinkansen bound for Hakata also stops at Shin-Osaka. It seems that GPT-5, in its default mode, didn't think that far ahead.

To see if it could redeem itself, I switched to "Thinking" mode and tried one more time. As expected, it considered the intermediate stops and answered 14 trains, excluding only the two bound for Nagoya. That's the correct answer.

 

So, what do you think? Overall, the performance is excellent. GPT-5 is said to use a "real-time router" that defaults to "Auto" and automatically switches to "Thinking" for difficult tasks. However, since it's just been released, this switching might not always work perfectly. As the examples above show, although "Thinking" mode was appropriate in some cases, it didn't activate automatically. Therefore, if you feel something is "a little off," I recommend switching to "Thinking" mode. I hope it will become more stable over time. I look forward to covering GPT-5 again in the future. Stay tuned!





Copyright © 2025 Toshifumi Kuga. All rights reserved

1) GPT-5 System Card, OpenAI, August 7, 2025






Prompt Optimization: The Secret to Building Better AI Agents?

The instructions that humans write for generative AI are called "prompts." There are many books and blogs out there that offer guidance on how to write them. Many of you have probably tried, and it's surprisingly difficult, isn't it? While no programming language is required, you have to go through a lot of trial and error to get the output you want from a generative AI. This process can be quite time-consuming, isn't well-systematized, and you often have to start from scratch for each new task.

So, this time, we'd like to experiment with "what happens if we have a generative AI write the prompts for us?" Let's get started.

 


1. Prompt Optimization

In 2023, Google DeepMind released a research paper titled "LARGE LANGUAGE MODELS AS OPTIMIZERS" (1).

This paper explored the use of LLMs to optimize prompts, and it seems to have worked well for several tasks. While a human writes the initial prompt, subsequent improvements are delegated to the LLM (the optimizer). The LLM is also responsible for judging whether the result was successful or not (the evaluator), meaning this approach can be applied even without labeled data that provides the correct answers. This is very helpful, as tasks involving generative AI often lack labeled data. Below is a flowchart of this process, which is effectively the automation of prompt engineering. This is professionally referred to as "prompt optimization." The specific method we adopted for this experiment is called OPRO (Optimization by PROmpting).
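The optimizer/evaluator loop described above can be sketched as follows. The stub functions are hypothetical placeholders for the actual LLM calls (gemini-2.5-flash in this experiment): "propose" stands in for the optimizer LLM that sees past (prompt, score) pairs, and "evaluate" stands in for the scorer.

```python
# Simplified OPRO-style loop: an LLM proposes new prompts conditioned on
# the history of (prompt, score) pairs, and the best-scoring prompt wins.
from typing import Callable

def opro_loop(
    propose: Callable[[list[tuple[str, float]]], str],  # LLM as optimizer
    evaluate: Callable[[str], float],                   # scorer (LLM or labels)
    seed_prompt: str,
    n_rounds: int = 20,
) -> tuple[str, float]:
    history = [(seed_prompt, evaluate(seed_prompt))]
    for _ in range(n_rounds):
        candidate = propose(history)  # new prompt, conditioned on history
        history.append((candidate, evaluate(candidate)))
    return max(history, key=lambda pair: pair[1])  # best prompt so far

# Stub demo: "propose" appends a refinement, "evaluate" rewards length.
best, score = opro_loop(
    propose=lambda hist: hist[-1][0] + " Be concise and cite the category.",
    evaluate=lambda p: min(1.0, len(p) / 200),
    seed_prompt="Classify the complaint into one of six products.",
    n_rounds=3,
)
print(best, score)
```

With real LLM calls plugged in for both functions, this is the "automation of prompt engineering" flow: the human writes only the seed prompt and the meta-prompt, and the loop does the rest.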






2. Experiment with a Customer Complaint Classification Task

Similar to our blog post on July 26th, we set up a task to predict which financial product a bank's customer complaint is about. We used an LLM to solve a classification task where it selects one of the following six financial products. We used gemini-2.5-flash for this experiment, with a sample size of 100 customer complaints.

  • Mortgage

  • Checking or savings account

  • Student loan

  • Money transfer, virtual currency, or money service

  • Bank account or service

  • Consumer Loan

In this experiment, the LLM handled the prompt generation, but a meta-prompt was necessary to further improve the resulting prompts. I wrote the meta-prompt as follows. Essentially, it tells the LLM to "please further improve the resulting prompt."

We had the LLM generate 20 prompts, and the results are shown below. The final number is the accuracy: an accuracy of 0.8 means 80 out of 100 cases were correct. Since this dataset comes with labels, calculating the accuracy was easy.

We adopted the second prompt from the list, which had the best accuracy of 0.89 in this experiment. When we ported this prompt to our regular experimental environment and ran it, the accuracy exceeded 0.9, as shown below. We've done this task many times before, but this is the first time we've surpassed 0.9 accuracy. That's amazing!

 






3. What Does the Future of Prompt Engineering Look Like?

As you can see, it seems possible to optimize prompts by leveraging the power of generative AI. Of course, when considering cost and time, the results might not always be worth the effort. Nevertheless, I feel there's a strong need for prompt automation. Researchers worldwide are currently exploring various methods, so many things that aren't possible now will likely become possible in the near future. Prompt engineering techniques will continue to evolve, and I'm looking forward to these technological developments and plan to try out various methods myself.

 

So, what did you think? The ability of an AI agent to fully harness the power of generative AI and improve itself without human intervention is called "recursive self-improvement." At ToshiStats, we will continue to provide the latest updates on this topic. Please look forward to it. Stay tuned!

 

Copyright © 2025 Toshifumi Kuga. All rights reserved

1) LARGE LANGUAGE MODELS AS OPTIMIZERS, Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen, Google DeepMind


Back to object detection after a break! Generative AI shows no signs of slowing down

It's remarkable to see the rapid progress of generative AI. Recently, the improvement in multimodal capabilities, which process information like images and videos in addition to natural language, has been outstanding. This is sometimes referred to as AI's "spatial understanding." Let's briefly experiment with what kind of information generative AI can extract from images to check the performance of the current Gemini 2.5-flash model.



1. Google AI Studio

I'll be using the familiar generative AI development platform, Google AI Studio (1), again. I've prepared a no-code app for spatial understanding. It can display the number of identified objects and their coordinates. For example, for "hands," it shows them like this. It accurately identifies two hands.

 

2. Generative AI Understands the Meaning of Words and Can Identify Objects

So, what about a task that requires understanding the positional relationship between a flower and a hand, such as "a hand holding a flower"? The result is a successful identification.

Conversely, what about a task like "a hand not holding a flower"? The result is also a successful identification. This is impressive; it identified it with no problem.

Next, can it identify an object based solely on its positional relationship? Let's ask it to identify "what's on the hamburg steak." It easily answered "fried egg." While Gemini has been touted for its high-performance image processing since its debut in December 2023, I'm honestly surprised it can do this much.
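For readers who want to try this themselves: Gemini's spatial-understanding prompts typically return a JSON list of objects with a "box_2d" field ([ymin, xmin, ymax, xmax], normalized to a 0-1000 grid) and a "label". The exact shape depends on your prompt, so treat this parser as a hedged sketch rather than a guaranteed schema.

```python
# Convert Gemini-style normalized bounding boxes into pixel coordinates.
import json

def to_pixel_boxes(response_text: str, width: int, height: int) -> list[dict]:
    """Parse a JSON list of {"box_2d": [...], "label": ...} items."""
    boxes = []
    for item in json.loads(response_text):
        ymin, xmin, ymax, xmax = item["box_2d"]  # normalized to 0-1000
        boxes.append({
            "label": item["label"],
            "x0": int(xmin / 1000 * width),  "y0": int(ymin / 1000 * height),
            "x1": int(xmax / 1000 * width),  "y1": int(ymax / 1000 * height),
        })
    return boxes

# Example response text (made up for illustration)
sample = '[{"box_2d": [100, 200, 500, 800], "label": "hand"}]'
print(to_pixel_boxes(sample, width=1000, height=1000))
```

Once converted to pixel coordinates, the boxes can be drawn over the image with any plotting library, which is essentially what the no-code app does.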

 

3. Can It Identify Station Names from a Sign?

Let's try a slightly more difficult task. This is a section of a subway station sign in Kuala Lumpur, the capital of Malaysia. Let's see if it can identify the three stations between Ampang Park and Chan Sow Lin from this image of the sign.

The result was that it accurately identified the three stations. This is a task that requires it to not only read the text in the image correctly but also understand the positional relationship of the stations. It accomplished this without any difficulty. I have nothing more to say; it's amazing!

 

What do you think? I'm sure many of you are surprised by the high level of spatial understanding. Generative AI is still in its early stages, so its performance will continue to improve, and accordingly, its practical applications will expand. It's something to look forward to. Also, I created this AI app on Google AI Studio without writing any code. Google AI Studio is very user-friendly and high-performing. I encourage you all to try it. Toshi Stats will continue to challenge itself to build various AI apps. Please stay tuned!

 
 

Copyright © 2025 Toshifumi Kuga. All rights reserved

1) Google AI Studio


I tried creating and implementing an AI app with no-code on Google AI Studio, and it was amazing!

Google has been rapidly releasing generative AI and related products recently, with Google AI Studio (1) particularly standing out as a developer platform. It integrates the latest image and video generation AI, truly embodying a multimodal platform. What's more, it's free up to a certain limit, making it a powerful ally for startups like ours. So, let's actually create an AI application with this platform!


1. Google AI Studio Portal

Below is the Google AI Studio portal. It has so many features that an AI beginner might get confused without prior knowledge. I suppose that's why it's a developer-oriented platform. By clicking the button in the red box, you'll be taken to a site where you can create an application simply by writing a prompt.

Google AI Studio

Here's the prompt I used this time.

"As a 'Complaint Categorization Agent,' you are an expert at understanding which product a customer is complaining about. You can select only one product from the complaint. Comprehensively analyze the provided complaint and classify it into one of the following categories:

  • Mortgage

  • Checking or savings account

  • Student loan

  • Money transfer, virtual currency, or money service

  • Bank account or service

  • Consumer Loan

Your output should be only one of the above categories. All samples must be classified into one of these classes. Results for all samples are required. Create a GUI that adds the ability to input a CSV file of customer complaints and generate a graph showing the distribution of customer complaint classes. Add features to the GUI to add labeled data independently of the customer complaint CSV file, calculate and display accuracy, and display a confusion matrix of the results."

Just by typing this prompt into the box and running it, the application described below is created. I didn't use any coding like Python at all. It's amazing!
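To see what sits behind such an app, here is a minimal sketch of the same classification flow driven from Python. The `model_call` callable, the helper names, and the stub answer are my own illustration, not Google AI Studio's generated code:

```python
# Build the categorization prompt and validate model output against the
# allowed categories. `model_call` is a stand-in for whatever generative
# AI backend is used (Google AI Studio, a local model, etc.).

CATEGORIES = [
    "Mortgage",
    "Checking or savings account",
    "Student loan",
    "Money transfer, virtual currency, or money service",
    "Bank account or service",
    "Consumer Loan",
]

def build_prompt(complaint: str) -> str:
    """Assemble the categorization prompt for one complaint."""
    bullet_list = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "As a 'Complaint Categorization Agent', classify the complaint "
        "into exactly one of the following categories:\n"
        f"{bullet_list}\n\n"
        f"Complaint: {complaint}\n"
        "Answer with the category name only."
    )

def classify(complaint: str, model_call) -> str:
    """Call the model and force the answer into a valid category."""
    answer = model_call(build_prompt(complaint)).strip()
    if answer not in CATEGORIES:
        raise ValueError(f"Model returned an unknown category: {answer!r}")
    return answer

# Example with a stub model that always answers "Mortgage".
stub = lambda prompt: "Mortgage"
print(classify("My lender miscalculated my monthly payment.", stub))  # Mortgage
```

Keeping the category list in one place makes it easy to extend the task to more classes later, as the post notes is possible.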



2. Tackling a Real Classification Task with the Created App

After two or three attempts, the final application I built is shown below. It handles the task of classifying bank customer complaints by financial product. This time, I've set it to six types of financial products, but generative AI can achieve high accuracy even without prior training, so it's possible to classify many more classes if desired.

Input Screen

We import customer complaints via a CSV file; this time, I'll use 100 complaints. I've also added functionality to output accuracy and a confusion matrix when ground-truth data is available. Below are the actual classification results. The distribution of the six financial products is displayed. It seems this customer complaint data primarily concerns mortgages.

Class Distribution

Here's the crucial classification accuracy. This time, we achieved 83% accuracy without any prior training. It's incredible!

Classification accuracy

The confusion matrix, often used in classification tasks, can also be displayed. Beyond a single accuracy number, it shows where classification errors cluster, which makes it easier to set concrete guidelines for improving the system.

Confusion Matrix
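The accuracy and confusion-matrix features can be reproduced in a few lines of standard-library Python. A minimal sketch with invented toy labels, not the app's actual data:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows = true class, columns = predicted class."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

labels = ["Mortgage", "Student loan"]
y_true = ["Mortgage", "Mortgage", "Student loan", "Student loan"]
y_pred = ["Mortgage", "Student loan", "Student loan", "Student loan"]

print(accuracy(y_true, y_pred))                  # 0.75
print(confusion_matrix(y_true, y_pred, labels))  # [[1, 1], [0, 2]]
```

Reading the matrix row by row immediately shows which true class is being confused with which prediction, which is exactly the diagnostic value described above.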

 

3. Agent Evaluation

What I realized when creating this app was that having an evaluation metric deepens the quality of subsequent improvement discussions. Trying just a few samples won't give you a good grasp of the generative AI's behavior. Preparing at least 10, and preferably 100 or more, samples with corresponding ground-truth data, and having the AI app output evaluation metrics, enables effective suggestions for improving accuracy. This theme is called "agent evaluation," and I believe it will become essential for building practical AI applications.
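As a concrete sketch of such an evaluation loop: run an agent over labeled samples and report accuracy plus the failure cases. The `stub_agent` and the samples below are invented for illustration:

```python
def evaluate_agent(agent, samples):
    """Run the agent over (input, ground_truth) pairs and report accuracy
    plus the misclassified cases, which guide the next round of prompt fixes."""
    errors = []
    correct = 0
    for text, truth in samples:
        prediction = agent(text)
        if prediction == truth:
            correct += 1
        else:
            errors.append((text, truth, prediction))
    return correct / len(samples), errors

# Stub agent: labels anything mentioning "loan" as a Student loan complaint.
stub_agent = lambda text: "Student loan" if "loan" in text else "Mortgage"

samples = [
    ("My student loan servicer lost my payment", "Student loan"),
    ("Problems with my mortgage escrow account", "Mortgage"),
    ("The loan forgiveness form was rejected", "Student loan"),
]
acc, errors = evaluate_agent(stub_agent, samples)
print(acc)     # 1.0
print(errors)  # []
```

The returned error list is the useful part in practice: inspecting it tells you which kinds of inputs the prompt still mishandles.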

 

What do you think? Despite not doing any programming at all this time, I was able to create such an amazing AI application. Google AI Studio integrates perfectly with Google Cloud, allowing you to deploy your app to the cloud with a single button and use it worldwide. Toshi Stats will continue to challenge ourselves by building various AI applications. Stay tuned!

 

Copyright © 2025 Toshifumi Kuga. All rights reserved

1) Google AI Studio


Prompt Engineering Mastery: The Fast Track

Since the debut of ChatGPT at the end of November 2022, the way we give instructions to computers has completely changed. Previously, programming languages like Python were necessary, but with ChatGPT, it's now possible to give instructions using the "natural languages" we use every day, such as English and Japanese. These natural language instructions are called "prompts." It has been about two and a half years since prompts came into use, and many people are likely experimenting with various prompts daily. As this is a new technology, systematically learning it can be challenging. However, Google has released a free white paper (1) of over 60 pages on the topic, so let's explore it for some hints. Let's begin!

 

1. Grasping the Basic Concepts

We often see simple prompt guides like "The Top 20 Prompts You Need to Know." However, it's impossible to effectively interact with a generative AI, which holds a vast amount of knowledge, with just about 20 prompts. While it may seem like a shortcut, memorizing a recommended list of 20 prompts each time is laborious and inefficient. Various studies are being conducted on how to write prompts, and the theoretical background is being investigated. While it's difficult for the average person to grasp everything, Google's white paper summarizes it concisely as follows:

  • Zero-shot prompting

  • Few-shot prompting

  • System prompting

  • Role prompting

  • Contextual prompting

  • Step-back prompting

  • Chain of thought

  • Self-consistency

  • Tree of thoughts

For example, the second method, "Few-shot prompting," is a technique to elicit more accurate answers from a generative AI by providing it with specific examples in "question and answer pairs." The other methods also have their own theoretical backgrounds and wide ranges of application. Rather than rote memorization, it's important to first understand the concepts and then apply them. I cannot explain them all here, so I encourage you to read the original document. I recommend taking your time to learn them one by one.
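As an illustration, a few-shot prompt can be assembled mechanically from question-and-answer pairs. A minimal sketch with invented examples:

```python
def few_shot_prompt(examples, query):
    """Prepend question-answer pairs so the model can infer the pattern,
    then leave the final answer slot open for the model to fill."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {query}\nA:"

examples = [
    ("Classify the sentiment: 'I love this cake'", "positive"),
    ("Classify the sentiment: 'The service was slow'", "negative"),
]
print(few_shot_prompt(examples, "Classify the sentiment: 'Great atmosphere'"))
```

The same template works for zero-shot prompting by passing an empty example list, which makes the two techniques easy to compare on the same task.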

 

2. Memorize Useful Words

That said, taking the first step to actually write a prompt can be quite daunting. Google has provided a list of recommended verbs, which I'd like to introduce here. Choosing from these verbs to craft your prompts might help you create good ones, so it's worth a try.

Act, Analyze, Categorize, Classify, Contrast, Compare, Create, Describe, Define, Evaluate, Extract, Find, Generate, Identify, List, Measure, Organize, Parse (especially for sentences and data grammatically), Pick, Predict, Provide, Rank, Recommend, Return, Retrieve (information, etc.), Rewrite, Select, Show, Sort, Summarize, Translate, Write

When you're unsure what to write, these verbs might give you a hint. This list includes many that I frequently use myself.

 

3. Finding Hints from Actual Examples

When you actually try out prompts, you'll find that some cases work well while others don't. The white paper summarizes these into 15 Best Practices. Here, I'll introduce an example from page 56.

Be specific about the output

"Be specific about the desired output. A concise instruction might not guide the LLM enough or could be too generic. Providing specific details in the prompt (through system or context prompting) can help the model to focus on what's relevant, improving the overall accuracy.

Examples:

DO: Generate a 3 paragraph blog post about the top 5 video game consoles. The blog post should be informative and engaging, and it should be written in a conversational style.

DO NOT: Generate a blog post about video game consoles."
Indeed, we tend to write simple prompts like the bad example. However, if we can add a bit more information and write like the good example, the information we receive will be better tailored to our needs. Just knowing this can change how you write prompts from now on. This white paper is full of such examples, so I highly recommend you read it for yourself.

 

How was that? I hope this serves as a reference for your prompt learning journey. Prompt engineering is still in its infancy, making it a great time to start learning. Let's conclude with a message from Google: "You don’t need to be a data scientist or a machine learning engineer – everyone can write a prompt. (1)"

Stay tuned!




Copyright © 2025 Toshifumi Kuga. All rights reserved

1) "Prompt Engineering", Google, Feb 2025


How can we achieve best practices for constructing multi-agent AI systems?

Lately, I've been hearing a lot about multi-agent AI systems. As someone who is always thinking about not just using these services but building them myself, I've been keen to know how to construct high-performance AI agents. Last week, Anthropic published an article titled, "How we built our multi-agent research system(1)," which describes their construction method in detail. So today, using this article as a reference, I'd like to explore the best practices for creating multi-agent AI systems with all of you. Let's get started!

 



1. Why do we need so many agents?

ChatGPT, which debuted at the end of November 2022, was a single model. Since then, several services using generative AI have appeared, but initially, most of them used a single AI. So why have we recently seen a rise in methods that connect multiple generative AIs to operate as a single system? I believe it's because it has become clear that there are limits to what a single generative AI can accomplish when faced with complex tasks. It has gradually become apparent that by connecting and integrating several agents, even complex tasks can be handled. This trend has become particularly noticeable in conjunction with the performance improvements of standalone generative AI models like Gemini 1.5 Pro and OpenAI's o3.

 

2. What kind of agent structure should we build?

The Anthropic article included a wonderful chart that I'd love to reference. The key lies with the "Lead agent" and the "sub-agents" placed beneath it.

Here is Anthropic's explanation: "The multi-agent architecture in action: user queries flow through a lead agent that creates specialized subagents to search for different aspects in parallel." While the chart shows three sub-agents, more may naturally be needed to handle more complex tasks.
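The lead-agent/sub-agent pattern in that chart can be sketched in plain Python. This is my own toy illustration of the architecture, not Anthropic's implementation; the model is stubbed so the example runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(subtask, model_call):
    """Each sub-agent researches one narrowly scoped aspect."""
    return model_call(f"Research and summarize: {subtask}")

def lead_agent(query, model_call, n_subagents=3):
    """Decompose a query into subtasks, fan them out to sub-agents in
    parallel, then synthesize the partial answers into one response."""
    subtasks = [f"Aspect {i + 1} of: {query}" for i in range(n_subagents)]
    with ThreadPoolExecutor(max_workers=n_subagents) as pool:
        partials = list(pool.map(lambda t: sub_agent(t, model_call), subtasks))
    return model_call("Synthesize these findings:\n" + "\n".join(partials))

# Stub model that just echoes the first line of its prompt.
echo = lambda prompt: prompt.splitlines()[0]
print(lead_agent("the semiconductor shortage", echo))
```

In a real system each `model_call` would be an LLM request, and the lead agent's decomposition step would itself be model-generated rather than templated.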

 

3. How do you coordinate many agents?

I've described the move to multi-agent AI as if it's all upside, but it requires numerous AI agents to function as expected. Getting a desired response from even a single generative AI can be quite a challenge, so is it possible to control multiple, simultaneously operating AI agents to meet our expectations? The key seems to lie in the "prompt." In fact, the Anthropic article contains many very helpful methods for prompt creation. Here, I'd like to introduce two representative examples; for the rest, I highly recommend reading the original article yourself.

"Teach the orchestrator how to delegate. In our system, the lead agent decomposes queries into subtasks and describes them to subagents. Each subagent needs an objective, an output format, guidance on the tools and sources to use, and clear task boundaries. Without detailed task descriptions, agents duplicate work, leave gaps, or fail to find necessary information."

"Guide the thinking process. Extended thinking mode, which leads Claude to output additional tokens in a visible thinking process, can serve as a controllable scratchpad. The lead agent uses thinking to plan its approach, assessing which tools fit the task, determining query complexity and subagent count, and defining each subagent's role."

In a nutshell, I think it comes down to "describing things meticulously." Apparently, simple and short instructions like "Research the semiconductor shortage" did not work well, so it seems necessary to write prompts for multi-agent AI as meticulously as possible. I'm going to work on writing better prompts from now on.

 

What did you think? It appears that various techniques are necessary to make multi-agent AI systems operate as intended. As the performance of generative AI improves in the future, the required orchestration techniques will also change. I want to continue to stay updated and incorporate the latest cutting-edge technologies. That's all for today. Stay tuned!



Toshi Stats Co., Ltd. provides a wide range of AI-related services. Please see here for more details!

Copyright © 2025 Toshifumi Kuga. All rights reserved

1) "How we built our multi-agent research system", Anthropic, June 13, 2025


The Cutting Edge of Prompt Engineering: A Look at a Silicon Valley Startup

Hello everyone. How often do you find yourselves writing prompts? I imagine more and more of you are writing them daily and conversing with generative AI. So today, we're going to look at the state of cutting-edge prompt engineering, using a case study from a Silicon Valley startup. Let's get started.

 

1. "Parahelp," a Customer Support AI Startup

There's a startup in Silicon Valley called "Parahelp" that provides AI-powered customer support. Impressively, they have publicly shared some of their internally developed prompt know-how (1). In the hyper-competitive world of AI startups, I want to thank the Parahelp management team for generously sharing their valuable knowledge to help those who come after them. The details are in the link below for you to review, but my key takeaway from their know-how is this: "The time spent writing the prompt itself isn't long, but what's crucial is dedicating time to the continuous process of executing, evaluating, and improving that prompt."

When we write prompts in a chat, we often want an immediate answer and tend to aim for "100% quality on the first try." However, it seems the style in cutting-edge prompt engineering is to meticulously refine a prompt through numerous revisions. For an AI startup to earn its clients' trust, this expertise is essential and may very well be the source of its competitive advantage. I believe "iteration" is the key for prompts as well.

 

2. Prompts That Look Like a Computer Program

Let's take a look at a portion of the published prompt. This is a prompt for an AI agent to behave as a manager, and even this is only about half of the full version.

structures of prompts

Here is my analysis of the prompt above:

  • Assigning a persona (in this case, the role of a manager)

  • Describing tasks clearly and specifically

  • Listing detailed, numbered instructions

  • Providing important points as context

  • Defining the output format

I felt it adheres to the fundamental structure of a good prompt. Perhaps because it has been forged in the fierce competition of Silicon Valley, it is written with incredible precision. There's still more to it, so if you're interested, please view it from the link. It's written in even finer detail, and with its heavy use of XML tags, you could almost mistake it for a computer program. Incredible!
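That XML-tagged style is easy to emulate. Below is a sketch of the structure as I read it; the tag names and content are my own guesses, not Parahelp's actual schema:

```python
def manager_prompt(task, instructions, context, output_format):
    """Compose a prompt in the XML-tagged style: persona, task, numbered
    instructions, context, and output format as clearly separated tags."""
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(instructions))
    return (
        "<persona>You are a customer-support manager.</persona>\n"
        f"<task>{task}</task>\n"
        f"<instructions>\n{numbered}\n</instructions>\n"
        f"<context>{context}</context>\n"
        f"<output_format>{output_format}</output_format>"
    )

prompt = manager_prompt(
    task="Approve or reject the agent's draft reply.",
    instructions=["Check the policy.", "Verify the tone.", "Decide."],
    context="Refunds over $100 need escalation.",
    output_format="Return exactly one word: APPROVE or REJECT.",
)
print(prompt)
```

The tags don't need to be valid XML for the model's sake; they simply give each section an unambiguous boundary, which is what makes such prompts look like programs.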

 

3. The Future of Prompt Engineering

I imagine that committing this much time and cost to prompt engineering is a high hurdle for the average business person. After learning the basics of prompt writing, many people struggle with what the next step should be.

One tip is to take a prompt you've written and feed it back to the generative AI with the task, "Please improve this prompt." This is called a "meta-prompt." Of course, the challenges of how to give instructions and how to evaluate the results still remain. At Toshi Stats, we plan to explore meta-prompts further.
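A meta-prompt can be as simple as wrapping an existing prompt in an improvement request. A minimal sketch, with an echo stub standing in for the generative AI:

```python
def meta_prompt(original_prompt):
    """Wrap an existing prompt in an improvement request."""
    return (
        "Please improve this prompt. Make it more specific, state the "
        "desired output format, and keep its original intent:\n\n"
        f"---\n{original_prompt}\n---"
    )

def improve(prompt, model_call, rounds=2):
    """Iterate the improvement loop; `model_call` is a stand-in for any
    generative AI backend."""
    for _ in range(rounds):
        prompt = model_call(meta_prompt(prompt))
    return prompt

stub = lambda p: p  # echo stub so the example runs without an API
print(improve("Summarize this article.", stub, rounds=1))
```

The open questions the post raises, how to word the improvement instruction and how to judge the result, map directly onto the `meta_prompt` text and onto evaluating each round's output.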

 

So, what did you think? Even the simple term "prompt" has a lot of depth, doesn't it? As generative AI continues to evolve, and as methods for creating multi-AI agents advance, I believe prompt engineering itself will also continue to evolve. It's definitely something to keep an eye on. I plan to provide an update on this topic in the near future.

That's all for today. Stay tuned!

 

ToshiStats Co., Ltd. offers various AI-related services. Please check them out here!

 

Copyright © 2025 Toshifumi Kuga. All rights reserved.

1) "Prompt design at Parahelp", Parahelp, May 28, 2025

 











What Will White-Collar Jobs Be Like in 2030? What Should We Do Now?

As many of you may know, Dario Amodei has issued a stark warning. Roughly speaking, he stated, "The demand for entry-level jobs, such as those performed by new graduates, will be cut in half. This will become a reality within the next one to five years." This is shocking news, and the fact that it came from the CEO of a company actually developing generative AI has made it a global topic of discussion. In this article, I'd like to delve deeper into this matter.

 

1. Dario Amodei's Warning

He is the co-founder and CEO of Anthropic, a U.S. company developing generative AI. He holds a Ph.D. in Physics from Princeton University, and from what I've seen, he strikes me more as a researcher than a business executive. I've been following his statements for the past two years, and I remember them being relatively conservative. I thought they were consistent with his researcher-like nature. However, this time he stated, "We are not keeping up with the pace of AI evolution," and "Unemployment rates will be 10% to 20%" (1), which shocked the world. I don't recall similar warnings from other frontier model development companies like OpenAI or Google DeepMind. This is why his latest statement garnered so much attention.

 

2. Current Performance of Generative AI

Currently, generative AI indeed possesses sufficient ability to handle entry-level tasks. As I mentioned before, Google Gemma 3, an open-source generative AI, achieved an accuracy of around 80% without any specific tuning for a 6-class classification task of bank customer complaints. Typically, relatively simple tasks like "Which product does this complaint relate to?" are assigned to new employees, and they learn the ropes through these assignments. However, with generative AI's performance reaching this level, management will undoubtedly lose the incentive to assign tasks to new employees at a cost. It's not yet clear whether the impact will be as significant as half of entry-level jobs disappearing, but given that even free generative AI can achieve around 80% accuracy today, a considerable impact is inevitable.

 

3. So, What Should We Do?

There is a division of opinion among experts regarding when AGI (Artificial General Intelligence), with capabilities equivalent to human experts, will appear. The most common estimate seems to be around 2030, but honestly, no one knows. If so, we have about five years. In any case, we need to adapt our skills to the advent of AGI. In the past, computers could not be instructed or managed without a programming language. With the emergence of ChatGPT in November 2022, generative AI can now be instructed in natural language, through "prompts." Prompting, however, is not a simple matter: it is an extremely delicate process of finely controlling the behavior of generative AI to precisely fit one's needs, and it's not uncommon for prompts to exceed 20 to 30 lines. While I cannot delve into the detailed techniques here, it is a skill that requires logical prompt writing. Even though prompts can be written in English or Japanese, acquiring this skill takes time and individual training. Given that open-source and free generative AIs are rapidly improving in performance, it is imperative for us as users to learn "prompting," the method of controlling them, regardless of our position or industry.

 

What do you think? It's good that Dario Amodei's warning has sparked more active discussion. As I mentioned in my previous blog post, generative AI is on the verge of implementing recursive self-improvement, gaining the ability for computers to improve themselves. The evolution of generative AI will accelerate further in the future. I believe the time has come to thoroughly learn prompting and prepare for the emergence of AGI. Discussions about AI and employment will continue globally. ToshiStats will keep you updated. Stay tuned!

 
 

ToshiStats Co., Ltd. offers various AI-related services. Please check them out here!



Copyright © 2025 Toshifumi Kuga. All rights reserved

1) AI company's CEO issues warning about mass unemployment, CNN, May 30, 2025

 

Google DeepMind Announces "AlphaEvolve," Hinting at an Intelligence Explosion!

Google DeepMind has unveiled a new research paper today, introducing "AlphaEvolve" (1), a coding agent that leverages evolutionary computation. It's already garnering significant attention due to its broad applicability and proven successes, such as discovering more efficient methods for matrix calculations in mathematics and improving efficiency in Google's data centers. Let's dive a little deeper into what makes it so remarkable.

 

1. LLMs Empowered with Evolutionary Computation

In a nutshell, "AlphaEvolve" can be described as an "agent that leverages LLMs to the fullest to evolve code." To briefly touch on "evolutionary computation": it is a class of algorithms that mimics biological evolution to improve systems, replicating genetic crossover and mutation on a computer. Traditionally, the function responsible for this, called an "operator," had to be designed by humans. "AlphaEvolve" automates the creation of operators with the support of LLMs, enabling more efficient code generation. That sounds incredibly powerful! While evolutionary computation itself isn't new, with practical applications dating back to the 2000s, its combination with LLMs appears to have unlocked new capabilities. The red box in the diagram below indicates where evolutionary computation is applied.
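To make the scheme concrete, here is a toy evolutionary loop in which the variant-proposing operator (played by an LLM in AlphaEvolve) is stubbed as a random mutator. This is my own illustration of evolutionary computation in general, not AlphaEvolve's algorithm:

```python
import random

def evolve(population, fitness, propose_variant, generations=100, seed=0):
    """Generic evolutionary loop: the `propose_variant` operator generates
    candidates from the current best; selection keeps the fittest."""
    rng = random.Random(seed)
    for _ in range(generations):
        parent = max(population, key=fitness)
        child = propose_variant(parent, rng)
        population.append(child)
        # Keep the population size constant: drop the weakest candidate.
        population.remove(min(population, key=fitness))
    return max(population, key=fitness)

# Toy task: evolve a number toward 42. The stub operator nudges the parent;
# a real system would ask an LLM to rewrite code instead.
fitness = lambda x: -abs(x - 42)
mutate = lambda parent, rng: parent + rng.choice([-3, -1, 1, 3])

best = evolve([0, 10, 20], fitness, mutate)
print(best)
```

Swapping `mutate` for an LLM call that rewrites a program, and `fitness` for an automated evaluator of that program, is essentially the substitution the paper describes.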

 

2. Continued Evolution with Meta-Prompts

I'm particularly intrigued by the "prompt_sampler" mentioned above because this is where "meta-prompts" are executed. The paper explains, "Meta prompt evolution: instructions and context suggested by the LLM itself in an additional prompt-generation step, co-evolved in a separate database analogous to the solution programs." It seems that prompts are also evolving! The diagram below also shows that accuracy decreases when meta-prompt evolution is not applied compared to when it is.

This is incredible! With an algorithm like this, I'd certainly want to apply it to my own tasks.

 

3. Have We Taken a Step Closer to an Intelligence Explosion?

Approximately a year ago, researcher Leopold Aschenbrenner published a paper (2) predicting that computers would surpass human performance by 2030 as a result of an intelligence explosion. The graph below illustrates this projection. This latest "AlphaEvolve" can be seen as having acquired the ability to improve its own performance. This might just be a step closer to an intelligence explosion. It's hard to imagine the outcome of countless AI agents like this, each evolving independently, but it certainly feels like something monumental is on the horizon. After all, computers operate 24 hours a day, 365 days a year, so once they acquire self-improvement capabilities, their pace of evolution is likely to accelerate. He refers to this as "recursive self-improvement" (p47).

 



What are your thoughts? The idea of AI surpassing humans can be a bit challenging to grasp intuitively, but just thinking about what AI agents might be like around 2027 is incredibly exciting. I'll be sure to provide updates if a sequel to "AlphaEvolve" is released in the future. That's all for now. Stay tuned!

 


1) "AlphaEvolve: A coding agent for scientific and algorithmic discovery", Alexander Novikov et al., Google DeepMind, May 16, 2025

2) "Situational Awareness: The Decade Ahead", Leopold Aschenbrenner, June 2024


 


Copyright © 2025 Toshifumi Kuga. All rights reserved


We Built a Customer Complaint Classification Agent with Google's New AI Agent Framework "ADK"

On April 9th, Google released a new AI agent framework called "ADK" (Agent Development Kit). It's an excellent framework that incorporates the latest multi-agent technology while also being user-friendly, allowing implementation in about 100 lines of code. At Toshi Stats, we decided to immediately try creating a customer complaint classification agent using ADK.

 

1. Customer Complaint Classification Task

Banks receive various complaints from customers. We want to classify these complaints based on which financial product they concern. Specifically, this is a 6-class classification task where we choose one from the following six financial products; random guessing would yield an accuracy of only about 17% (1 in 6).

Financial products to classify

 

2. Implementation with ADK

Now, let's move on to the ADK implementation. We'll defer to the official documentation for file structure and other details, and instead show how to write the AI agent below. The "instruction" part is particularly important; writing this carefully improves accuracy. This is what's known as a "prompt". In this case, we've specifically instructed it to select only one from the six financial products. Other parts are largely unchanged from what's described in tutorials, etc. It has a simple structure, and I believe it's not difficult once you get used to it.

AI agent implementation with ADK

 

3. Accuracy Verification

We created six classification examples and had the AI agent provide answers. In the first example, I believe it answered "student loan" based on the word "graduation." It's quite smart! Also, in the second example, it's presumed to have answered "mortgage" based on the phrase "prime location." ADK has a built-in UI like the one shown below, which is very convenient for testing immediately after implementation.

ADK user interface

The generative AI model used this time, Google's "gemini-2.5-flash-04-17," is highly capable. When tasked with a 6-class classification problem using 100 actual customer complaints received by a bank, it typically achieves an accuracy of over 80%. For simple examples like the ones above, it wouldn't be surprising if it achieved 100% accuracy.

 

So, what did you think? This was our first time covering ADK, but I feel it will become popular thanks to its high performance and ease of use. Combined with A2A (2), which Google announced around the same time, I believe use cases will continue to increase. We're excited to see what comes next! At Toshi Stats, we will continue to build even more advanced AI agents with ADK. Stay tuned!

 



1) Agent Development Kit, Google, April 9, 2025
2) Agent2Agent, Google, April 9, 2025

 




Running Google's Generative AI 'Gemma 3' on a MacBook Air M4 is Impressive!

Gemma 3 (1) has been released by Google. While open-source generative AI seemed to be somewhat lagging behind Chinese competitors, it looks like a model capable of competing has finally arrived. Of course, its performance is excellent, but its efficiency, allowing implementation even on a single GPU, is also a key appeal. So, this time, we got our hands on the latest M4 chip-equipped MacBook Air 13 (10-core GPU, 24GB unified memory, 512GB storage) to actually run it locally and check its accuracy and computation speed. Let's get started right away.

 

1. Data Used in the Experiment

Customer complaints submitted to US banks are publicly available (2). We prepared 10,000 of these data points and had Gemma 3 predict which specific financial product each complaint concerns. This is a 6-class classification task, choosing one from the following six financial products; the numbers shown in the image are used as the labels.

 

2. Hardware and Software Used

We prepared the latest model of the MacBook Air 13. To implement Gemma 3 locally, we used Ollama (3). This software is often used for implementing generative AI on PCs; it lacks a UI, but is consequently lightweight and easy to use. Additionally, to enable easy swapping of the generative AI with different models in the future, we built the classification process using LangChain (4). The generative AI model used this time was Gemma3-12B-it, downloaded via Ollama.
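As a rough illustration of this setup, here is a minimal classification loop. The post used LangChain on top of Ollama; this sketch instead calls Ollama's local REST API directly with only the standard library, and the model tag and label set are illustrative assumptions, not the post's exact configuration.

```python
# Sketch of the complaint-classification loop. Assumes an Ollama server is
# running locally; the model tag "gemma3:12b" and the 1-6 label scheme are
# illustrative assumptions.
import json
import urllib.request

LABELS = {"1", "2", "3", "4", "5", "6"}  # six financial-product classes

def build_prompt(complaint: str) -> str:
    """Compose the classification prompt for one complaint."""
    return (
        "Classify the following bank-customer complaint into exactly one of "
        "six financial products. Answer with the label number (1-6) only.\n\n"
        f"Complaint: {complaint}\nLabel:"
    )

def parse_label(response_text: str) -> str:
    """Extract the first valid label digit from the model's reply."""
    for ch in response_text:
        if ch in LABELS:
            return ch
    return "unknown"

def classify(complaint: str, model: str = "gemma3:12b") -> str:
    """Send one complaint to a locally running Ollama server."""
    payload = json.dumps(
        {"model": model, "prompt": build_prompt(complaint), "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_label(json.loads(resp.read())["response"])
```

Looping `classify` over the 10,000 complaints and comparing against the true labels reproduces the experiment's structure; swapping the model tag is all that's needed to try a different local model.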

 

3. Confusion Matrix Showing Results

We ran the classification on all 10,000 samples. Although the model was used out of the box, without fine-tuning, it achieved a solid accuracy of 0.7558. Despite the considerable sample size, the computation time was about 14 hours (roughly 5 seconds per complaint), manageable within a day. The latest M4 chip truly is powerful. Looking at the confusion matrix, distinguishing between "Bank account or service" and "Checking or savings account" appears to have been the hardest part.
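The accuracy and confusion-matrix bookkeeping behind numbers like these is simple to reproduce. A minimal pure-Python version, shown here on toy predictions rather than the actual 10,000 results:

```python
# Confusion-matrix and accuracy bookkeeping for a 6-class task,
# demonstrated on toy labels (0-5), not the post's actual results.
from collections import Counter

def confusion_matrix(y_true, y_pred, n_classes=6):
    """Rows = true class, columns = predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in range(n_classes)] for t in range(n_classes)]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0, 0, 1, 2, 2, 5]
y_pred = [0, 1, 1, 2, 2, 5]   # one error: a class-0 item predicted as class 1
cm = confusion_matrix(y_true, y_pred)
acc = accuracy(y_true, y_pred)  # 5 of 6 correct
```

Off-diagonal cells such as `cm[0][1]` are exactly where confusions like "Bank account or service" vs. "Checking or savings account" show up.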

 

Conclusion

So, what did you think? While I've tried various generative AIs in the past, this was my first time experimenting with 10,000 samples. The classification accuracy was good, and above all, not having to worry about costs is one of the advantages of running generative AI locally. Also, while the analysis data used this time is public, some tasks involve confidential information that cannot be uploaded to the cloud. In such cases, the analysis method presented here becomes a valid solution. I highly encourage everyone to give it a try. We plan to conduct more experiments using various generative AIs, so please look forward to them. Stay tuned!

1) gemma3 https://blog.google/technology/developers/gemma-3/

2) Consumer Complaint Database https://www.consumerfinance.gov/data-research/consumer-complaints/

3) Ollama https://ollama.com/

4) LangChain https://www.langchain.com/






DeepSeek-R1's Impact and the Future of Generative AI

Hello, DeepSeek-R1, released on January 20th (1), has sparked excitement among AI professionals and investors worldwide. I believe it's had an impact comparable to that of ChatGPT's emergence. Here, I'd like to consider why it has garnered so much global attention.

 

1. What Was New?

DeepSeek-R1's performance is remarkable. It stands shoulder to shoulder with OpenAI's o1, the established leader among reasoning models. Below is a comparison across various benchmarks, where DeepSeek-R1 rivals o1. That a newcomer suddenly matched OpenAI, the frontrunner in generative AI, is exactly why the world is so astonished.

Performance comparison across various benchmarks


While DeepSeek-R1 appeared suddenly like a comet, it rests on several technical breakthroughs. Among the most significant is a training method called "GRPO." Like some existing generative AI models, DeepSeek-R1 uses reinforcement learning to acquire advanced reasoning abilities in mathematics and coding. Reinforcement learning is a powerful training technique that doesn't require so-called "correct answer data," but it is complex and resource-intensive. DeepSeek adopted a method that requires only one model instead of the usual two: this is GRPO. The overview below shows PPO (top), the common technique in existing models, and GRPO (bottom), the new method.

PPO vs GRPO

In comparison with PPO, GRPO has no value model, only a policy model, so one model suffices where two were needed. Since each "model" here is a massive generative AI, completing training with a single model has a huge impact on resource savings. The fact that DeepSeek-R1 achieved such results while its developer, a Chinese company, cannot use the latest GPUs due to US semiconductor export restrictions may well be related to this. For technical details, see the DeepSeek-R1 paper (2); GRPO itself was first introduced in an earlier paper (3).
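The core trick is easy to state: instead of a learned value model providing the baseline, GRPO samples a group of responses per prompt and uses the group's own reward statistics as the baseline. A minimal sketch of that group-relative advantage, following the formula in the DeepSeek papers:

```python
# Group-relative advantage as described in the GRPO papers: for a group of
# responses sampled from one prompt, each advantage is the reward minus the
# group mean, scaled by the group standard deviation. No value model needed.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group of rewards into advantages: (r - mean) / (std + eps)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled answers to one prompt, scored by a rule-based reward:
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# Above-average answers get positive advantage, below-average negative.
```

The policy model is then updated to make high-advantage responses more likely, which is why the entire value network of PPO can be dropped.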

 

2. Why Did It Attract Global Attention?

DeepSeek-R1 was released as an open-weight model, available for anyone to download and use. Additionally, the entire training method, including GRPO, was published in detail in research papers. Until now, most generative AI models, with a few exceptions, could only be accessed via APIs and not downloaded. Furthermore, how they were trained was rarely disclosed, making them black boxes. In this context, the release of DeepSeek-R1, a cutting-edge model, in a usable form for AI researchers worldwide had a profound impact. Even if a model is called amazing, if the inner workings are unknown, neither criticism nor improvement suggestions can be made. With DeepSeek-R1, I feel that the open-source community can, for the first time, participate in the development of the most advanced generative AI models.

 

3. What Will Become of Generative AI in the Future?

AI developers around the world are already starting to adopt methods like GRPO in developing state-of-the-art models, and DeepSeek-R1 has proven this is possible without enormous costs. I'm currently following a public project called "Open-R1" (4), which plans to disclose the training data and code that were not released with DeepSeek-R1, and I believe this is revolutionary.

Open-R1

Of course, it is expected that such projects will start worldwide, and I am looking forward to that. It's exciting!

 

How was it? The landscape surrounding generative AI has changed in an instant. New generative AI models will continue to be created. It's really hard to take your eyes off of it. I will continue to deliver further news. Stay tuned!


Marketing AI agents for customer targeting in telemarketing can also be easily implemented using the new library "smolagents." This looks promising!

1. Marketing AI Agent

To reach potential customers efficiently, you need to target those who are likely to purchase your products or services; marketing to customers with no need for them is usually wasteful and unsuccessful. However, identifying which customers to focus on from a large customer list beforehand is a challenging task. To make such targeting easy, without complex analysis, provided you have customer-related data at hand, we implemented a marketing AI agent this time. Anyone with basic Python knowledge should be able to build it without much difficulty. The secret lies in the latest framework "smolagents" (1), which we introduced previously. Please refer to the official documentation for details.

 

2. Agent Predicting Potential Customers for Deposit-Taking Telemarketing

Let's actually build an AI agent. The theme is "Predicting potential customers for deposit-taking telemarketing with an AI agent using smolagents." As before, by providing data, we want the AI agent itself to internally code using Python and automatically display "the top 10 customers most likely to be successfully reached by telemarketing."

For coding details, please refer to the official documentation; here, we present the prompt we wrote to make the AI agent predict potential customers for deposit-taking telemarketing. The key point, as before, is to instruct it to "use sklearn's HistGradientBoostingClassifier for data analysis." This is a gradient-boosting implementation highly regarded for its accuracy and ease of use.

Furthermore, as the question (instruction), we specifically ask it to calculate "the purchase probability of the 10 customers most likely to be successful." The input to the AI agent takes the form of "prompt + question."
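The "prompt + question" input described above might be assembled as follows. The exact wording, the empty tool list, and the authorized imports are illustrative assumptions; the smolagents call itself is kept inside a function because it requires the library and model access.

```python
# How the "prompt + question" input might be assembled. Wording and settings
# are illustrative assumptions, not the post's exact prompt.

PROMPT = (
    "You are a marketing data analyst. Load the telemarketing customer data, "
    "then use sklearn's HistGradientBoostingClassifier to model the "
    "probability that each customer subscribes to a term deposit."
)
QUESTION = (
    "Report the 10 customers with the highest predicted purchase probability."
)

task = PROMPT + "\n\n" + QUESTION

def run_agent(task: str):
    """Run the task with a smolagents CodeAgent (requires `pip install smolagents`)."""
    from smolagents import CodeAgent, HfApiModel  # imported lazily on purpose
    agent = CodeAgent(
        tools=[],
        model=HfApiModel(),
        additional_authorized_imports=["pandas", "sklearn"],
    )
    return agent.run(task)

# run_agent(task)  # executes only with smolagents installed and model access
```

`additional_authorized_imports` matters here: the agent's generated code is sandboxed, so sklearn and pandas must be explicitly allowed for the analysis to run.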

Then, the AI agent automatically generates Python code like the following, doing this work in place of a human. As a result, "the top 10 customers most likely to be successfully marketed to" are presented, some with purchase probabilities close to 100%. Amazing!

         "Top 10 customers most likely to be successfully marketed to"

In this way, the user only needs to instruct "tell me the top 10 customers most likely to be successful," and the AI agent writes the code to calculate the purchase probability for each customer. This method can also be applied to various other things. I'm looking forward to future developments.

 

3. Future Expectations for Marketing AI Agents

As before, we implemented it with "smolagents" this time as well. It's easy to implement, and although the behavior isn't perfect, it's reasonably stable, so we plan to actively use it in 2025 to develop various AI agents. The code from this time has been published as a notebook (2). Also, the data used this time is relatively simple demo data with over 40,000 samples, but given the opportunity, I would like to try how the AI agent behaves with larger and more complex data. With more data, the possibilities will increase accordingly, so we can expect even more. Please look forward to the next AI agent article. Stay tuned!

 
 
 









I tried using the new AI agent framework "smolagents". The code is simple and easy to use, and I recommend it for AI agent beginners!

At the end of last year, a new AI agent framework called "smolagents" was released by Hugging Face (1). The code is simple and easy to use, and it even supports multi-agent setups. This time, I built a data-analysis AI agent and tried various things. I hope you find it helpful.

 

1. Features of "smolagents"
The newly released "smolagents" has features that existing frameworks lack:

1) Simple structure. You can execute an AI agent with 3 to 5 lines of code, making it perfect for those getting started with AI agents.

2) Hugging Face integration. Since it was released by Hugging Face, the huge number of open-source models already on the Hub can be called and used easily. Proprietary models such as GPT-4o are also supported, so it works with both open and closed models.

3) Code as action. When an agent executes, it generates and runs Python code, so you can draw on the assets of the vast Python ecosystem. For those of us who specialize in data analysis, being able to use libraries such as sklearn makes it a perfect fit.

 

2. An Agent for Predicting Credit Card Defaults

Now, let's actually build an AI agent. The theme is "an AI agent built with smolagents predicts credit card defaults." Normally, when building a default-prediction model, you would code it yourself with machine learning libraries such as sklearn; this time, I want to hand the agent the data, have it write the Python internally, and automatically display the default probabilities of the first 10 customers.

For how to write the code, please refer to the official documentation; here I present the prompts I actually wrote to make the AI agent predict defaults. The point is to specifically instruct it to "use sklearn's HistGradientBoostingClassifier for data analysis." This library is highly regarded for building accurate machine learning models with little effort. That is data-analysis domain knowledge, and by including it in the prompt, we expect higher accuracy.

Furthermore, as the question, I add an instruction to specifically calculate "the default probability of 10 customers." The input to the AI agent takes the form of "prompt + question."

Then, the AI agent automatically generated the following Python code. Normally, this is what I would write myself, but the AI agent does it for me, and the resulting default probabilities for the 10 customers are displayed. Amazing!

In this way, the user only needs to say "use sklearn to calculate the default probability," and the AI agent writes the code that computes it for each customer. I tried default prediction this time, but the same approach should apply to probability prediction in any business area, such as marketing, customer churn, and human resources. I'm looking forward to future developments.

 

3. Impressions after using "smolagents" for the first time

Until now, I used LangGraph to implement AI agents. I liked the fine-grained control it offers, but having to code each state, tool, node, and edge made the entry hurdle high for beginners. Implementing with "smolagents" this time, I found that if you follow the template, everything runs after just a few lines, so anyone can start. It also fully meets the needs of AI developers, so I plan to use it actively in 2025 to build various AI agents. I have published this code as a notebook (2). Please look forward to the next AI agent article. Stay tuned!

 

(1) Introducing smolagents, a simple library to build agents, Aymeric Roucher, Merve Noyan, Thomas Wolf, Hugging Face, Dec 31, 2024
(2) AI-agent-to-predict-default-of-credit-card-with-smolagent_20250121







Paying homage to AlphaGo, we've launched our own AI Go project at ToshiStats!

Reinforcement learning has become a hot topic since the release of OpenAI's o1-preview. Looking back, it was Google DeepMind's AlphaGo, released in March 2016, that truly brought reinforcement learning into the public eye. Go, with its vast search space, was traditionally a formidable challenge for computers. Amateur high-dan levels were roughly the limit at the time. However, AlphaGo, combining reinforcement learning and Monte Carlo Tree Search (MCTS), exceeded expert expectations, becoming the first AI Go player to defeat a top professional. Inspired by this, we've launched our own AI Go project, "ToshiStats-Go project," to research reinforcement learning. We're excited to see what we can achieve.

 

1. Creating a Go Game Environment

We've decided to build our own Go game environment from scratch. Given the exceptional coding capabilities of o1-preview, we're using it as a coding assistant for this project. We're iteratively developing the code by requesting o1-preview to generate the Go game environment code, executing it in Google Colab, then requesting further refinements based on the results, and repeating the process. Within a few iterations, we were able to establish a basic framework and a functional environment. While we can't perfectly implement a complex game like Go, we've created something akin to "simple-go." This should be sufficient for implementing reinforcement learning and improving its accuracy. Below is an example of o1-preview's explanation of a code modification. As you can see, it's quite detailed.
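The actual environment lives in the project repository (1); as a rough illustration of the "simple-go" idea, the core of a move with capture checking can be reduced to a few dozen lines. This is a toy sketch under simplified rules (no ko or seki), not the repository's code.

```python
# A toy "simple-go" move: place a stone, then remove any adjacent opponent
# group left with no liberties. Simplified rules only -- no ko or seki.

def neighbors(n, r, c):
    """Yield the on-board orthogonal neighbors of (r, c)."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < n and 0 <= c + dc < n:
            yield r + dr, c + dc

def group_and_liberties(board, r, c):
    """Flood-fill the group at (r, c); return its stones and liberty count."""
    color, n = board[r][c], len(board)
    stones, libs, stack = {(r, c)}, set(), [(r, c)]
    while stack:
        cr, cc = stack.pop()
        for nr, nc in neighbors(n, cr, cc):
            if board[nr][nc] == ".":
                libs.add((nr, nc))
            elif board[nr][nc] == color and (nr, nc) not in stones:
                stones.add((nr, nc))
                stack.append((nr, nc))
    return stones, len(libs)

def play(board, r, c, color):
    """Place a stone; capture neighboring opponent groups with no liberties."""
    board[r][c] = color
    opponent = "W" if color == "B" else "B"
    for nr, nc in neighbors(len(board), r, c):
        if board[nr][nc] == opponent:
            stones, libs = group_and_liberties(board, nr, nc)
            if libs == 0:
                for sr, sc in stones:
                    board[sr][sc] = "."

board = [["." for _ in range(5)] for _ in range(5)]
board[2][2] = "W"                        # a lone white stone
for r, c in ((1, 2), (3, 2), (2, 1)):    # black takes three of its liberties
    board[r][c] = "B"
play(board, 2, 3, "B")                   # final liberty filled: white is captured
```

Even this much is enough for a reinforcement learning agent to interact with: the board is the state, `play` is the transition, and captures feed into the reward.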

                                                      o1-preview's explanation of code modification

 

2. Trying a Game of Go

Let's give it a try! The current AI model plays random moves, so it's not very strong. As shown in the example below, a human can win with careful play. While a 9x9 board is available, the calculations can be time-consuming, so we'll stick with a 5x5 board for now. It's enjoyable enough, and if you'd like to try it yourself, please download the Colab notebook from our Github repository (1). A GPU is not required.

                                                                     Trial run of ToshiStats-Go

 

3. Perfect Go Rules Are Difficult

Go has some very complex rules. In particular, determining the life and death of stones, especially in the endgame, proved challenging, and implementing "ko" and "seki" also looks difficult. Connecting to an external Go system might solve these issues, but for now we'll continue with a lightweight environment that completes all calculations within the notebook, to make reinforcement learning experiments easy. We'll strive to keep this series engaging and easy to follow, comparing our progress with simpler games such as Gomoku (five in a row). We appreciate your continued interest.

 

So, there you have it! We've successfully implemented a Go playing environment in Colab. From here, we'll dive into reinforcement learning and begin training our AI Go player. Stay tuned!

1) ToshiStatsGo-project


Reflections on the Future of AI Inspired by the 2024 Nobel Prizes in Physics and Chemistry

Last week was truly astonishing. Two prominent figures in AI, Geoffrey Hinton and Demis Hassabis, were awarded the Nobel Prizes in Physics and Chemistry, respectively. To my knowledge, no one had predicted these individuals as Nobel laureates. The world must be equally surprised. I'd like to take this opportunity to reflect on their achievements and speculate on the future of AI.

 

1. The Nobel Prize in Physics

Let's start with Geoffrey Hinton, a professor at the University of Toronto, who has been researching AI since the 1970s. In 2018, he shared the Turing Award, a prestigious prize for computer scientists, with two other researchers. He's often called the "Godfather of AI." Now 76, he's still actively working. I actually took a massive open online course (MOOC) he offered back in 2013. It was a valuable lecture that led me into the world of AI. Over a decade ago, courses teaching Neural Networks were scarce, so I was fortunate to stumble upon his lectures. Back then, my knowledge was limited to logistic regression models, so much of what he taught seemed incredibly complex and I remember thinking, "This seems amazing, but probably won't be immediately useful." I never imagined he'd win the Nobel Prize in Physics ten years later. Fortunately, his lectures from that time appear to be accessible on the University of Toronto website (1). I highly recommend checking them out. (The Nobel Prize in Physics was awarded jointly to John Hopfield and Geoffrey Hinton.)

 


2. The Nobel Prize in Chemistry

The Nobel Prize in Chemistry recipient is considerably younger, Demis Hassabis, currently 48. He is a co-founder of one of the world's leading AI companies, Google DeepMind. AlphaFold2 is specifically cited for his award. It's a groundbreaking AI model for predicting the 3D structure of proteins, and is said to have made significant contributions to drug discovery and other fields. He is not only a brilliant AI researcher but also a business leader at Google DeepMind. When presenting to a general audience, he mostly talks about the achievements of Google DeepMind, rather than his personal accomplishments. There's no doubt that the catalyst that propelled this company to the top tier of AI companies was AlphaGo, which appeared about four years before AlphaFold2, in March 2016. The reinforcement learning used in this model is still actively being researched to give large language models true logic and reasoning capabilities. AlphaGo inspired me to seriously study reinforcement learning. I wrote about it on my blog in April 2016. It's a fond memory. (The Nobel Prize in Chemistry was awarded jointly to David Baker, John M. Jumper, and Demis Hassabis.)

                                                                                 AlphaGo

 

3. Scientific and Technological Development and AI

I completely agree that the two individuals discussed here have pioneered new paradigms in AI. However, their being awarded the Nobel Prizes in Physics and Chemistry is a landmark event, demonstrating that AI has transcended its own boundaries and become an indispensable tool for scientific advancement as a whole. Going forward, we need to discuss how to leverage AI and integrate it into all aspects of human intellectual activity. Further development might even lead to the kind of intelligence explosion described by Leopold Aschenbrenner's "SITUATIONAL AWARENESS" that I previously mentioned on my blog, potentially surpassing human intelligence. The implications of these Nobel Prizes are profound.

 

What are your thoughts? I'm a business person, but I believe the same applies to the business world. With the incredibly rapid pace of AI development, I hope to offer new insights based on a clear understanding of these trends. That's all for today. Stay tuned!

 


(1) X post by Geoffrey Hinton, Jan 16, 2019
