Gemma2-2B: A Small Yet Powerful Generative AI - A Hands-On Review

Today, we'll be diving into Google DeepMind's recently announced compact generative AI model, "Gemma2-2B" (1), and running a simple demo. Gemma is a family of open-weight models. While the larger 27B and 9B parameter models are already available, this latest release adds a significantly smaller 2B parameter model. It promises remarkable performance despite its size, generating considerable excitement. Let's take a closer look.

 

1. Remarkable Fundamental Performance

Despite its compact size, the Gemma model exhibits impressive performance, as detailed below. Surpassing GPT-3.5 is a feat that would have been unimaginable just a year ago. The rapid advancements in open-source models continue to amaze.

Google's website describes it as follows (1):

""This lightweight model produces outsized results by learning from larger models through distillation. In fact, Gemma 2 2B surpasses all GPT-3.5 models on the Chatbot Arena, demonstrating its exceptional conversational AI abilities.

The "distillation" technique mentioned here is key to enhancing the performance of smaller models. It's employed not only in Gemma but also in Llama3 and various other small models, making it a concept worth remembering. With the performance of a 2B parameter model reaching such heights, it's tempting to explore its capabilities. Let's move on to the demo.

 

2. Performance Check with a News Article Classification Task

For this demo, we'll tackle the task of classifying Japanese articles from the publicly available Livedoor-news dataset (2) into five genres. We'll fine-tune the Gemma2-2B model and evaluate its classification accuracy. Since we're using Japanese articles, this will also assess its multilingual capabilities. Let's get started!

The following article is an example from the validation data. The model's task is to identify this article as belonging to the sports category.

                Example of validation data

Specifically, each article is categorized into one of the following categories. The goal of today's demo is to improve the accuracy of this classification.

  • 'kaden-channel' (Electronics)

  • 'topic-news' (General News)

  • 'sports-watch' (Sports)

  • 'it-life-hack' (IT/Life Hacks)

  • 'movie-enter' (Movies/Entertainment)

We prepared 100 samples for training and 1,000 samples for validation. We'll fine-tune the model with Unsloth, an excellent library that accelerates training and supports 4-bit quantization, using data in the Alpaca format. For details, please refer to this link (3).
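As a rough illustration of the Alpaca format used here, the prompt for this classification task might look like the following minimal sketch; the instruction wording and the sample article are assumptions, not the demo notebook's actual contents.

```python
# Minimal sketch of an Alpaca-style prompt for the 5-genre classification task.
# The instruction wording and the sample article are illustrative only.
ALPACA_TEMPLATE = """### Instruction:
Classify the following Japanese news article into one of these genres:
kaden-channel, topic-news, sports-watch, it-life-hack, movie-enter.

### Input:
{article}

### Response:
{genre}"""

sample = ALPACA_TEMPLATE.format(
    article="後半ロスタイム、劇的な決勝ゴールが決まった…",  # truncated, made-up article text
    genre="sports-watch",
)
print(sample)
```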

Without extensive tuning, we achieved an accuracy of 81.5%, as shown below. Considering the small training dataset of only 100 samples, this is an impressive result. With further optimization, the accuracy could likely be improved. It's hard to believe this performance comes from a model with only 2B parameters. Its ability to handle Japanese text is also commendable. The notebook used for the demo can be found here.

 

3. Limitless Potential Applications

With such high performance in a small model, the possibility of implementation on devices like smartphones, previously deemed impractical, becomes a reality. It also opens doors for applications where cost and computational speed were prohibitive. It seems particularly well-suited for customer service applications requiring real-time responses. Additionally, it could be deployed in developing countries where the cost of using frontier models like GPT4 has been a barrier. The future possibilities are truly exciting.

 



So, what did you think? The Gemma2-2B model can run on Google Colab's free T4 GPU, making it a valuable asset for startups like ours. It's truly remarkable. The small yet powerful Gemma2-2B model is poised for widespread adoption. At ToshiStats, we're committed to developing tuning techniques to maximize the benefits of open-source libraries. We'll be sharing more on this blog in the future. That's all for today. Stay tuned!

 
 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Google DeepMind's new prompt engineering technique, "Many-Shot In-Context Learning," is amazing!

I recently came across an interesting research paper, "Many-Shot In-Context Learning" (1), by Google DeepMind, and I'd like to share a brief overview. Although it's a highly technical paper, it offers valuable insights that we can apply to our own prompt writing. Let's dive in.

 

1. Utilizing Context Effectively

When you write prompts for a language model or generative AI like ChatGPT, you probably type in a question much as you would in a search engine, such as "What is the capital of Japan?" However, generative AI can handle much larger amounts of information. For example, as shown in the chart below, you can load a PDF document and then write a prompt like "Summarize this," and the AI will output a summary of the PDF's content. Think of a prompt as an "instruction to the generative AI." The additional information you provide alongside it is called the context.

 


2. What's Needed to Use Generative AI in a Business Setting

Now that we have a basic understanding of how to use generative AI, let's consider what's needed to use it in a company or business setting. Obviously, when you represent your company and interact with customers, you don't express "personal opinions or feelings." You wouldn't say, "I personally don't think this new product will sell." Companies have established rules and manuals that employees must follow, and employees normally cannot violate them. Therefore, to use generative AI in a company, it must output answers that comply with each company's "rules and manuals," not just general answers.

So, how do you convey these rules to the generative AI? One way is to input the "rules and manuals" directly into the generative AI along with the prompt, as shown in the chart above. Many recent generative AI models have "context windows" of 100,000 tokens or more. The context window is the amount of information that can be input and output at once; 100,000 tokens corresponds to roughly 70,000 English words, so you can input a considerable amount of "rules and manuals." Some models, like Google's Gemini 1.5 Pro, accept up to 2 million tokens, which is enough for about 3,000 pages of English manuals. That's amazing. These are sometimes called "long context windows."

 


3. Many-Shot In-Context Learning

"Many-Shot In-Context Learning" is a technique that utilizes these "long context windows" even more effectively. You may have heard of a similar term, "Few-Shot Learning." "Few-Shot Learning" is a method where you first provide the generative AI with a few "question and answer pairs" as examples and then ask the question you want to know. For instance, you might give examples like "The capital of the United States is Washington, D.C." and "The capital of China is Beijing," and then ask the AI, "What is the capital of Japan?" "Many-Shot In-Context Learning" increases the number of these "question and answer pairs" to 10-10,000. This is said to improve accuracy. The graph below shows that in machine translation and summarization tasks, increasing the number of examples to 500-1,000 improves accuracy. 2 to the power of 10 is 1024. The idea is to put as many examples as possible into the "long context window" since it can easily handle them.

The relationship between accuracy and the number of examples in machine translation and summarization.
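As a rough illustration of the idea, here is a minimal Python sketch of how a many-shot prompt could be assembled from question-and-answer pairs; the formatting is an assumption, not the paper's actual prompt template.

```python
# Minimal sketch: building a many-shot prompt from (question, answer) pairs.
# In real many-shot ICL this list would contain hundreds or thousands of examples.
examples = [
    ("What is the capital of the United States?", "Washington, D.C."),
    ("What is the capital of China?", "Beijing"),
    # ... add many more pairs here
]

def build_many_shot_prompt(examples, new_question):
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {new_question}\nA:"

prompt = build_many_shot_prompt(examples, "What is the capital of Japan?")
print(prompt)  # send this to a long-context model such as Gemini 1.5 Pro
```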

 


What do you think? If simply increasing the number of examples improves accuracy, it might be worth trying. For those who say, "I can't create so many examples myself," the paper also suggests a method for creating synthetic examples with an LLM (large language model). If you're interested, please check out the paper. But if it's only about ten examples, you could probably create them yourself. I'll give it a try and update here if I get good results. That's all for today. Stay tuned!

 






1) "Many-Shot In-Context Learning", Rishabh Agarwal, Avi Singh, Lei M. Zhang, Bernd Bohnet, Luis Rosias, Stephanie Chan, Biao Zhang, Ankesh Anand, Zaheer Abbas, Azade Nova, John D. Co-Reyes, Eric Chu, Feryal Behbahani, Aleksandra Faust, Hugo Larochelle, Google DeepMind, 22 May 2024,  https://arxiv.org/abs/2404.11018



Copyright © 2024 Toshifumi Kuga. All rights reserved




Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Google introduces new open-weight generative AI "Gemma2". The competition with Llama3 has finally begun!

Google has finally introduced a new type of open-weight generative AI, "Gemma2" (1). Although it had been previously announced, it came out sooner than expected. As shown below, the 27B model boasts an impressive 12th place on the leaderboard, closely rivaling larger models. A technical report (2) is also available, so let's take a look at what kind of evolution has occurred.

LMSYS Chatbot Arena Leaderboard

 

1. Model Architecture

Gemma2 adopts the familiar decoder-only transformer architecture, the same basic design used by models such as GPT-4. The context window, which indicates the amount of information that can be input and output at once, is 8,192 tokens. The model structure is largely the same as Gemma1, but according to the technical report, the following points have been updated:

“We alternate between a local sliding window attention (Beltagy et al., 2020) and global attention (Luong et al., 2015) in every other layer. The sliding window size of local attention layers is set to 4096 tokens, while the span of the global attention layers is set to 8192 tokens.”

Global attentional model (3)

Comparison of full self-attention pattern and other attention patterns (4)
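To make the alternating pattern concrete, here is a minimal Python sketch of how attention types might be assigned per layer; the layer count is a made-up number for illustration, and only the window sizes (4,096 and 8,192 tokens) come from the report.

```python
# Minimal sketch: alternating local sliding-window and global attention layers.
# NUM_LAYERS is a placeholder, not Gemma2's actual depth.
NUM_LAYERS = 26
LOCAL_WINDOW = 4096   # sliding-window span from the technical report
GLOBAL_SPAN = 8192    # global-attention span from the technical report

layer_attention = [
    ("local_sliding_window", LOCAL_WINDOW) if i % 2 == 0 else ("global", GLOBAL_SPAN)
    for i in range(NUM_LAYERS)
]
print(layer_attention[:4])
# [('local_sliding_window', 4096), ('global', 8192), ('local_sliding_window', 4096), ('global', 8192)]
```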

 

2. Pre-training

Gemma2's training data is as follows:

  • 27B model: 13 trillion tokens, primarily English data

  • 9B model: 8 trillion tokens

  • 2.6B model: 2 trillion tokens

"These tokens come from a variety of data sources, including web documents, code, and science articles.  Our models are not multimodal and are not trained for state-of-the-art multilingual capabilitiesthe.”

The report also notes that Gemma2 uses the "same tokenizer as Gemma 1 and Gemini: a SentencePiece tokenizer with split digits, preserved whitespace, and byte-level encodings. The resulting vocabulary has 256k entries."

Knowledge distillation was also adopted for the 9B and 2.6B models. In my opinion, this might be the most evolved point of Gemma2. It's a Google-specific strategy to leverage the advantages of their existing large-scale generative AI to improve the performance of smaller models. The technical report explains in detail: "Given a large model used as a teacher, we learn smaller 9B and 2.6B models by distilling from the probability given by the teacher of each token 𝑥 given its context 𝑥𝑐, i.e., 𝑃𝑇(𝑥 | 𝑥𝑐). More precisely, we minimize the negative log-likelihood between the probabilities from the teacher and the student:

\min_{P_S} \sum_x -P_T(x \mid x_c) \log P_S(x \mid x_c)

where 𝑃𝑆 is the parameterized probability of the student. In practice, we run inference on the teacher once and store the probabilities. Since the vocabulary has 256k entries, we only store a sampled subset of the teacher probabilities."
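To make the distillation objective concrete, here is a minimal PyTorch sketch of that loss; it only illustrates the formula above, is not Gemma2's actual training code, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the student under the teacher's token distribution.

    Both tensors are assumed to have shape (batch, seq_len, vocab_size).
    """
    teacher_probs = F.softmax(teacher_logits, dim=-1)           # P_T(x | x_c)
    student_log_probs = F.log_softmax(student_logits, dim=-1)   # log P_S(x | x_c)
    # sum over the vocabulary, then average over batch and sequence positions
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

# toy example with random logits and a deliberately small vocabulary
teacher = torch.randn(2, 8, 1000)
student = torch.randn(2, 8, 1000)
print(distillation_loss(teacher, student))
```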

 

3. Post-training

This part uses techniques commonly seen in other generative AIs. According to the technical report, it is implemented in the following process:

“For post-training, we fine-tune our pre-trained models into instruction-tuned models. First, we apply supervised fine-tuning (SFT) on a mix of text-only, English-only synthetic and human-generated prompt-response pairs. We then apply RLHF on top of these models with the reward model trained on labelled English-only preference data and the policy based on the same prompts as the SFT phase. Finally, we average the models obtained after each phase to improve their overall performance.”

It's noteworthy that knowledge distillation is adopted again. "We run behavioral cloning on synthetic and real prompts, and responses predominantly synthetically generated by the teacher, that is a larger model. We also run distillation from the teacher on the student’s distribution." In the future, knowledge distillation from large models to small models may become common practice. It's exciting to see.

 

What do you think? Gemma2 seems to be a model with high potential even at small sizes, and that is promising. The 2.6B model is also expected to be released soon. By the way, Google, which created Gemma2, and Meta, which created the Llama3 we covered last time, have been rivals in the open-source world for more than eight years with "TensorFlow vs PyTorch." It seems a similar battle has now begun in generative AI. Next time, I'd like to try various things with the Gemma2 model. Stay tuned!

 
 

1) Gemma 2 is now available to researchers and developers, Google, 27 June 2024
2) Gemma 2 technical paper,  Google DeepMind, 27 June 2024
3) Effective Approaches to Attention-based Neural Machine Translation, Minh-Thang Luong, Hieu Pham, Christopher D. Manning, Stanford University, 20 Sep 2015
4) Longformer: The Long-Document Transformer, Iz Beltagy, Matthew E. Peters, Arman Cohan, Allen Institute for Artificial Intelligence, 2 Dec 2020
5) On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes, Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos, Matthieu Geist, Olivier Bachem, Google DeepMind / Mila / University of Toronto, 17 Jan 2024

 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Can you believe it? A prediction that "AGI will appear before us in 2027" has been announced by a former OpenAI researcher!

A surprising prediction has been announced. Leopold Aschenbrenner, a former researcher at OpenAI, claims that AGI, matching the capabilities of human experts in various fields, will emerge in 2027, just a few years from now. It's hard to believe, but let's take a look at his argument.

 

1. From the Past to the Present, and into the Future

To predict the future, it is important to understand how generative AI has developed from the past to the present. Here, the author introduces the concept of OOM (orders of magnitude). Simply put, on a scale where each unit represents a tenfold increase, exponential growth appears as a straight line; OOM=4 means 10,000 times. The vertical axis of this graph represents effective compute (physical compute plus algorithmic efficiency) and is displayed in OOMs, with the current pinnacle of AI, GPT-4, used as the benchmark.

GPT-2 appeared in 2019, and four years later, in 2023, GPT-4 was introduced. The performance improvement during this period is roughly OOM=5 (100,000 times). If GPT-2 is like a preschooler, then GPT-4 is like a smart high schooler. Now, if we extend this straight line from 2023 to 2027, we can predict that an AI with OOM=5 (100,000 times) higher performance than GPT-4 will be born. This level of AI is expected to achieve AGI. If it is reasonable to connect the points with a straight line, then this prediction is not entirely far-fetched.

 

2. From Which Fields Will AI Growth Emerge?

We have explained that AI will significantly improve its performance by 2027, but what technological innovations will make this possible? The author points to the following three drivers. This graph also has a vertical axis in OOM, so a one-unit increase means tenfold.

First, the blue part represents improvements in computational resource efficiency. This is achieved through the development of new GPUs and the construction of ultra-large GPU clusters.

The second green part is due to improvements in training and inference algorithms and training data. There are concerns that training data may become scarce in the near future, but it is expected that this can be overcome by generating synthetic data.

The third red part refers to advancements in technology that allow us to extract the necessary information from the raw AI, give precise instructions, and have the AI execute what we want. Even now, research on how to give instructions to AI, such as Chain of Thought, is actively being conducted. In the future, it is expected that AI will function as an agent and further develop, leading to significant performance improvements in AI.

Wow, that sounds amazing.

 

3. What Happens After AGI is Achieved?

Once AGI is achieved, the evolution of AI will move to the next phase. Here, the main players are no longer humans but countless AIs. These AIs are trained for AI development and can work continuously 24/7. Therefore, by taking over AI development from humans, productivity will dramatically increase, and as a result, it is predicted that Super Intelligence, which completely surpasses humans, will be born by 2030. The graph below illustrates this.

It was already challenging to understand the prediction of AGI appearing in 2027, but by this point, it’s honestly beyond imagination to think about what our society will look like. Work, education, taxation, healthcare, and even national security will likely look completely different. We can only hope that AI will be a bright star of hope for all humanity.

 

Let’s conclude with the author’s words. It will be exciting to see if AGI is truly realized by 2027. The original paper is a massive 160 pages, but it’s worth reading. You can access it from the link below, so please give it a try.

 

Again, critically, don’t just imagine an incredibly smart ChatGPT: unhobbling gains should mean that this looks more like a drop-in remote worker, an incredibly smart agent that can reason and plan and error-correct and knows everything about you and your company and can work on a problem independently for weeks. We are on course for AGI by 2027. These AI systems will basically be able to automate basically all cognitive jobs (think: all jobs that could be done remotely).

 

1) SITUATIONAL AWARENESS The Decade Ahead, Leopold Aschenbrenner, June 2024, situational-awareness.ai 

 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.


Llama3-8B has shown impressive performance even when fine-tuned on Japanese data. Its high base performance likely plays a significant role in this.

In the previous post, we introduced the high performance of Llama3-70B. However, Llama3 also has a smaller 8B model, and I've been wanting to fine-tune it to fit my own tasks. Since it's small, it's cost-effective and fast, so if you have a clear task in mind, this 8B model will surely be an option. Therefore, this time, we will fine-tune the Llama3-8B model for the task of classifying the published Livedoor-news Japanese articles (3) into several genres, and check its accuracy. Let's get started!

 
1. Creating an Alpaca-style dataset

Livedoor-news Japanese articles are divided into the following 9 genres. The distribution of each genre is shown in the following chart.

  • 'kaden-channel',

  • 'livedoor-homme',

  • 'topic-news',

  • 'sports-watch',

  • 'peachy',

  • 'dokujo-tsushin',

  • 'it-life-hack',

  • 'movie-enter',

  • 'smax'

Distribution and sample size of each genre

This time, we will randomly extract 1000 samples for both training and validation data, and actually classify each article into the above 9 genres to verify whether high accuracy can be achieved. We have adopted Alpaca as the data format. As shown below, it consists of instruction, input, and output. Here, the instruction is common to all samples.

Example of Livedoor news
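As a rough illustration, one Alpaca-format record for this task might look like the following sketch; the instruction wording and article text are assumptions, not the actual dataset contents.

```python
# Minimal sketch of a single Alpaca-format record for the 9-genre task.
record = {
    "instruction": (
        "次のニュース記事を9つのジャンル（kaden-channel, livedoor-homme, topic-news, "
        "sports-watch, peachy, dokujo-tsushin, it-life-hack, movie-enter, smax）"
        "のいずれかに分類してください。"
    ),
    "input": "最新スマートフォンのバッテリー持ちを比較してみた…",  # truncated, made-up article
    "output": "smax",
}
```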

 

2. Fine-tuning using Hugging face TRL + "unsloth"

This time, we used Hugging Face's TRL (1), a library for fine-tuning LLMs, along with "unsloth", a library for accelerating training, to perform fine-tuning efficiently. The development environment was Google Colab, and we prepared a paid L4 (GPU) instance. The training time was about 100 minutes for 4 epochs. The L4 has 22.5 GB of GPU RAM, which is large enough for this training. Also, "unsloth" provides 4-bit quantized models ready for fine-tuning, so you can download and use them directly from the Hugging Face Hub, which is convenient. This training process was based on the "unsloth" notebook (2). If you are interested in speeding up training, please check it out.

"Unsloth" model

 

3. Verify model accuracy

At first, I simply asked, "The skill to score a penalty kick from this impossible angle is amazing." The answer was "sports-watch". It's a soccer/football story, so I think it's a reasonable answer.

Next, I asked, "Which is better, iPhone or Android?" The answer was "it-life-hack". This is also a good answer.

Typing articles in one by one is impractical, and the actual articles are longer and more complex. So this time, I prepared 1,000 validation samples and ran the model on them. The result was a very good accuracy of 94.5%. Since the input is Japanese, I thought Llama3 would struggle, but I was surprised that it easily exceeded 90%. It must be the effect of pre-training on a huge corpus of 15 trillion tokens. Even the 8B model seems practical for Japanese if fine-tuned.

 

How was it? Even though Llama3-8B is small, it has high potential and seems to be active in various places. Fine-tuning is required for each task, but "unsloth" can help speed it up. If you want to shorten the training time, please try it. This time, we were able to obtain sufficient accuracy in about 2 hours even with a general-purpose single GPU. It's a reliable ally for small startups like us! If you want to try it by yourself, you can use my notebook here.

We will update you as we gain new insights. Stay tuned!

 

(1) TRL - Transformer Reinforcement Learning https://huggingface.co/docs/trl/en/index

(2) Alpaca + Llama-3 8b full example.ipynb https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing#scrollTo=iHjt_SMYsd3P

(3) Livedoor-news Japanese articles https://www.rondhuit.com/download.html

 

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Llama3: Exceeding Expectations and Expanding Horizons!

The release of the new LLM "Llama3" by Meta on April 18th has created quite a stir in the AI community (1). As a highly anticipated open-source model with performance expectations on par with GPT-4, its potential applications seem limitless.

Examining the performance on leaderboards (2), Llama3 is available in two sizes: 70B and 8B parameters. The larger 70B model, in particular, demonstrates capabilities that rival proprietary models such as GPT-4, Claude3-Opus, and Gemini 1.5 Pro.

To assess Llama3's performance, a test was conducted using a bank customer complaint classification task. The objective was to evaluate the model's accuracy in categorizing complaints without any fine-tuning.





1. To what extent can we distinguish between six categories of customer complaints without training?

The dataset consisted of customer complaints from a US bank, categorized into six product areas:

  • Mortgage

  • Checking or savings account

  • Student loan

  • Money transfer, virtual currency, or money service

  • Bank account or service

  • Consumer loan

Examples of these complaints, all in English, were provided.

A random sample of 500 complaints was used with a prompt instructing Llama3-70B to assign a product category to each complaint. The results were astounding, achieving an accuracy rate of 88.6%. This near 90% accuracy was unprecedented and speaks volumes about Llama3's potential.
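As a rough illustration of this zero-shot setup, a classification prompt might be built like the sketch below; the exact wording used in the experiment is in the linked notebook (3), so this is only an outline.

```python
# Minimal sketch of a zero-shot classification prompt for the complaint task.
CATEGORIES = [
    "Mortgage",
    "Checking or savings account",
    "Student loan",
    "Money transfer, virtual currency, or money service",
    "Bank account or service",
    "Consumer loan",
]

def build_prompt(complaint: str) -> str:
    options = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Choose exactly one product category from the list below for this bank complaint.\n"
        f"{options}\n\nComplaint:\n{complaint}\n\nCategory:"
    )

print(build_prompt("I was charged a late fee on my mortgage even though I paid on time."))
```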







2. Maintaining Accuracy with Japanese Data?

Considering the potential use of Llama3 in Japan, the English dataset was translated into Japanese using Google Translate. The classification task was then repeated with the translated data.

Despite Llama3's training data being predominantly English (around 95%), the model maintained an impressive accuracy rate of 82.8% with the Japanese data. This suggests that Llama3's capabilities extend beyond English and hold promise for multilingual applications.









3. Conclusion and Future Prospects

Llama3 has proven to be a top-tier performer despite being open-source, and Meta deserves credit for this contribution to the AI community. Hopefully, other companies like Google will follow suit and release more open models of their own.

Further experiments are planned to evaluate the accuracy and computational speed of the smaller 8B model. Stay tuned for the results!

 



1) meta website https://llama.meta.com/llama3/
2) LMSYS Chatbot Arena Leaderboard https://chat.lmsys.org/?leaderboard
3) https://github.com/TOSHISTATS/Classification-of-Consumer-Complaints-by-Llama3/tree/main







Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

I tried the new generative AI model "Claude3 Haiku". Fast, smart, and low-priced. I want to use it as an AI agent!

On March 14th, "Claude3 Haiku" (1), the lightest model among the Claude3 generative AIs, was released and became available for use in web applications and APIs. I'm usually drawn to the highest-performing models, but this time I'd like to focus on the lightest one. Recently, algorithms that execute repetitive calculations like AI Agents have become more common. I want to use high-end models like GPT4, but they are very costly to run. So I was looking for a low-cost, high-performance model, and "Claude3 Haiku" is perfect as it costs 1/60th of the high-end model "Claude3 Opus" while still delivering excellent performance. I'd like to try it out here right away. The details of each model are as follows.




1. First, let's test the text

I checked if "Claude3 Haiku" knows about Hiroshima-style okonomiyaki, a hyper-local Japanese food. I used to live in Hiroshima, so I know it well, and I think this answer is generally good. The Japanese is clean, so it passes for now.




Next, I asked about transportation from Tokyo to Osaka. Unfortunately, there was one clear mistake. The travel time by bus is stated as "about 4 hours and 30 minutes," but in reality, it takes around 8 hours. This is a hallucination.



Then I asked about the "Five Forces," a framework for analyzing market competitiveness. It analyzed the automotive industry, and the analysis incorporates the latest examples, such as the threat of electric vehicles as substitutes, making it a sufficient quality starting point for discussion. However, the fact that it's not in a table format is a drawback.





2. Next, let's analyze images.

First, I asked about the number of smartphones, but unfortunately, it got it wrong. It may not be good at counting.




This is a photo of the Atomic Bomb Dome in Hiroshima. It answered this perfectly. It seems to understand famous Japanese buildings.





This is a photo of a streetcar running in Hiroshima City. I think it captures it pretty well overall. However, the streetcars don't run solely for tourists, so the explanation may be somewhat incomplete.




This is a flight information board at Haneda Airport. It perfectly understands the detailed information. Excellent.





Counting the number of cars in a parking lot is a difficult task for generative AI. This time it answered 60 cars, but there are actually 48. If the accuracy improves a bit more, it will reach a practical level, which is a bit disappointing.






3. Impressions of using "Claude3 Haiku".

Honestly, the performance was unbelievable for a general-use AI. The Japanese is natural and clean. The fact that it can incorporate and analyze images in the first place is groundbreaking. Multimodality has arrived in general-use AI. The calculation speed is also fast, and I think it will be applied to applications that require real-time responses. And the cost is low. This allows for plenty of interesting experiments. It's a savior for startups with tight cost constraints! I want to continue doing interesting experiments using "Claude3 Haiku". Stay tuned!

(1) Claude 3 Haiku: our fastest model yet   2024.3.14  Anthropic

Copyright © 2024 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

The new generative AI "Google Gemini 1.5 Pro" is as amazing as expected!

Last month, I reported that Google had released a new generative AI called "Gemini 1.5 Pro" (1). And today, Gemini 1.5 Pro finally arrived at ToshiStats. I would like to experiment with it right away.



1. Can the 1 million token long context window really work?

Gemini 1.5 Pro boasts an incredibly long context window of 1 million tokens, which is unthinkable for previous LLMs. Because it is so amazing, anyone would wonder, "Can this really work?" Today, I would like to explore its capabilities here. I have prepared two experiments. The first one is to extract detailed information including numbers from relatively short materials, and the second one is to see if it can answer comprehensive questions well from materials over 200,000 tokens long. Let's begin.



2. Information extraction from Toyota Motor Corporation's financial results  

First, I will check if it can accurately extract numerical information from Toyota Motor Corporation's financial results for the fiscal year ended March 2023. The number of pages is 28, and the number of tokens is about 27,000, which is not a long material, but it is a task often seen in practice. This time, I have prepared 13 questions. Let's upload the material to Google AI Studio and ask the questions one by one.

Google AI Studio 
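For readers who prefer the API over the AI Studio UI, the same Q&A workflow could be scripted roughly as in the sketch below using the google-generativeai Python SDK; the model name, file handling, and question are illustrative assumptions, not the exact setup used in this experiment.

```python
# Minimal sketch: asking questions about a long report with the Gemini API.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro-latest")

# the long context window lets us pass the whole report (~27k tokens) as plain text
report_text = open("toyota_fy2023_results.txt", encoding="utf-8").read()
question = "What was Toyota's operating income for the fiscal year ended March 2023?"

response = model.generate_content([report_text, question])
print(response.text)
```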




Here are the results. The correct answer rate is about 88%.

Questions and Results & Computation Time

For question 8, the financial results refer to ROE as "return on equity attributable to owners of the parent company." This long Japanese (kanji) phrase may have been difficult for the American-born Gemini 1.5 Pro to parse. However, it derived the value by calculating from other related numbers on its own, so I gave it a partial-credit (△) evaluation worth 0.5 points. It seems able to handle Japanese quite well. Incidentally, the average computation time was around 30 seconds per answer, so the computational resources also appear to have been optimized. Just to note, this material was published on May 10, 2023, so it is possible that it was incorporated into the training data.





3. Information extraction from the Financial Inspection Manual 

Next, I would like to analyze the lengthy financial administrative document "Financial Inspection Manual" with Gemini 1.5 Pro. This is an unforgettable document for those who have been in the financial industry since 2000 in Japan, as it sparked discussions on establishing risk management and governance frameworks. The number of pages is 366, and the number of tokens is 220,526, exceeding 200,000 tokens. Gemini 1.5 Pro is the world's first generative AI that can take in documents of this volume all at once. I was able to upload it smoothly as shown below. It answers properly when asked questions.

Uploading the Financial Inspection Manual

I actually asked, "What should be done to verify the appropriateness of the credit rating assignment process?" and received the following response (showing only 1 and 2 here as it is long).  To verify the appropriateness of the credit rating assignment process, the following items need to be examined:

* 1. Validity of rating criteria

    * Do the rating criteria appropriately reflect the credit risk of the debtors?

    * Are the rating criteria defined in an objective and verifiable manner?  

    * Are the rating criteria consistent with the debtor classifications?

    * Are the rating criteria periodically reviewed?

* 2. Appropriateness of rating assignment procedures      

    * Are the rating assignment procedures carried out appropriately in accordance with the rating criteria?

    * Are the rating assignment procedures implemented in an accurate and verifiable objective manner?

    * Can the rating assignment procedures timely reflect changes in the debtor's situation? 

    * Are the rating assignment procedures performed by an independent department not influenced by sales departments, etc.?

It continues with 3: Model Utilization, 4: Data Maintenance, 5: Internal Audit, and so on. It is a comprehensive and holistic answer that follows the principles of risk management and is also consistent with the descriptions around page 142 of the Financial Inspection Manual. Furthermore, the descriptions related to credit risk management are in the middle of this manual, and there were past comments that generative AIs tend to have lower accuracy in the middle parts of long data. However, Gemini 1.5 Pro does not seem to have any issues. Despite the specialized content, it provided a very good answer. The computation time was also around 90 seconds, which is sufficiently practical. It will surely make a good risk management assistant.  





How was that? It seems Gemini 1.5 Pro can analyze materials of over 200,000 tokens quite accurately, even in Japanese. It might also be useful for internal document search tasks at work. Next time, I would like to challenge even more difficult tasks in English. Stay tuned!

 

1) Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context, Gemini Team, Google

Copyright © 2024 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

A New AI Paradigm Emerges with Genie! How Will This Change the Future of AI?

Recently, Google DeepMind announced a foundation model called "Genie: Generative Interactive Environments." It is said to possess functionalities not found in existing AI, so let's explore this development here. Let's get started!




1. It can execute actions

According to Google DeepMind's blog, "We introduce Genie, a foundation world model trained from Internet videos that can generate an endless variety of playable (action-controllable) worlds from synthetic images, photographs, and even sketches." This suggests that it's not just about generating images but creating environments where you can execute actions and play. In simple terms, you could create a game from a single image. Sounds incredible! 




2. It can learn actions through unsupervised learning

The idea of creating an interactive environment from a single image where you can perform actions is quite imaginative. Especially deciding what actions to take seems challenging. However, Genie was trained on 30,000 hours of unlabeled video through unsupervised learning. There's a vast amount of unlabeled video available on the internet, providing plenty of material for training. Currently focused on 2D games and robotics, it seems applicable to various fields in the future. Amazing!



3. You can create games from your drawings

It appears there are no restrictions on the type of image; anything can be fed into Genie. Not only real photos and artworks but also simple illustrations are okay. Looking at the images below, indeed, a game has been created.




4. New "AI agents" will emerge from here

Though it's about games, by creating environments and executing actions within them, we will feel as if we are actually in those environments. When we talk about actions, it reminds us of "AI agents". They make decisions and execute actions on our behalf from a set of options. If these environments can be built from my illustrations and automatically set options, then creating "AI agents" will become significantly simpler. Google owns YouTube, which should provide ample material for learning about the world. It's likely evolving even as we speak.




Finally, I'd like to conclude with a message from Google DeepMind: "Genie introduces the era of being able to generate entire interactive worlds from images or text. We also believe it will be a catalyst for training the generalist AI agents of the future." While Genie has not yet been released to the public, the future developments are truly exciting!

Copyright © 2024 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

The Evolution of AI Accelerates: A Deep Dive into Google's "Gemini 1.5 Pro"

The pace of AI advancement is truly remarkable, and this year is no exception. Google has unveiled a new generative AI called "Gemini 1.5 Pro," which boasts a groundbreaking Mixture-of-Experts (MoE) architecture. Currently only available to a limited number of users, with broader testing to come, this technology presents intriguing breakthroughs that warrant a closer look.

 
 

1. Unprecedented Context Window of 1 Million Tokens

Gemini 1.5 Pro boasts a context window that is unfathomable by existing LLMs, capable of processing up to 1 million tokens. Research has even demonstrated data ingestion of up to 10 million tokens. This represents a revolutionary breakthrough, considering that GPT-4's context window is limited to 128,000 tokens (1).

Comparison of Context Windows for Different LLMs

With such an extensive context window, Gemini 1.5 Pro can ingest an entire book at once. Currently, when creating RAG systems and referencing internal documents, chunking is necessary to accommodate the LLM's context window. However, with Gemini 1.5 Pro, this requirement is minimized, simplifying RAG development and operation. Furthermore, the model maintains high accuracy, even with such a large context window, achieving over 99% accuracy in information retrieval tests (see chart below).

 
 

2. Remarkable In-Context Learning Capabilities

The ability to process vast amounts of data is not the only noteworthy aspect of Gemini 1.5 Pro. It also excels at understanding and applying this information to various tasks. This is evident in its in-context learning capabilities, showcased in a Kalamang language translation task. The model was trained using a Kalamang grammar book and dictionary, enabling it to translate between English and Kalamang.

English to Kalamang Translation Test

Gemini 1.5 Pro outperformed other models, achieving scores that rival those of human learners. This is an astonishing feat.

 
 

3. Towards Individualized Agents with Gemini 1.5 Pro

If a model can acquire translation capabilities simply by reading a grammar book, it stands to reason that it can also learn from knowledge systems in other domains and apply that knowledge to various tasks. In other words, Gemini 1.5 Pro has the potential to develop its own "frame of reference" that influences its understanding and values. The ability to incorporate a vast amount of data into its context through its extensive context window has significant implications in this regard. This is because it allows Gemini 1.5 Pro to potentially become an individualized agent with diverse perspectives in the future. The Kalamang translation experiment provides promising evidence of this potential.

Gemini 1.5 Pro is a remarkable advancement in AI technology, offering unprecedented capabilities in terms of context window size and in-context learning. "A host of improvements made across nearly the entire model stack (architecture, data, optimization and systems) allows Gemini 1.5 Pro to achieve comparable quality to Gemini 1.0 Ultra, while using significantly less training compute and being significantly more efficient to serve," according to the report (1). This is truly a testament to the rapid progress being made in the field of AI.

I am eager to experiment with Gemini 1.5 Pro once it becomes publicly available. Stay tuned for future updates!

1) Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context, Gemini Team, Google

 

Copyright © 2024 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

"REST MEETS REACT" is a new prompt-engineering method using synthetic data. It holds immense potential for enhancing AI without relying on human-generated data

Happy New Year! Thank you for your continued support. Right at the start of the year, Google DeepMind has announced a new, advanced prompt-engineering method. It's a paper titled "REST MEETS REACT: SELF-IMPROVEMENT FOR MULTI-STEP REASONING LLM AGENT" (1). It incorporates fine-tuning with synthetic data, which looks promising! Let's get started.

 

1. Prompt Structure

This prompt is designed with a web Q&A system in mind that answers complex questions. The structure is as follows:

The blue part in the figure above represents the flow of the agent described in the prompt, which aims to answer complex questions using web search. In the latter half, "Relevance self-check" and "Grounding self-check" are steps in which the agent checks its own answers. For a detailed explanation of the entire flow, please refer to the paper.

 

2. "Reward Model" - The Key to Success

Now, let's explain the core of the self-improvement process. In a nutshell, it's about "creating new high-quality data and fine-tuning the model with it." This function consists of three parts (a pseudocode sketch of the full loop follows the prompt excerpt below):

  • Grow: Start with a model capable of running Search Agent, using Google PaLM 2-L model for this purpose. Trajectories are collected based on a selected set of 2000 public questions. Trajectory, though an unfamiliar term, refers to the reasoning process and is commonly used in reinforcement learning.

  • Improve: Convert trajectories into data for fine-tuning, using the Reward model to select only high-quality data. No external data, like labels, are used.

  • Fine-tuning: Fine-tune a new model of the same size with this new data, ensuring it performs better than the original.

This process is then repeated, with the improved model generating the next round of data. As a result, accuracy improves without adding any external data, so the accuracy of the Reward model's ranking is crucial. In this paper, the Reward model is constructed as a set of prompts. Let's look more closely at these prompts, showing only the initial part.

  • The goal of this rating is to filter out bad actions so that they'll be excluded from the fine-tuning dataset.

  • Overall, we want the agent to produce relevant and grounded answers with minimal steps. Anything deviating from this goal is considered bad.

  • If any element (thoughts, comments, etc.) is empty, then it's automatically bad.

"Filter out" indicates a method of discarding items that don't meet the standards and adopting only the high-quality data that remains. Please see the paper (p19) for details.

 




3. Improve Accuracy with Synthetic Data

Several papers, including this one, were published in late 2023 that focus on using a reward model to create high-quality synthetic data for fine-tuning and accuracy improvement. Vigorous research is expected to continue in 2024, yielding various results. Especially in the LLM field, collecting high-quality training data is becoming increasingly difficult, and fine-tuning with synthetic data is anticipated as a solution.


 


How was it? The improvement in model accuracy with synthetic data is expected to be a very effective development method for startups like us, who cannot collect vast amounts of data independently. Our blog will continue to follow these synthetic data and other technological innovations, so stay tuned. Wishing you a great year!






1) "REST MEETS REACT: SELF-IMPROVEMENT FOR MULTI-STEP REASONING LLM AGENT", Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix Yu, and Sanjiv Kumar, Google Research / Google DeepMind / Google, 15 Dec 2023, https://arxiv.org/abs/2312.10003





Copyright © 2023 Toshifumi Kuga. All rights reserved





Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

The era of "agent-style applications" has arrived, earlier than expected and seems to be accelerating even further

On November 6, OpenAI held DevDay, its first annual developer conference. The technological developments since the debut of GPT-4 in March 2023 were introduced all at once. There's too much to cover comprehensively, so I'll leave that to OpenAI CEO Sam Altman, but here I want to raise three key points I've considered and explore them further.




1. Price is Key

The anticipated price reduction has been realized. GPT-4 is now roughly 65% cheaper, though the exact reduction depends on usage. I've already tried the new GPT-4 Turbo for half a day, and it cost about $5, which would definitely have exceeded $10 before. This makes it more viable for proof-of-concept (PoC) use. It seems the time has come to apply GPT-4's still-untapped potential in various areas. A wallet-friendly approach is a welcome change for everyone.



2. Building AI Apps Without Being a Programmer

At this developer conference, I noticed many features that operate with no code. GPTs, which let you create a customized ChatGPT through a dialogue, are a prime example. The developer-oriented Assistants API also doesn't require coding if used through the Playground. With the code interpreter tool already built in, you simply write prompts to invoke and run it, and the rest is automated. This is impressive.

I built a model to calculate default probabilities using a step-by-step prompt (steps 1 through 5) with the code interpreter turned on, without writing any code myself. When executed, the model was successfully created, and it performed tasks like calculating AUC and generating histograms as instructed.
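For readers who prefer the API to the Playground, the same idea can be scripted roughly as in the sketch below with the OpenAI Python SDK's Assistants API; the model name, prompt text, and omitted polling logic are simplified assumptions, not the exact setup described above.

```python
# Minimal sketch: an Assistant with the code_interpreter tool building a credit model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Credit model builder",
    instructions="You are a data scientist. Follow the user's numbered steps exactly.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=(
        "Step 1: generate a synthetic credit dataset. "
        "Step 2: fit a logistic regression for default probability. "
        "Step 3: compute the AUC. Step 4: plot a histogram of the scores. "
        "Step 5: summarize the results."
    ),
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
# In practice, poll run.status until it is "completed", then read the thread's messages.
```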





3. Easy Construction of "Agent-Style Applications"

Listening to OpenAI CEO Sam Altman's presentation, I felt a strong emphasis on agents. The Playground tools include function calling, which seems to make it much easier to create agents that decide their next actions based on the situation. While open-source implementations of agents have been increasing, I didn't expect them to be implemented this quickly on the OpenAI platform. Paired with GPTs, 2024 feels like it could be the first year of "agent-style applications." This is truly exciting.

How about these new services? Following the announcements at DevDay, developers worldwide seem to be thinking about various AI applications. I'm also eager to start creating an agent-style application. Stay tuned!




Copyright © 2023 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

New Prompt Engineering Method from Google DeepMind Surpassing CoT in Accuracy !

Hello everyone, how have you been? There are only two months left in this year. It has truly been a year of incredible AI advancements, and it doesn't seem to be slowing down. Recently, Google DeepMind announced a new prompt-engineering method called "Step-Back Prompting (1)." Let's dive into the details right away.


1. Step-Back Prompting:

Coming from DeepMind, one might initially think it's a complicated method, but the concept turned out to be quite simple. Instead of directly answering the question input by the user, the process involves:

  • Creating a more generalized and essential question (Stepback Question)

  • Answering the generated question (Stepback Answer)

  • Producing the final answer to the user based on the original question and the generated response (Final Answer)

The paper abstract has the following note which could give insights on the Stepback Answer:

"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. — Edsger W. Dijkstra"



2. Automatic Generation of "Stepback Question":

The key to this method is creating an effective Stepback Question. However, coming up with a Stepback Question every time could be challenging. While searching for an easier way, I found an excellent automatic generation method in LangChain's cookbook (2), which applies few-shot learning.

By presenting these two examples to the model first, when a new user question like "Was ChatGPT around when Trump was president?" is posed,

As shown, a more general question, "When was ChatGPT developed?" is generated. Using this to guide the final answer results in higher accuracy. Although not always 100% correct based on my own trials, the accuracy does seem notably higher. According to the paper, it even achieves accuracy surpassing GPT-4 in some instances.
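Putting the pieces together, the whole flow could be sketched roughly as below; ask_llm is a placeholder for whatever chat-completion call you use, and the few-shot examples only mirror the style of the cookbook rather than quoting it exactly.

```python
# Minimal sketch of Step-Back Prompting: stepback question -> stepback answer -> final answer.
def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an OpenAI or Gemini chat completion)."""
    return f"[LLM response to: {prompt[:60]}...]"

FEW_SHOT = (
    "Q: Could the members of The Police perform lawful arrests?\n"
    "Stepback Q: What can the members of The Police do?\n\n"
    "Q: Jan Sindel was born in what country?\n"
    "Stepback Q: What is Jan Sindel's personal history?\n\n"
)

def step_back_answer(user_question: str) -> str:
    # 1) generate a more general Stepback Question
    stepback_q = ask_llm(FEW_SHOT + f"Q: {user_question}\nStepback Q:")
    # 2) answer the Stepback Question
    stepback_a = ask_llm(stepback_q)
    # 3) produce the final answer using both the background and the original question
    return ask_llm(f"Background: {stepback_a}\n\nUsing the background above, answer: {user_question}")

print(step_back_answer("Was ChatGPT around when Trump was president?"))
```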



3. Anticipation for Future Developments:

Since "Step-Back Prompting" has a simple structure, it seems versatile for various applications. It can also be combined with existing techniques like CoT. Looking forward to its future growth, it seems highly compatible with LangChain and easy to implement, which will likely lead to an increase in use cases.

So, what do you think? I will continue to experiment and if there are any significant findings, I'll share them here. Stay tuned!

1) "TAKE A STEP BACK: EVOKING REASONING VIA ABSTRACTION IN LARGE LANGUAGE MODELS", Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V. Le, Denny Zhou, Google DeepMind, 9 Oct 2023, https://arxiv.org/abs/2310.06117

2) langchain/cookbook/stepback-qa.ipynb https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb

Copyright © 2023 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.



GPT-4V is here. I tried it immediately and was amazed. It can do this too!

Sorry to keep you waiting. OpenAI's GPT-4 now comes with image recognition capabilities. To be precise, it was demonstrated when it debuted in March of this year, but it has only now been made available to users after half a year. I recently tried the new feature in ChatGPT+ and, in a word, it's incredible!

By the way, the image mentioned above was also created with a combination of GPT-4 and DALL-E3.

Now, let's start the experiment!


First, we'll start with recognizing mobile phones. It can accurately count the number of mobile phones. This is a piece of cake.

 

I thought flight information would be challenging, but it identified the destination impeccably. Since it's originally an excellent language model, it seems proficient in deriving meaning from images.

 

It can even read Osaka's Tsutenkaku tower. Local information is no problem.

 

For a change, I inserted an image of analysis results. It can read graphs effortlessly. This is impressive!

 

What shocked me was that it could easily count cars. Of course, it's not a specialized object detection model, so errors will always occur. I believe there were about 48 cars in this photo, but for general use, this margin of error seems acceptable. It's astonishing what it can do by just being given an image.

 

It can count cans, but the error is relatively significant. It might struggle with cluttered items.

 

It also reads English text well, in an OCR-like manner.

 

It can also easily read the time displayed on electronic signboards.

How did you find it? Without any fine-tuning, it achieved this much. GPT-4V has just been launched, and various use cases are likely to emerge in the future. I look forward to introducing interesting examples here as they arise. Stay tuned!

 

Copyright © 2023 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Fine-tuning GPT-3.5 with synthetic text generated by GPT-4. The accuracy has improved! In the future, we might not even need training text???

Hello, despite being in the latter half of September, it is still quite hot in Japan. The photos feel mismatched, but I'm deliberately sticking to the autumn theme, hoping it will get cooler soon. However, it might stay hot for the rest of the month.

Now, about the fine-tuning of GPT-3.5 that I introduced the other day: it is certainly a hot topic, and there is clearly strong demand in companies to specialize the model's performance for specific tasks. For this reason, we conducted an experiment for the case where you want to proceed even without data at hand, by generating synthetic text and then fine-tuning on it.

 
  1. Experiment Details

Just like the previous experiment, the task is to determine which financial product a given English-language complaint is about. The complaints concern the banking industry, so the task involves differentiating between six types of financial products, such as mortgages and bank accounts. The data used for fine-tuning was minimal, with 100 samples for validation, just like last time. The training data, however, is different this time. We generated customer complaint emails using GPT-4, and at a glance they are indistinguishable from real ones; GPT-4's performance is indeed impressive. We generated 15 such customer complaints for training and then proceeded with fine-tuning.

synthetic text generated by GPT-4
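For reference, a minimal sketch of this workflow is shown below: GPT-4 writes a complaint email for a given product, and the result is stored in the chat-format JSONL that GPT-3.5 fine-tuning expects. It assumes the 2023-era openai==0.28 SDK, and the six product labels are illustrative placeholders rather than the exact categories used in the experiment.

```python
import json
import openai  # assumes the 2023-era openai==0.28 SDK with OPENAI_API_KEY set

# Hypothetical product labels; the real experiment used six banking categories
# including mortgages and bank accounts.
PRODUCTS = ["mortgage", "bank account", "credit card",
            "student loan", "money transfer", "debt collection"]

def synthesize_example(product: str) -> dict:
    """Ask GPT-4 to write one realistic complaint email and wrap it as a training sample."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Write a realistic customer complaint email about a {product}."}],
        temperature=0.9,
    )
    complaint = resp["choices"][0]["message"]["content"]
    # Chat-format JSONL expected by gpt-3.5-turbo fine-tuning: the label is the assistant turn.
    return {"messages": [
        {"role": "system", "content": "Classify the complaint into one of six financial products."},
        {"role": "user", "content": complaint},
        {"role": "assistant", "content": product},
    ]}

with open("synthetic_train.jsonl", "w") as f:
    for product in PRODUCTS:
        for _ in range(3):  # a handful of samples per product, roughly 15-20 in total
            f.write(json.dumps(synthesize_example(product)) + "\n")
```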


2. Experiment Results

Since this was our first time using synthetic text, we were worried about the outcome, but we were able to confirm the effectiveness of fine-tuning as follows. Though the improvement isn't dramatic with just 15 samples, the accuracy for this task has improved compared to the base GPT-3.5, which had an accuracy of 0.5 to 0.55.

For more details on the experiment, please refer to this notebook.

 

3. Discussion

Fine-tuning with synthetic text was a method not even considered before, but with the arrival of GPT-4, it's becoming more realistic. There are several points to consider, such as the number of samples and how to write prompts, but the advantage of being able to start even without data is significant. Currently, GPT-4 is the only option for generation models, but it seems like new models like Gemini from Google will also be available next year. Technology is advancing rapidly, so we can expect a lot more in the future.

So, what did you think? We will continue to conduct various experiments and share our findings here. See you again soon!




Copyright © 2023 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

The "Graph of Thoughts" might pave the way for new avenues in "human and LLM (Large Language Model) collaboration"!

Last week, I came across an intriguing paper on Large Language Models (LLMs). It appears to further develop the "Tree of Thoughts" (ToT) reasoning method I mentioned before, introducing a new technique called the "Graph of Thoughts" (GoT). Let's take a closer look.

 
  1. Features of GoT

First, let's compare various methods using the chart provided in the paper.

The far right shows the newly introduced GoT. Key differences from ToT may be that GoT allows for the merging of thoughts, and that users can define the shape of the GoT themselves. Incidentally, this merging is referred to as "aggregation" within the paper. While it may seem similar to ToT, the differences might be significant. Let's explore this in more detail.

 

2. Four Key Modules

GoT (Graph of Thoughts) has the following four crucial modules. Understanding these will clarify the differences between it and ToT (Tree of Thoughts).

  • Prompter

  • Parser

  • Scoring & Validation

  • Controller

Let's look at each one in detail. The Prompter, as the name suggests, is the module responsible for creating prompts, while the Parser extracts the required information, the "thoughts", from the LLM's output. You can think of the Prompter as handling input and the Parser as handling output. Scoring & Validation is the module that evaluates the gathered thoughts; this evaluation lets us select which thoughts are worth keeping. Finally, the Controller is responsible for adding new thoughts or merging multiple thoughts, a process referred to as a "transformation". It decides which transformations should be applied to which thoughts and passes this information to the Prompter, making it the critical module for executing a problem-solving strategy. The Controller maintains two structures: the Graph of Operations (GoO), a user-defined execution plan of operations, and the Graph Reasoning State (GRS), which keeps track of the ongoing LLM reasoning process, i.e. the state of the thoughts. A skeletal sketch of how these modules fit together follows below.
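The following is a hypothetical, heavily simplified rendering of the four modules in Python. The class and method names are my own illustration, not the paper's reference implementation; the scoring and the fake LLM are toy stand-ins.

```python
class Prompter:
    def build(self, operation, thoughts):
        # Turn an operation plus the current thoughts into a prompt for the LLM.
        return f"Operation: {operation}\nThoughts so far: {thoughts}"

class Parser:
    def parse(self, llm_output):
        # Extract the new thought from the raw LLM text.
        return llm_output.strip()

class Scorer:
    def score(self, thought):
        # Toy stand-in for LLM- or rule-based scoring and validation.
        return len(thought)

class Controller:
    """Walks the Graph of Operations (GoO) and keeps the Graph Reasoning State (GRS)."""
    def __init__(self, llm, goo):
        self.llm, self.goo = llm, goo
        self.prompter, self.parser, self.scorer = Prompter(), Parser(), Scorer()
        self.grs = []  # thoughts kept so far

    def run(self, initial_thoughts):
        self.grs = list(initial_thoughts)
        for operation in self.goo:          # e.g. ["generate", "aggregate", "keep_best"]
            prompt = self.prompter.build(operation, self.grs)
            new_thought = self.parser.parse(self.llm(prompt))
            if self.scorer.score(new_thought) > 0:
                self.grs.append(new_thought)
        return self.grs

# Toy usage with a fake LLM; a real controller would call an actual model here.
controller = Controller(llm=lambda p: f"thought produced for: {p.splitlines()[0]}",
                        goo=["generate", "aggregate", "keep_best"])
print(controller.run(["initial thought"]))
```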


3. Considering the Number Sorting Problem

Since talking only in the abstract may not advance understanding, let's consider an actual task: sorting a list of 64 numbers in ascending order. Here we'll see how the Graph of Operations (GoO) comes into play. In the chart below, each thought is tagged with operations such as G (Generate), S (Sort), K (Keep the best), and A (Aggregate/Merge). Initially, the list of 64 numbers is divided into four lists of 16 numbers each. Each sublist is sorted and evaluated, and only the best sorting of each is kept. Pairs of these sorted sublists are then merged into new lists of 32 numbers, and the process repeats until the full sorted list is recovered. You can see the various operations at work as the process progresses.

For those who want to delve deeper, detailed explanations are available here, particularly in the green part of the chart above.
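As a rough illustration of this flow (not the paper's implementation, and with plain Python standing in for the LLM calls), a toy version of the generate, sort, keep-best, and aggregate operations might look like this:

```python
import random

def generate(numbers, k=4):
    """G: split the 64-number list into k sublists (the initial branching)."""
    size = len(numbers) // k
    return [numbers[i * size:(i + 1) * size] for i in range(k)]

def sort_thought(sublist):
    """S: each 'thought' proposes a sorted version of its sublist."""
    return sorted(sublist)

def score(sublist):
    """Scoring: count correctly ordered adjacent pairs (higher is better)."""
    return sum(a <= b for a, b in zip(sublist, sublist[1:]))

def aggregate(a, b):
    """A: merge two sorted thoughts into one larger sorted thought."""
    return sorted(a + b)

numbers = [random.randint(0, 999) for _ in range(64)]
thoughts = [sort_thought(t) for t in generate(numbers)]   # G then S
thoughts.sort(key=score, reverse=True)                    # K: rank thoughts by score

while len(thoughts) > 1:                                  # A: merge pairwise (4 -> 2 -> 1)
    thoughts = [aggregate(thoughts[i], thoughts[i + 1])
                for i in range(0, len(thoughts), 2)]

print(thoughts[0] == sorted(numbers))  # True: the full 64-number list is sorted
```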

It might feel complex at a glance, but it's user-controllable, allowing you to incorporate your domain knowledge. I am excited to conduct various experiments in the future.

Thank you for your attention! I will keep you updated on the progress of GoT and hope to share more with you soon. Stay tuned!









1) "Graph of Thoughts: Solving Elaborate Problems with Large Language Models",  Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler, 21 Aug 2023, https://arxiv.org/abs/2308.09687v2







Copyright © 2023 Toshifumi Kuga. All rights reserved




Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

Fine-tuning has come to ChatGPT. Its effects are outstanding, and if implemented according to the task, we can perhaps expect significant improvements in accuracy!!

Hello everyone, how are you doing? Although the illustration is autumn-like, it seems that summer will stick around for a while in Japan.

While that was happening, I suddenly received a message from OpenAI saying, "The fine-tuning feature has been implemented." I have always fine-tuned open-source models, so I was a little disappointed that ChatGPT didn't have this feature. But it seems that it has finally made its appearance. I guess OpenAI got a little serious. Let's get started right away.

 
  1. Is fine-tuning effective for ChatGPT?

I'm sure you all want to know, "Does fine-tuning work well with ChatGPT?" So I created a small dataset and conducted a simple experiment. To put it briefly, "Something amazing is happening!" Below is the table with the results.

Accuracy for 100 samples

I had GPT-3.5 perform a 6-class classification task and expected some benefit from fine-tuning, but exceeding an accuracy of 0.8 was unexpected. The base GPT-3.5 only barely surpassed 0.5, so I initially thought the model's potential was limited. Yet the very first fine-tuning run produced an accuracy of 0.88, which was hard to believe. Even after changing the seed and regenerating the data, it still yielded an accuracy near 0.8, completely different from the base model. The compatibility between fine-tuning and ChatGPT must be outstanding.

 

2. Experiment Details

In this experiment, the task was to identify what type of financial product a given English complaint was about. It is a 6-class classification over financial products such as home loans and bank accounts, and the data used for fine-tuning consisted of 100 samples each for training and validation, a minimal configuration. The training curve shows the training loss decreasing steadily and approaching zero (and it keeps decreasing beyond that point). In short, training went well, and using this fine-tuned model yielded the results described in section 1.
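For readers who want to reproduce this kind of setup, a minimal sketch of launching such a job with the 2023-era openai==0.28 SDK is shown below. The file names, the example complaint, and the polling details are placeholders of mine, not the exact code used in this experiment.

```python
import openai  # assumes openai==0.28 with OPENAI_API_KEY set

# Upload the chat-format JSONL files (100 training / 100 validation samples).
train_file = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
valid_file = openai.File.create(file=open("valid.jsonl", "rb"), purpose="fine-tune")

# Launch the fine-tuning job on gpt-3.5-turbo.
job = openai.FineTuningJob.create(
    model="gpt-3.5-turbo",
    training_file=train_file["id"],
    validation_file=valid_file["id"],
)

# Re-fetch the job later; once its status is "succeeded", fine_tuned_model is filled in.
job = openai.FineTuningJob.retrieve(job["id"])
model_name = job["fine_tuned_model"]

# Use the fine-tuned model like any other chat model for the 6-class classification.
resp = openai.ChatCompletion.create(
    model=model_name,
    messages=[{"role": "user",
               "content": "Classify this complaint into one of six financial products: "
                          "'My home loan payment was applied to the wrong account...'"}],
)
print(resp["choices"][0]["message"]["content"])
```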

 

3. Discussion

Just by looking at the results of this experiment, we can't definitively say that fine-tuning always succeeds. Various cases will emerge in the future, and it will be important to make comprehensive judgments based on those results. Especially this time, minimal prompt engineering was done. Combining prompt engineering and fine-tuning to achieve the best performance is a future challenge. There are many points to consider, like cost and computation time. It will require trial and error. While GPT-4 indeed performs well with an accuracy around 0.8 for this task, its cost is high, and implementation isn't always straightforward. Even in such cases, the new weapon of fine-tuning has come into our hands, increasing our options and potentially moving us a step forward in problem-solving.

How was it? I would like to introduce more experiments and their results here in the future. Stay tuned!




Copyright © 2023 Toshifumi Kuga. All rights reserved



Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

"Llama2" is a great LLM as it is Open source and for commercial use. I want to try many applications with this language model.

Hi friends, I would like to introduce a new LLM released by Meta on July 18, 2023, called "Llama2". I ran some experiments with this model. Let us start!

 

1. What is Llama2?

"Llama2" is a language model from Meta AI. Many researchers are very excited because it is open source and available for commercial use. Its specs are summarized in the table below.

 
 

2. Let us extract information from English text

I want to perform a small experiment to extract the following information from text:

  • sentiment

  • root cause of the sentiment

  • name of product

  • name of makers of product

I wrote my prompt and a fictional complaint email, then ran Llama2 13B chat. Here are the results.

Wow, this looks good! I can obtain the information I need from the text. Unfortunately, the model cannot output it in Japanese.
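For reference, the setup might look roughly like the sketch below, assuming access to the gated meta-llama/Llama-2-13b-chat-hf weights on Hugging Face. The prompt wording and the [INST] wrapping are simplified assumptions on my part, not the exact prompt used above.

```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "meta-llama/Llama-2-13b-chat-hf"  # gated model: requires approval from Meta
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline("text-generation", model=model_id, tokenizer=tokenizer,
                     torch_dtype=torch.float16, device_map="auto")

complaint_email = "..."  # the fictional complaint email used in the experiment goes here

# A simplified Llama2-chat style prompt asking for the four items listed above.
prompt = f"""[INST] Read the email below and extract:
1. the sentiment
2. the root cause of the sentiment
3. the name of the product
4. the name of the maker of the product

Email:
{complaint_email} [/INST]"""

output = generator(prompt, max_new_tokens=256, do_sample=False)
print(output[0]["generated_text"])
```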

 

3. Let us see how it works against Japanese sentences

Next, I would like to apply the same prompt to Japanese sentences.

Wow, this looks good too, although the model cannot output in Japanese either.

 

4. Llama2 has a great potential for AI applications in the future!

Today I found that Llama2 works very well in English. When we want to minimize the running costs of AI applications or keep confidential data within our organization, this model is a good candidate. It is great to have many choices of LLMs in addition to proprietary models such as ChatGPT.

 
 

I also want to mention a great repo on GitHub that makes it easier to compare many open-source LLMs. I strongly recommend it to everyone interested in LLMs. Thanks, camenduru!

Thanks for your attention! I will follow the progress of Llama2 and share updates with you soon. Stay tuned!


Copyright © 2023 Toshifumi Kuga. All rights reserved


Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

"Tree of Thoughts" can go mainstream in prompt engineering!

Today I found a very interesting paper called "Tree of Thoughts (ToT)" (1). With ToT, we can solve tasks that we could not solve before, so I want to share it with you and consider how it works. Let us start now!

1. Chain of Thought (CoT)

This paper compares four kinds of prompting, as the chart below shows. The leftmost, "IO prompting", is relatively simple; the rightmost, "Tree of Thoughts (ToT)", is the most complex.

Among the four kinds of prompting, I focus on Chain of Thought (CoT) first because it gives us the fundamental space to explore. The paper says, "The key idea is to introduce a chain of thoughts z1, ..., zn to bridge x and y, where each zi is a coherent language sequence that serves as a meaningful intermediate step toward problem solving". With CoT, we get a prompting method that improves the reasoning abilities of LLMs and lets them solve complex tasks more effectively. Once we understand how CoT works, let us move on to ToT.

 

2. Tree of Thoughts (ToT)

Let us expand CoT with tree search so that we can apply it to more complex tasks effectively. The paper says, "we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving." Sounds great! OK, let us consider how it works.

ToT is implemented in four steps, which I explain one by one below; a minimal code sketch of the whole loop follows the list.

  • decompose the process into thoughts

    • each thought should be small enough so that LLMs can generate promising and diverse samples

  • generate states

    • generate potential thoughts from each state. There are two kinds of methods to do this according to this paper.

  • evaluate each state

    • LLMs evaluate each state to decide how a tree should grow

  • search for the best state

    • If the current state is not promising enough, we search other branches. There are several search algorithms for this, such as breadth-first or depth-first search.
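Putting the four steps together, the breadth-first variant of the search loop can be sketched as follows. This is a toy, self-contained example: the propose and evaluate functions here are simple Python stand-ins (building a target word letter by letter), whereas real ToT would implement them as LLM calls.

```python
TARGET = "TREE"

def propose(state):
    """Step 2: generate candidate next thoughts (here: append one more letter)."""
    return [state + c for c in "ERST"]

def evaluate(state):
    """Step 3: score a partial state (here: number of positions matching TARGET)."""
    return sum(a == b for a, b in zip(state, TARGET))

def tot_bfs(steps=4, breadth=2):
    frontier = [""]  # step 1: the problem is decomposed into one thought per letter
    for _ in range(steps):
        candidates = [s for state in frontier for s in propose(state)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:breadth]  # step 4: keep only the most promising states
    return frontier[0]

print(tot_bfs())  # -> "TREE"
```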


3. ToT can be solved by MCTS

Although ToT can be searched with relatively simple tree-search algorithms, we could also use more advanced ones, such as Monte Carlo Tree Search (MCTS). MCTS has been famous since AlphaGo defeated a professional human Go player in March 2016. In AlphaGo, MCTS is combined with neural networks; this is sometimes called "model-guided tree search", and it removes the need to search the entire state space. In the picture, Demis Hassabis, Google DeepMind's CEO, explains how it works (2).

It will be exciting if ToT can be searched with MCTS in the near future, as wider and deeper states could be explored, which should give us better results.

 

Thanks for your attention! I would like to follow the progress of ToT and share it with you soon. Stay tuned!

 

1) “Tree of Thoughts: Deliberate Problem Solving with Large Language Models” Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan, 17 May 2023, https://arxiv.org/abs/2305.10601

2) Using AI to Accelerate Scientific Discovery | Campus Lecture with Demis Hassabis, https://www.youtube.com/watch?v=Ds132TzmLRQ&t=1381s

 



Copyright © 2023 Toshifumi Kuga. All rights reserved



Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.

"Function calling" is a game changer, as GPT can access outside systems and be turned into our agent easily!

Today, I want to create a website describing a collection of Japanese sweets, just like the "Dorayaki" in the picture above. So I ordered my AI agent to create an awesome website. But is it really possible? I am sure it is! As you know, OpenAI created GPT, a very intelligent large language model (LLM). On 13 June 2023, OpenAI introduced "Function calling", which can bridge GPT to outside systems, APIs, and functions. Let me explain step by step!

 

1. What is the advantage of "Function calling"?

Function calling makes it easy for GPT to access outside functions. For example, when you want to create a website where Japanese sweets are explained to customers, you need to connect GPT to a function that can write the website's HTML/CSS. With "Function calling", GPT can call this function and pass it parameters such as the explanations of the Japanese sweets. The official documentation says, "The latest models (gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature."

 

2. The list of "functions" is the key to setting up "function calling"

"Function calling" looks great! But how can we implement it in our code? It is quite simple: just prepare a list of functions. Each entry should have

  • "name"

  • "description"

  • "parameters" : "type" , "properties", "required"

In ChatCompletion.create, we add "functions=functions" because we want the model to be able to call the function; the rest of the code barely changes. The code below shows an example of such a functions list, along the lines of the official documentation. Please look at the docs for the details if needed.
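Since the original screenshot is not reproduced here, the following is a sketch in the spirit of the official example (the well-known get_current_weather function), assuming the 2023-era openai==0.28 SDK; it is not the exact code from the documentation.

```python
import openai  # assumes openai==0.28 with OPENAI_API_KEY set

# The "functions" list: each entry has a name, a description, and a JSON-schema "parameters".
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string",
                             "description": "The city and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    functions=functions,      # the only real addition compared to a normal call
    function_call="auto",     # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    print(message["function_call"]["name"])       # "get_current_weather"
    print(message["function_call"]["arguments"])  # JSON string, e.g. {"location": "Boston, MA"}
```

The same pattern applies to the website-building agent above: the function that writes the HTML/CSS is described in the list, and GPT fills in its parameters, such as the title and the explanation text.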

 

3. Let us see how the generated website looks

OK, it is time to see the result from our agent. I instructed the agent to "Create a website for a pretty Japanese sweets collection". The "title" and "explanation" texts were generated by GPT-3.5-turbo and passed to the function that builds the web page. Here is the result, all written in Japanese. The title means "a pretty Japanese sweets collection", and the explanation sentences are pretty good! I do not think they need any fixing or modification at all.

If you want to know more details with the code, you can see it here.

https://github.com/TOSHISTATS/Wagashi-Collection-Web-Generation-agent-by-GPT3.5#readme

 

I hope this gives you a sense of how AI agents work. The potential use cases of "Function calling" are limitless. I tried several of them and found that it can be a game changer for developing LLM application systems. I will update my articles about AI agents built with OpenAI GPT soon. Stay tuned!

 
 
 

Copyright © 2023 Toshifumi Kuga. All rights reserved

Notice: ToshiStats Co., Ltd. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. ToshiStats Co., Ltd. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on ToshiStats Co., Ltd. and me to correct any errors or defects in the codes and the software.