Building a Research Visual Map Using Whimsical

Tracking my experiments has become increasingly challenging. In my Reinforcement Learning using Layered Morphology (RLLM) research, I attempt to instill ethical alignment in Large Language Models (LLMs) developmentally. To achieve this, I run a variety of training experiments. The most extensive one involved 24 steps, or training sets (RLLMv1), but I have since identified a simpler set of 10 steps, which I've already executed four times (RLLMv3, RLLMv7, RLLMv8, and RLLMv9). The challenge has been tracking all of the changes, including the datasets and the trained models. Fortunately, I discovered Whimsical, a tool well suited to this problem. (If you're having trouble tracking your own projects, you may want to try the approach I share here.)

What is Whimsical?

Whimsical is a versatile workspace designed for brainstorming and collaboration, offering features to create documents and boards, including mind maps, flowcharts, wireframes, and sticky notes. It's an effective tool, though I'm open to exploring others that may be even more efficient. You can visit the Whimsical website for more details, as my focus here is on how I've utilized it rather than the full range of its capabilities.

The Role of the Visual Map

Imagine a map that illustrates the journey of my alignment research. Each point, or node, represents a segment of an RLLM experiment, from its inception through the testing and learning phases. This visual representation acts as a detailed timeline, documenting each stage of the training process. It offers invaluable insight into the developmental evolution of the language models, a significant resource for anyone interested in the nuances of how they were trained.

Navigating Through the Visual Map

As you navigate the map, you'll encounter various icons—some cheerful, some less so. Each one provides a succinct summary of an experiment's results. Delving into each node, you'll find datasets, observations, training checkpoints, and blogs derived from the experimental reports, offering a wealth of insight into the RLLM experiments as a journey of discovery.

Understanding Successes and Setbacks

The true value of this visual map lies in its user-friendliness. It transparently presents both triumphs and setbacks, emphasizing the importance of each experiment. As I tackle the complex issue of ethical alignment in language models, this visual tool simplifies the task of monitoring each iteration's progress and outcomes. It reinforces the notion that every effort, whether failed (e.g., RLLMv7, RLLMv8, and RLLMv9) or fruitful (e.g., RLLMv1, RLLMv2, and RLLMv3), contributes to the AI's learning process.

Closing Thoughts

This visual map is more than a mere instrument; it narrates the story of my research. For the inquisitive minds keen to learn about the evolution of my work, it provides a window into the authentic experiences of attempting to instill ethical alignment in language models. Should you have inquiries about the visual map or my research, please feel encouraged to engage in the comments below.


If you found this post helpful, consider clicking the share button or exploring my posts on mental modeling in the age of generative AI. Additionally, if you are interested in the services I offer, please feel free to browse them here. Also, consider checking out Doc Ligot's webinars on Generative AI; the link is available here.
