This document is a starting point for navigating and understanding how I document my experiments and share their results. That said, my experiments are mainly aimed at finding out how to build a substrate on which the Cognitive Mechanisms can be implemented.
Explorations
At this point I haven't created that many experiments, but I envision them covering cellular automata, parallel computing strategies, server-client architectures, and so on.
On organizing experiment notes
Each experiment contains a ton of ideas that are hard to organize in a single text file. I need to be able to play with these ideas now and still revisit them later.
I think a good approach for now is to handle this in a component-based way, like when writing UI code.
For this reason, I'm creating a folder of unorganized notes for each experiment. I call these types of notes "Clip Notes" and use a dedicated icon to identify their type.
I also created a document type that represents tests for a given experiment. These are identified with the 🧪 icon. Each of these documents corresponds to a tag in the experiment repository so that I can keep track of each test's results. Since these are mostly code-based experiments, tests basically represent approaches or versions of an experiment until I reach a point of success or failure.
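To make the tagging idea concrete, here's a minimal sketch of what creating such a tag could look like, assuming a Node.js environment with git on the PATH; the naming convention, experiment name, and outcome labels are my own illustration rather than a fixed scheme.

```typescript
// Hypothetical convention: <experiment>/test-<number>-<outcome>.
// Every name below (experiment id, outcome labels) is illustrative only.
import { execSync } from "node:child_process";

type Outcome = "success" | "failure" | "open";

function tagTest(experiment: string, testNumber: number, outcome: Outcome): void {
  const tag = `${experiment}/test-${String(testNumber).padStart(2, "0")}-${outcome}`;
  // Create an annotated tag whose message points back to the matching 🧪 document.
  execSync(`git tag -a ${tag} -m "See the 🧪 note for ${tag}"`);
}

// Example: mark the third approach of a cellular-automata experiment as a failure.
tagTest("cellular-automata", 3, "failure");
```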
Published Experiments
Experiment Ideas
- Think of the words needed to communicate with a [personal assistant, npc in a video game, etc] by manually simulating a real conversation. For example, for a video game, visualize the conversations and what you're doing, then pay attention to those words. For a personal assistant, write use case stories (literal stories) with dialogs and events that occur.
- Random idea: what would happen if each component within a UI framework were an agent with some degree of autonomy, or some kind of complex ego cell, and what if we used that to help us debug code or to validate data and operations? This would make sense using a non-square ego cell setup, one that works with visual and hierarchical proximity instead. A minimal sketch of this idea follows below.
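Here's that sketch, assuming a generic component tree; every name in it (AgentCell, link, receive, hear) is hypothetical and not tied to any real UI framework.

```typescript
// A minimal sketch of the idea: each UI component doubles as a small agent
// that validates its own data and gossips problems to nearby agents.

interface Report {
  from: string;
  message: string;
}

class AgentCell {
  private neighbors: AgentCell[] = [];

  constructor(
    public readonly id: string,
    private validate: (data: unknown) => string | null, // null means "data looks fine"
  ) {}

  // "Proximity" here is structural: link a component to its parent, children,
  // or visual neighbors so complaints travel along the UI structure.
  link(other: AgentCell): void {
    this.neighbors.push(other);
    other.neighbors.push(this);
  }

  // On each update, the agent checks its own data and, if something looks
  // wrong, tells its neighbors instead of failing silently.
  receive(data: unknown): void {
    const problem = this.validate(data);
    if (problem !== null) {
      this.broadcast({ from: this.id, message: problem });
    }
  }

  private broadcast(report: Report): void {
    for (const n of this.neighbors) n.hear(report);
  }

  hear(report: Report): void {
    console.log(`[${this.id}] neighbor ${report.from} reports: ${report.message}`);
  }
}

// Example: a list component and one of its rows watching each other.
const list = new AgentCell("UserList", (d) => (Array.isArray(d) ? null : "expected an array"));
const row = new AgentCell("UserRow", (d) => (d == null ? "row received a null user" : null));
list.link(row);
row.receive(null); // UserList logs the row's complaint, which is the debugging aid.
```

The point of the sketch is the wiring: agents gossip along visual and hierarchical proximity rather than a global grid, which is what the non-square ego cell setup would provide.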