Please Note: This is an Exponential Dutch Auction (no settlement). The price at which you purchase is the price you will pay.
An ideal human being is one big optimization problem. We have a goal, and we optimize our actions to maximize its outcome. In our earliest days we optimize to stay alive; in school we optimize to study well, gain skills, and obtain the best scores.
As we progress in our careers, we optimize for money, material growth, and maybe even influence and fame.
As we age, we optimize to minimize disease and maximize our lifespan - we work out, we eat healthily, we retire, and we rest.
As ideal as it sounds, how do we know which exact actions would maximize our outcome? At this very moment, instead of reading this, should you be doing something else that would make your life's outcome better? Maybe, maybe not - we don't know... although we do find out through trial and error.
In the grand scheme of things we're just agents exploring the actions around us, learning which ones increase our reward and choosing the path that seems to lead to the maximum reward. Cerebellum is a window (pun intended) into the human mind - a continuously evolving art piece built on top of a reinforcement learning agent that is learning, optimizing, and evolving as you watch it.
The project starts with an untrained reinforcement learning agent deployed in an environment that offers both positive and negative rewards. The agent is equipped with a set of eyes that let it sense the objects around it and judge whether to go ahead and consume an object, or abstain and move in a different direction. The agent is optimizing to maximize its positive rewards while avoiding the negative ones. How does the agent know whether a reward is positive or negative? It learns that only after experiencing a number of samples. And the magic is - all of this happens in your browser.

The seemingly random motion of the agent at the beginning, running into everything it sees, subsides with time as it learns more about the environment and goes only for the positive rewards. How fast does this happen? That depends on your luck - every output has an agent with a different number of eyes and a different sensing range. In addition, with every refresh your agent starts from a new place and takes a new path - a path that might lead it to learn at a different pace, depending on what it comes across as it explores.

To add some practicality, the environment also has walls: elements that simply block the agent from moving and neither add nor remove reward. A wall hinders the agent's movement and forces it to find a different path - sometimes this is exactly what the agent needs in order to strategize, sometimes it is just a hindrance... I'll leave that to your imagination as you watch the agent in action.

The seemingly random motion - the path towards true understanding - the process of optimization is captured by the windows. Windows that show the beauty of this process expressed in a variety of ways.
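For the curious, here is a minimal sketch of the sense-act-learn loop described above, written in plain JavaScript. It is not the project's actual code: the environment, the names, and the simple Q-table are invented for illustration, standing in for the Deep Q Learning agent the artwork really uses. The shape of the loop is the same, though - the agent reads what each eye sees, picks a move, receives a positive or negative reward, and adjusts its estimates by trial and error.

// Hypothetical illustration only - not the artwork's code.
const NUM_EYES = 3;        // each eye reports: 0 = nothing, 1 = good item, 2 = bad item, 3 = wall
const NUM_ACTIONS = 4;     // e.g. turn left, turn right, go straight, stop
const EPSILON = 0.1;       // exploration rate: how often to try a random action
const ALPHA = 0.1;         // learning rate
const GAMMA = 0.9;         // discount factor for future reward

const qTable = new Map();  // state key -> array of estimated action values

function qValues(state) {
  const key = state.join(',');
  if (!qTable.has(key)) qTable.set(key, new Array(NUM_ACTIONS).fill(0));
  return qTable.get(key);
}

function chooseAction(state) {
  // Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
  if (Math.random() < EPSILON) return Math.floor(Math.random() * NUM_ACTIONS);
  const q = qValues(state);
  return q.indexOf(Math.max(...q));
}

function learn(state, action, reward, nextState) {
  // One-step Q-learning update.
  const q = qValues(state);
  const nextBest = Math.max(...qValues(nextState));
  q[action] += ALPHA * (reward + GAMMA * nextBest - q[action]);
}

// A toy stand-in for the real environment: each eye randomly sees something.
function senseEnvironment() {
  return Array.from({ length: NUM_EYES }, () => Math.floor(Math.random() * 4));
}

function step(state, action) {
  const nextState = senseEnvironment();
  let reward = 0;
  if (action === 2 && state[1] === 1) reward = +1;  // moved straight onto a good item
  if (action === 2 && state[1] === 2) reward = -1;  // moved straight onto a bad item
  return { nextState, reward };
}

// The loop the artwork runs continuously in the browser.
let state = senseEnvironment();
for (let t = 0; t < 10000; t++) {
  const action = chooseAction(state);
  const { nextState, reward } = step(state, action);
  learn(state, action, reward, nextState);
  state = nextState;
}

In the artwork itself the lookup table is replaced by a small neural network (a Deep Q Learning agent adapted from ConvNetJS), which is what lets the agent generalize from the handful of situations it has actually experienced.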
Cerebellum is a window into the black box of AI, an unapologetic exaltation of the unpolished, an ode to the process and its aesthetic vernacular.
On mobile, tap to switch from 2D to 3D.
Library
three@0.124.0
Display Notes
This is an interactive/animated artwork.
Creative Credits
Most of the code belonging to the Deep Q Learning agent is a modified version of Andrej Karpathy's ConvNetJS. This portion of the code is covered by the following: The MIT License, Copyright (c) 2014 Andrej Karpathy.
Charitable Giving
15% of the proceeds above the resting price will be donated to Room to Read. Room to Read is creating a world free from illiteracy and gender inequality by helping children in historically low-income communities develop literacy skills and a habit of reading.
Artist Website
Project ID
0x99a9b7c1116f9ceeb1652de04d5969cce509b069-412
License