We are rapidly approaching the half-way mark for the competition where we will be giving away $10,000 of AWS credits split between the top 20 entrants. To make your job easier, we are releasing a few extra features and also providing more information about the competition tests.
Yesterday morning I wrote a simple algorithm for entry into the Animal-AI Olympics. At this very early stage it tops the leaderboard and beats all our preliminary deep reinforcement learning benchmarks. The agent follows purely hand-coded rules based on the simple idea that moving towards green and yellow things is good and that red things are bad and should be avoided (this is generally true in the Animal-AI world). This submission would currently put you in line to win $8,000 worth of prizes, and it provides a competitive baseline for other, more powerful methods.
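The colour-seeking rule described above can be sketched in a few lines. This is a hypothetical illustration, not the actual submitted code: the observation format (an RGB array), the colour tolerance, and the discrete action codes are all assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical sketch of a hand-coded colour-seeking policy.
# Assumptions (not from the actual entry): obs is an (H, W, 3) RGB
# array with values in [0, 1]; actions are 0 = forward, 1 = turn
# left, 2 = turn right.

def colour_mask(obs, target, tol=0.25):
    """Boolean mask of pixels within `tol` (Euclidean RGB distance) of a target colour."""
    return np.linalg.norm(obs - np.asarray(target), axis=-1) < tol

def select_action(obs):
    # Green and yellow pixels are "good", red pixels are "bad".
    good = colour_mask(obs, (0.0, 1.0, 0.0)) | colour_mask(obs, (1.0, 1.0, 0.0))
    bad = colour_mask(obs, (1.0, 0.0, 0.0))
    if bad.sum() > good.sum():
        return 1  # mostly red in view: turn away
    h, w = good.shape
    left, right = good[:, : w // 2].sum(), good[:, w // 2 :].sum()
    if abs(left - right) <= 0.05 * good.sum() + 1:
        return 0  # target roughly centred (or nothing visible): move forward
    return 1 if left > right else 2  # turn towards the greener half
```

A policy like this needs no training at all, which is exactly why it makes such a useful sanity-check baseline for learned agents.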
This post argues for the opposite view to the previous post in the series. The question of whether (or when) we will create artificial consciousness is one of the most important of our time. Whether or not AI will be conscious will determine our moral responsibility towards it, and, perhaps, its attitude towards us. In this post I give five reasons to believe that we will never create such entities.
The question of whether (or when) we will create artificial consciousness is one of the most important of our time. The relationship we will have with future conscious AI could determine the future of the human race, indeed, of conscious life, in our little corner of the universe. In this post I give five reasons to believe that we will create such entities sooner rather than later.
A follow-up to the non-technical interactive KL-Divergence and Gradient Descent tutorials, showing gradient descent on the final six-variable example from the initial post.
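The combination the tutorials build towards can be condensed into a short sketch: gradient descent on the KL divergence between a fixed six-category target distribution and a softmax-parameterised approximation. The target values and step size below are illustrative placeholders, not the numbers from the tutorial; the only substantive fact used is that the gradient of KL(p || softmax(theta)) with respect to theta is q - p.

```python
import numpy as np

# Illustrative sketch (not the tutorial's actual example): minimise
# KL(p || q_theta) where q_theta = softmax(theta), using plain
# gradient descent. For this parameterisation the gradient is q - p.

def softmax(theta):
    e = np.exp(theta - theta.max())  # shift for numerical stability
    return e / e.sum()

def kl(p, q):
    """KL divergence between two categorical distributions."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])  # placeholder 6-variable target
theta = np.zeros(6)                              # start from the uniform distribution

for _ in range(500):
    q = softmax(theta)
    theta -= 0.5 * (q - p)  # gradient step: d KL / d theta = q - p
```

After a few hundred steps the fitted distribution softmax(theta) sits essentially on top of p, which is the behaviour the interactive version lets you watch unfold.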
Integrated Information Theory (IIT) starts from five axioms, aimed to capture the essential aspects of every possible experience. These five axioms are then used to construct an information-theoretic theory of consciousness based on the physical cause-effect properties of a system. I propose to explore what happens when the axioms are translated to a level much closer to the original starting point, the representational level. This first installment looks at the axioms of IIT that we will take as the starting point for our theory, and suggests some modifications.
In previous posts I tried to 'save' our everyday intuitions about experience. In this post I introduce the reverse playing card experiment, and my direct experience sampling results, both of which show my previous conclusions to be wrong. I then introduce my current attempt to think about the problem, through Dennettian Predictive Processing.
I present a classic experiment and show a nice example of the lack of precision in our peripheral vision. I argue that this doesn't show that we are commonly mistaken about the contents of consciousness, but just that the contents generally include high-level predictions of the world.
Is your refrigerator light on when the door is closed? How can you ever know? Perhaps consciousness works in the same way. Are you conscious when you're not specifically noticing it? How can you ever know? This post includes a tool for discussing this question that also works well as a mindfulness reminder.
Douglas Hofstadter thinks you are a strange loop. I think he's right. Unfortunately, the two best examples of strange loops are Gödel's incompleteness theorem and the human brain, neither of which are particularly easy to understand. In this post I break down the key components of strange loops (without too much logic). In the following weeks I will cover the ways in which we are strange loops, and how we can use modern AI to build them.
AI improves along the dimension that we use to measure it. If we use a human-inspired definition of intelligence to determine our measures of success, we should expect more human-like AI. If we use a machine-oriented definition of intelligence, we should expect less human-like AI. I analyse the two different definitions of intelligence and conclude that, whether wise or foolish, we are currently walking the path towards human-like AI.
In the Benevolent Artificial Anti-Natalism scenario it is imagined that a superintelligence, not being susceptible to existence bias, might realise that human suffering is inevitable and use its powers to compassionately prevent the continued existence of the human race.