Which AI is more human?

Created on Dec. 18, 2013, 4:11 p.m.

Disclaimer: I'm not a historian, or a student of AI, so please forgive any inaccuracies in my historical retellings...

Every now and again an article comes floating around the internet lamenting the passing of the days of Good Old-Fashioned AI research, when men were men, and researching AI was like the adventures of a computer science version of Indiana Jones.

Good old-fashioned AI was about using logic and reasoning to produce an intelligence. The approach was to make high level observations about a particular task or problem, and to formalize these in a logical and consistent way that would allow a computer to solve the same task when the situation varied.

The ultimate goal of this research was to create a machine with human-like logic and reasoning skills. This machine could then be given semantic knowledge about something or other, and use its skills to derive solutions to new and unique problems.

But in the 1970s the AI Winter hit. This method of AI research, which had never claimed to be easy, had stagnated, and failures had accumulated. Funding was cut, and for a long time it looked like AI was dead.

A new approach to AI appeared and slowly gained traction. It was based upon statistical methods and learnt from data, processing banks of information to try and make intelligent decisions.

Rather than reasoning, these methods appeared at face value to use mathematical tricks and brute force to get results. To the researchers from the good old days it was all artificial and no intelligence. The new approach to AI was cold and mechanical. It was dubbed machine learning and damned by many for not being human.

Nowhere was this clearer than in the research of natural language. Early on, great progress was made by Noam Chomsky and many other researchers in the understanding of the logical and systematic rules that encode natural and artificial languages. These opened large vistas of understanding and research in many other fields as well as language, but eventually their practicality for real applications such as machine translation reached a stopping point.

It was so bad that IBM Researcher Frederick Jelinek became famous for his often quoted statement "Every time I fire a linguist, the performance of the speech recognizer goes up".

Natural language, with all its irregularities, was just not possible to encode in a handful of rules. Every slight variation and peculiarity broke these methods. The results were simply not of high enough quality to progress, and processing times ballooned.

These days Google traverses the web and builds huge statistical models of language, which it uses to do its translation. All logic and reason is left by the wayside, and even the well-established structures of language are ignored. See: How I learned to stop worrying and love statistics.


If human intelligence really were characterized by logic and reason, it would follow that I could teach someone a foreign language simply by handing them a sheet with the grammar on it, and a list of words with their meanings.

Provided our brains really were the logical reasoning machines assumed by early AI research, that should be enough for them to learn the language.

Of course this is not true. Even if I know you are not lying, you cannot teach me something just by telling me it. While some humans are good at logic, none of us are good enough to build those kinds of connections in our brains without struggling.

On the other hand, all of us are excellent at pattern matching. This means the inverse approach to teaching is almost always better. It is better to give someone a lot of examples, and to watch as they deduce for themselves the logical rules that govern the system.

This isn't specific to language. Even in an extremely logical domain such as mathematics the most effective teachers teach examples first and the generalizations after. Talking about generalizations first and being specific later is an easy way to confuse your students.

Humans appear to learn the same way machines do.


Imagine we are trying to write an AI which can distinguish images of apples from images of oranges. In machine learning we give it some training data with correct answers, and from this it learns how to classify new fruit.

If this system encounters apples 75% of the time in training, it will bias its classification toward apples, and only pick oranges if it is really sure of the result. This is called a prior probability.
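
To make this concrete, here is a toy sketch in Python of how a prior learned from the training data might enter a classifier. The redness feature, the likelihoods, and the numbers are all invented purely for illustration, not taken from any real system.

# Toy classifier sketch: a class prior learned from training data
# biases the decision. "redness" is an invented feature in [0, 1].
def classify(redness, prior_apple=0.75):
    # Invented likelihoods: apples are assumed redder than oranges.
    likelihood_apple = redness
    likelihood_orange = 1.0 - redness

    # Bayes' rule, up to normalization: posterior is likelihood times prior.
    score_apple = likelihood_apple * prior_apple
    score_orange = likelihood_orange * (1.0 - prior_apple)

    return "apple" if score_apple > score_orange else "orange"

# A borderline image: with the 75% apple prior the system still says apple,
# but with an unbiased 50% prior the evidence alone says orange.
print(classify(0.4))                   # apple
print(classify(0.4, prior_apple=0.5))  # orange

Remove the prior and the decision depends only on the image itself, which is exactly what the old school would have preferred.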

Think for a second how horrible this idea sounds to good old-fashioned AI researchers. The information as to whether it is an apple or an orange is fully encoded in the image! The decision as to whether it is an apple or an orange has nothing to do with what this system has encountered before!

The idea of probability as a whole appears horrible to these researchers. A thing is not 60% an apple and 40% an orange. It is either one or the other.

But prior probabilities have become a cornerstone of machine learning methods, and are disregarded by researchers only at great cost. There is a very good reason for this: prior probabilities, like many other aspects of machine learning, are very human in their origin.

Prior probability is linked to the human concept of experience, which is embodied in the storyteller's guidance of show, don't tell.

Many stories can be boiled down to a couple of sentences that state the point the story is trying to make, for example love conquers all or greed doesn't pay. But simply telling someone this sentence doesn't have nearly the same impact as getting them to read a novel about it. Nor is it nearly as enjoyable, or accurate.

A novel adds weight and experience to certain situations. As humans we are naturally reserved creatures, and we need engagement and evidence to believe something is true. Someone cannot simply tell it to us.

This is like the prior probability in statistical models. It makes us wary of things we have not seen before and gets us to hedge our bets when other factors make us uncertain. The real world requires scientific analysis to understand. Is it any surprise that machines should need to do this too?

The new AI has moved the logic from our conscious mind to our subconscious and biological minds. It has rephrased the question from how do we do this? to how do we learn to do this? Perhaps in some people's minds we have still made a machine that is dumber than before. If that is the case, why can it do so much more?

Not only does the new approach perform much better at many tasks, but it is arguably more human too. This raises a number of questions.

Are we really intelligent? Are we really in control? Are we really logical? I know what I think...
