AGI inference from specific examples
The idea is that there are certain types of situations where you can infer the behaviour of a system by sampling its average interactions. If you speak to 100 people and ask them their height, they're going to give you a range of answers, but you can infer something about the behaviour of people's heights by averaging the heights you queried. You get some derivative information from this too, which is to say you can look at the variance around the mean and understand that height in adults tends to vary by some amount. If you were an alien coming down to Earth, you could get a good approximation of the behaviour of adult human heights this way.
The trick here is that the thing you are measuring has to be well-behaved. If there's some non-linearity, some fat tail, then a single outlier can throw you off completely. You can't infer much about the earning potential of adult humans by asking 100 of them about their income.
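To make the contrast concrete, here is a minimal sketch using simulated data (the distribution parameters are illustrative assumptions, not measurements): a sample mean of 100 heights barely moves from one group of 100 to the next, while a sample mean of 100 Pareto-distributed incomes swings wildly depending on who you happen to ask.

```python
import random
import statistics

random.seed(0)

def sample_means(draw, trials=5, n=100):
    """Average of n draws, repeated over several independent groups of n."""
    return [statistics.mean(draw() for _ in range(n)) for _ in range(trials)]

# Heights: thin-tailed (roughly normal). Every group of 100 gives nearly
# the same answer, so the alien's estimate is trustworthy.
height_means = sample_means(lambda: random.gauss(170, 10))
print([round(m, 1) for m in height_means])

# Incomes: fat-tailed (Pareto). One outlier can dominate the average,
# so different groups of 100 disagree by enormous amounts.
income_means = sample_means(lambda: random.paretovariate(1.2) * 30_000)
print([round(m) for m in income_means])
```

The point is not the specific numbers but the stability: the height means cluster tightly around the true value, while the income means scatter, so asking 100 people tells you almost nothing reliable.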
The fixation of many people right now, and of nearly everybody in the AI community, is predicting the impact that AI and AGI will have on society. We want to know in what way this thing we are building will be good or bad for us.
The first example that nearly everyone turns to is the idea of a tutor in your pocket. Large language models are not perfect, but they get us a massive way towards having an expert-level tutor in our pocket at all times. We should start to see non-linearities right away from this. Since I started using ChatGPT-like systems my general level of productivity has risen by some large amount. The increase is substantial: I get more done, I spend much less time debugging, there's far less drudgery. For simplicity let's say I've had a 2x improvement in productivity. My dad, who works in construction, got ChatGPT to write him a chord progression when I first showed it to him. He uses it from time to time, but his productivity hasn't 2x'd. This looks like a non-linearity to me.
But the inclination is to say that because I am 2x more productive, society will start to accelerate at this rapid pace. The great stagnation is over. Tyler Cowen stops blogging. We all settle into our cottagecore homes on Mars. But this is a fallacy. I will admit that something like this looks like the type of thing to have large effects. BUT the prediction of large effects is completely different from the prediction of what those large effects are.
My friends growing up are smarter than you, yet you will call them dumb.
I grew up relatively poor. The guys I hung around with mostly went on to manual labour jobs and are making decent money right now. They're certainly making more money than most of the people who went to college. Speak to them and they don't give off the impression that they're intellectuals, though. But they can instinctively turn a screwdriver the right way without repeating some stupid mnemonic. They can hit a nail with a hammer perfectly every time, with full force, without missing; in fact they could probably do that from the first day they tried. Meanwhile, some college-educated knob will look down on these people when they ask if he saw the match at the weekend, while he can't hold a drill for two seconds without it shaking all over the place, looking like a moron.
What's the point of this besides sticking up for the little guy? (He's probably bigger and stronger than you too.) It's that there was so much panic in the past about the threat of automation to non-college-educated men, yet it seems the largest threat has come for college-educated men. These people you are concerned about are stupid in your eyes because you've been fed the lie that you are special since you were a child. They may end up being far more adaptable and robust than some politician or journalist will ever be.
It's a lot easier for these people to find some other thing to do or work on. It's the white-collar worker who has spent most of his life in a box, passed his tests, worn his tie or later his sweater, and never had to expand out of his comfort zone.
Again, what's the point of all of this? It's that our predictions about what might happen in the future are subject to all kinds of errors and lenses of distortion.
Errors by induction and errors by probability
There seem to be two kinds of errors. The first is the kind I was speaking about before. I followed Gary Marcus for a while to see what he was talking about. I have never been more put off by a person in my life. He sticks to the notion that symbolic AI is the only path forward, and that we need to understand how these things work, and so on. He often makes the error of generalisation.
He and people like him acknowledge the scale of these systems, yet believe that something of this complexity can be brought under control or understood. We have a rough idea of how airplanes work, but they still fail from time to time. Given this prompt, Marcus will say something like: they work very, very often, and we actually have a good understanding of how an airplane works and how it usually fails.
AI is not a washing machine. Predicting the effect of AI on society is a fool's errand. We barely understand how the economy works, even though we've been thinking about it for several centuries, and we waste millions annually trying to predict what it will do in the future.
It's a useless endeavour. The only thing I can think is that AI is like adding 100 interns to each individual person who uses it. If ChatGPT has 100M users, then we have the potential for an extra 10B people in existence. Economic models have to go in the bin.
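As a back-of-envelope check of that analogy (both the user count and the "100 interns" multiplier are the essay's rough assumptions, not data):

```python
users = 100_000_000        # assumed ChatGPT user base from the text
interns_per_user = 100     # the "100 interns per person" analogy
effective_people = users * interns_per_user
print(f"{effective_people:,}")  # 10,000,000,000 -- an extra 10B "people"
```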