
Supplementary

“I just received the following note from one of our Inner Circle members.  Below the note is my response.”


Not so sure climate change is a big issue anymore. Bigger things to be concerned about…like robots taking over the world. Fracking is a good thing. When asked why he had changed his “super green tune” so much from 8 years ago, when the same guy last interviewed him (he is now 97 and was 88 the last time they spoke), he said, “we all grow up a bit with time.” :)

https://www.theguardian.com/environment/2016/sep/30/james-lovelock-interview-by-end-of-century-robots-will-have-taken-over

Brian Brittain


Mike’s response:

When you look at the burden of bias we carry and realize that good AI, through machine learning, absolves most of it, you see that Lovelock is onto the central theme, and the double-edged sword, of not just robots but of the freedom to evolve that AI has because it is not burdened with our born-in bias.

mike

7 replies on “Supplementary”

Mike, are you saying that machines can learn more objectively or purely than we can, whereas we will learn only what reflects our built-in biases?

Probably, but machines are able to rework their own code; that’s essentially the role of machine learning, as I understand it.

Humans have serious limitations when it comes to reworking our own code; we’re probably stuck there our whole lives…

mike

Machine learning has many facets to it and my take is that most of what we are seeing in the real world (as opposed to the academic / research world) is still just what the developers have programmed into the hardware.

Yes. There are evolutionary programming ideas out there where the code literally rewrites itself. But no one outside research labs is willing to put up with all the “failures” it takes to have them “learn from their mistakes,” especially for the high-profile, high-risk applications like self-driving cars! (Having done some simplistic forms of AI programming/training decades ago, I’d guess we aren’t much further than the level of paper-training a dog when it comes to true machine learning. And I’m not seeing surrogate parents for nascent AIs as a way to avoid human biases…whether those parents are biological or software-based.)
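
To make “the code rewrites itself” a bit more concrete, here is a minimal sketch of the evolutionary-programming idea: candidates mutate at random, get scored, and the failures are discarded. The target, population size, and mutation rate are made up purely for illustration; this is not any particular lab’s system.

```python
# Minimal sketch of the evolutionary-programming idea: candidate "programs"
# (here just bit strings) mutate, get scored, and the failures are discarded.
# Purely illustrative; the target and parameters are arbitrary.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]          # the behaviour we want to evolve toward

def fitness(candidate):
    """Goodness score: how many positions match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly flip bits; most mutations are 'failures'."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(100):
    # keep the best half, discard the rest ("learning from mistakes")
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
    if fitness(population[0]) == len(TARGET):
        print(f"solved at generation {generation}")
        break
```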

The way you train neural-network-based AIs is to give them input sets of data with some form of “goodness” scoring on the outputs. You feed that sort of training data through the system until it consistently gives you the (good enough) answers you want. Then you give it related inputs and see what you get for the outputs, correcting it as you go by giving it ongoing, but more time-delayed, goodness scores for a given result. (Like we used to do for voice-recognition software like Dragon Dictate.)
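
For concreteness, that training loop might look something like this toy version: a single artificial neuron learning the OR function from examples that carry a “goodness” signal (the desired answer). The data, learning rate, and update rule are illustrative assumptions, not how any production system is built.

```python
# Toy version of the "goodness scoring" loop described above:
# one artificial neuron learning OR from scored examples. Illustrative only.
import random

# training data: inputs paired with the answer we score as "good"
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.1

def predict(x):
    return 1 if x[0] * weights[0] + x[1] * weights[1] + bias > 0 else 0

for epoch in range(200):
    for x, wanted in examples:
        error = wanted - predict(x)        # the "goodness" feedback
        # nudge the weights toward answers that score better
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])   # should settle on [0, 1, 1, 1]
```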

This is in direct contrast to a rules-based AI, where you set acceptable boundaries and it is “good” while playing it safe. Think engineering tolerances rather than free-range parenting.
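
The rules-based contrast can be just as small: behavior is “good” so long as it stays inside boundaries someone wrote down ahead of time, and nothing is learned. The thresholds below are hypothetical, chosen only to show the engineering-tolerances flavor.

```python
# Rules-based contrast: behaviour is "good" while it stays inside fixed,
# human-set boundaries. Nothing is learned. Thresholds are hypothetical.
def cruise_control_action(speed_kmh, gap_m):
    """Follow pre-set tolerances rather than learned behaviour."""
    if gap_m < 20:
        return "brake"            # hard safety boundary
    if speed_kmh > 110:
        return "ease off"         # stay inside the speed tolerance
    if speed_kmh < 90:
        return "accelerate"
    return "hold"                 # inside all tolerances: play it safe

print(cruise_control_action(speed_kmh=120, gap_m=35))  # "ease off"
```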

We have some chance of seeing novel “thinking” coming from machines at some point. I believe Watson, the Jeopardy-playing system, was much more free-form in how it reached its answers than, say, Siri or the Google search engine, which are both almost certainly rule-based. Watson could do that without human-set boundaries because Jeopardy was low risk, like so many of the “game playing” AIs in research labs.

I saw something about IBM repurposing the Watson system to diagnose medical conditions and doing better than physicians. This still implies that we eventually know what was wrong in a given situation and that people can distinguish what symptoms are significant enough to report — neither of which *I* have much trust in, especially after hearing how my wife (a nurse) correlates actions and symptoms into some strange causal relationships. (“I forgot those essential oils I’ve been giving him and the boy is having a rougher day” sorts of comments.) We let people run systems with their odd thinking but won’t quickly let a computer make the same decisions because of our own biases against it as non-human.

Some people seem to be stuck on the question of how you punish a computer that makes a mistake that leads to (or allows) someone to die in cases where human decision makers feel it could have been prevented. And without a conscience, will people trust a system to make decisions that can cost human lives?

Apparently, we’re getting close with self-driving cars…and we have already seen cases where the question arises of which human suffers the retribution when people die at the hands of computers and others feel it was avoidable.

We are so human that even when we are educated, intelligent, and knowledgeable in an area, we still run the cognitive-bias programs as ways to unload our mental systems. What makes you think (phrasing intentional) that we aren’t layering those same sorts of biases into the code we write for the rule sets now, and soon into how a machine can “rewrite” its own code?

I understand your point that the machines don’t have the inherent human biological / survival biases, but we humans are the ones designing the hardware and software that play those same hard-wired roles. I don’t know that we can avoid them sneaking in even if we try to create a (mythical) unbiased, human-developed system.

Hmm. Speaking of human systems and rule sets, Mike, do you think you could encode enough “rules” for us to simulate Stratum I + CP-Red value basin style decision making — a virtual pinoy as it were? *That* feels like a useful construct for social simulations that might lead to better decision making, especially if it could be used as a stepping stone to building “rule sets” for more complex thinking and/or additional value basins. (Assuming red is the “simplest” set of value-based rules…)

Or would a hangry / hangsty child be a better place to start? (I have one of those myself, Jim. When his blood sugar drops, we all pay for it…)


Dr. Wayne Buckhanan

“There are evolutionary programming ideas out there where the code literally rewrites itself”

But even there, some human set up the system by which the code rewrites itself.

“Hmm. Speaking of human systems and rule sets, Mike, do you think you could encode enough “rules” for us to simulate Stratum I + CP-Red value basin style decision making — a virtual pinoy as it were?”

I would categorize Stratum I and CP-Red together. But build one? Who wants a more capable version of a “terrible two,” hehe.

But yes, the idea is good, though it “kinda” exists metaphorically already, and that’s Google. I use it all the time for guidance on everything from locations to drug dosages to currency valuations. All the people I have here have to be told to use it; they never think of it on their own, and when they do, they can’t figure out how to frame the inquiry, so they usually have to be told.

Google itself is constantly rewriting its own “code” (at some level), IMHO, as are Amazon, Wal-Mart, etc.

Armstrong Economics touts “Socrates” as AI…although I suspect it is rudimentary. But I think there is a lot more out there, at early stages. Self-driving is probably Stratum I+ because it has to make procedural decisions, as does Google Maps when you miss a turn: it recalculates the route and does time-to-route modeling.
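
For a sense of what that kind of procedural decision looks like, here is a toy route-recalculation sketch: a tiny made-up road graph, shortest path by travel time, and a replan from wherever you ended up after missing the turn. It is only an illustration of the idea and says nothing about how Google Maps actually works.

```python
# Toy "recalculate the route when you miss a turn" sketch. The road graph
# and travel times are made up; shortest path by Dijkstra's algorithm.
import heapq

ROADS = {  # node -> [(neighbour, minutes)]
    "home":   [("A", 4), ("B", 7)],
    "A":      [("office", 6), ("B", 2)],
    "B":      [("office", 5)],
    "office": [],
}

def fastest_route(start, goal):
    """Return (total_minutes, path) or (None, []) if unreachable."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in ROADS.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None, []

print(fastest_route("home", "office"))  # planned route
print(fastest_route("B", "office"))     # missed the turn, replan from B
```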

But the question is not if but when they become smarter, because, like humans, each developmental level is in some ways a rewrite of the code.

mike
