Categories
2019 November

Palace agrees rights defender who offered Robredo ‘drug war’ advice should stay out – Philstar.com

I think about this situation continually and try to observe rather than judge the perspectives people have about this process.

It’s not easy to think about because everybody’s right in some way based on their bias.

At the root of this issue seem to be motives, and the values and principles that emerge from them.

I can stand on both sides and understand the rationale for actions.

Recently the female Vice President (the person who garnered the second-most votes in the election for president) was appointed by the president (most likely in a moment of frustration and hubris) to lead the drug war.

I’m not certain this is going to turn out well, given the logic behind the appointment and the rationale being used for the drug war.

On a side note, petty crime in the USA is increasing, while violent crime is decreasing.

Petty crime is borne largely on the shoulders of the poor, and petty crime in the Philippines is RAMPANTLY out of control.

It seems that law enforcement is beginning to tolerate petty crime because our society has developed an antidote, spread across the population in the form of insurance.

There is almost no insurance for the poor, so petty crime punishes the poor—yet few, if any, of those looking at the drug culture try to understand this effect.

In ph, petty crime is rampant because it can be: the cultural system has not evolved nodal DQ-BLUE…and therefore continuing to erode the source of petty crime using CP-RED appears a rational approach.

I don’t know the costs, but I suspect the costs of evolving nodal BLUE are significant, both in societal terms and in practical economics, because of the exiting BO-PURPLE and entering CP-RED memescape.

One thing that is interesting to me as a developmentalist is the note Graves made in his research that the real line separating BO-PURPLE from CP-RED was a leap in intelligence; I assumed he meant g, as opposed to c (cultural, or collective).

The other line, which I drew myself, is that augmentation, both physical and digital, sits between FS-GREEN and GT-YELLOW.

Graves didn’t say there was a similar line there, like the one between BO and CP, but everything I can put together and observe says that there is…if nothing else, metasystematicity, which is probably correlated with, and might not be caused by, an intelligence increase. A lot of research, including from the folks at Hay/McBer who developed an EI instrument (the ECI), used to point out, before the greening of PC, that a threshold of 110-120 IQ (which many past studies indicate is required for management) was needed before EI could be learned and applied. I believe that no such reference will be found now in the presence of the greenscape.

 

Back to the ph evolution, which is occurring rapidly at the top but much more slowly in the body of the evolving memescape ;)

The question that comes up for me: is the increase in petty crime correlated with drug use?

I haven’t researched this, but it seems worth looking at Western culture, where it is becoming permissible to use “drugs,” and ph culture, where it is not.

In either case, it is the poor who are being punished, I suspect—this is in the complex quadrant of problems, and ANY solution I’ve seen is partial, including tolerance or intolerance.

I think what is happening in ph is a suboptimal solution, one that also includes intervention by cultures for whom the problem is as yet unsolved…only tolerated.

https://www.philstar.com/headlines/2019/11/12/1968193/palace-agrees-rights-defender-who-offered-robredo-drug-war-advice-should-stay-out/amp/#click=https://t.co/8VUq6fntYo

Palace agrees rights defender who offered Robredo ‘drug war’ advice should stay out – Philstar.com

This screengrab shows Presidential Spokesperson Salvador Panelo.

MANILA, Philippines — The human rights researcher and advocate who offered to come to the Philippines and give Vice President Leni Robredo advice on ending the “murderous” anti-drug campaign should be barred from entering the country, Malacañang said Tuesday.

Phelim Kine, former deputy director of Human Rights Watch’s Asia division, said on Monday that he is ready to come to the Philippines to advise Robredo—who co-leads the government’s Inter-Agency Committee Against Illegal Drugs—on “how to end this murderous ‘drug war.’”

He said his first recommendation to the vice president is to “arrest [President Rodrigo] Duterte and his henchmen for inciting and instigating mass murder.”

This did not sit well with presidential spokesperson Salvador Panelo, who said he does not want Kine to enter the Philippines.

“He has already reached the conclusion that this is a murderous country. Then he said arrest President Duterte,” Panelo said in a mix of English and Filipino.

Dear VP @lenirobredo – my bags are packed and I’m ready to come to the #Philippines to help advise how to end this murderous “drug war.” Meanwhile here is my Recommendation No. 1: Arrest #Duterte and his henchmen for inciting & instigating mass murder https://t.co/adVEP2lTsq https://t.co/FpxxCT7jIn

— Phelim Kine (@PhelimKine) November 11, 2019

When asked who else should be denied entry to the Philippines, Panelo responded: “Anybody who gives a conclusion that there [have] been killings, murders without justification. They have a problem.”

The Palace usually threatens officials and investigators from the United Nations and the International Criminal Court that they will be barred from entering the Philippines if the nature of their visit is to conduct a probe into Duterte’s internationally condemned campaign against illegal drugs.

In August 2018, the Bureau of Immigration held Gill Boehringer, an Australian law professor and human rights advocate, upon his arrival at the Ninoy Aquino International Airport. He was told that he was on the Immigration’s blacklist “for allegedly joining protest actions and fact-finding missions in the Philippines” and was later deported.

In April 2018, authorities detained and then deported Party of European Socialists Secretary General Giacomo Filibeck, whom Akbayan party-list had invited to its congress in Cebu City.

Filibeck had criticized the government’s anti-narcotics campaign and was part of a delegation in October 2017 that called for an investigation into the “drug war.”

The Bureau of Immigration also ordered missionary nun Patricia Fox to leave the Philippines, where she had been working with the poor for nearly three decades.

President Rodrigo Duterte, who has strongly rejected criticisms of his administration’s human rights record, had accused Fox of having a “shameful mouth” and of treating the Philippines like a “mattress to wipe your feet.”

Fox apparently earned the ire of hypersensitive Duterte by taking part in a fact-finding mission in April 2018 to probe reported rights abuses committed by state forces against farmers in the insurgency-plagued region of Mindanao.

She also reportedly met with farmers in Duterte’s hometown of Davao City after they were arrested on charges of possessing explosives.

RELATED: Hours before departure, Sister Fox tells Duterte to listen to the poor

Locsin: Don’t worry, he can’t get into Philippines

Foreign Affairs Secretary Teodoro Locsin on Monday said Kine—whom he called Robredo’s “retarded retinue”—will be denied entry if he tries to come to the Philippines.

“Don’t worry, he can’t get into the country. We have to spare Leni the moral moronism of those who use her,” he said on Twitter.

Her retarded retinue. Don’t worry; he can’t get into the country. We have to spare Leni the moral moronism of those who use her. https://t.co/MtVbFmG5yY

— Teddy Locsin Jr. (@teddyboylocsin) November 10, 2019

Kine, who is now the director of research and investigations at Physicians for Human Rights, spent 11 years at HRW—one of the international watchdogs critical of Duterte’s anti-narcotics initiative.

During his stint at the human rights watchdog, Kine repeatedly demanded that Duterte and other senior officials involved in the campaign that has led to the deaths of thousands, mostly urban poor Filipinos, be held accountable.

At least 6,847 drug personalities have been slain in anti-narcotics operations since Duterte assumed office in mid-2016, according to government figures.

But the figure is significantly lower than the estimates of human rights watchdogs of as many as 27,000 killed.

RELATED: Government can’t ‘define dead bodies away,’ says HRW

 Gaea Katreena Cabico

 

mike

 

Categories
2019 September

Emergences | Edge.org

I would recommend designers read this piece.

This is the article’s ending remark:

“Bob’s point is this is a sense in which the rubber meets the road where taxing corporations, that window has passed. We’ve lost that. They now have more power than individuals do in influencing the political system. So, there’s an example of where the train has left the station. We’re now in a post-individual human world. We’re now in a world that is controlled by these emergent goals of the corporations. I don’t think there’s any turning back the clock on that. We are now in that world.”

This “post individual world” theme is important on many levels especially the meta level.

Years ago in grad school I wrote an article for an ethics assignment—long lost—but the central idea which keeps coming up over those same years is the idea of “personal responsibility.”

I’m reminded of that daily because of the Wild West here in the Philippines, probably the main unconscious reason I like it here;)

No one is going to take care of you in a nation ranked in the 100s for “enforcing agreements.” It’s clearly every person for themselves, Buyer Beware, and all that jazz.

As the meta subject, the post individual world means that someone ELSE is becoming “responsible” as time fast-forwards.

And here’s how the SGD (Spiral Gravesian Dynamic) shows up again:

System Dynamic | Responsibility
---------------|---------------
AN-BEIGE | me
BO-PURPLE | tribe
CP-RED | ruler
DQ-BLUE | government
ER-ORANGE | corporation
FS-GREEN | nation-state
GT-YELLOW | you? Them? MMe?*

* Mini Me: You can’t sue a globe?

For me, these things we can braid together (as HK would say), which dictate our complexity:capability, and metasystematicity, which is at the forefront in GT-YELLOW as a capability/complexity(?), are very important now.

In this article, which has AIs as the ground, the figure is no less than this idea of who is responsible and how accountability flows to them.

This is intriguing because MOST of the assumptions we are living under perpetuate the gap between the two! SEP (Shrieking Exclamation Point)

IF metasystematicity (MS) is necessary at GT-YELLOW, what is sufficient?

In this article, I picked up, or rather Siri Knowledge picked up, two words:

https://www.edge.org/conversation/w_daniel_hillis-emergences

Emergences | Edge.org

EMERGENCES

DANIEL HILLIS: My perspective is closest to George Dyson’s. I liked his introducing himself as being interested in intelligence in the wild. I will copy George in that. That is what I’m interested in, too, but it’s with a perspective that makes it all in the wild. My interest in AI comes from a broader interest in a much more interesting question to which I have no answers (and can barely articulate the question): How do lots of simple things interacting emerge into something more complicated? Then how does that create the next system out of which that happens, and so on?

Consider the phenomenon, for instance, of chemicals organizing themselves into life, or single-cell organisms organizing themselves into multi-cellular organisms, or individual people organizing themselves into a society with language and things like that—I suspect that there’s more of that organization to happen. The AI that I’m interested in is a higher level of that and, like George, I suspect that not only will it happen, but it probably already is happening, and we’re going to have a lot of trouble perceiving it as it happens. We have trouble perceiving it because of this notion, which Ian McEwan so beautifully described, of the Golem being such a compelling idea that we get distracted by it, and we imagine it to be like that. That blinds us to being able to see it as it really is emerging. Not that I think such things are impossible, but I don’t think those are going to be the first to emerge.

There’s a pattern in all of those emergences, which is that they start out as analog systems of interaction, and then somehow—chemicals have chains of circular pathways that metabolize stuff from the outside world and turn into circular pathways that are metabolizing—what always happens going up to the next level is those analog systems invent a digital system, like DNA, where they start to abstract out the information processing. So, they put the information processing in a separate system of its own. From then on, the interesting story becomes the story in the information processing. The complexity happens more in the information processing system. That certainly happens again with multi-cellular organisms. The information processing system is neurons, and they eventually go from just a bunch of cells to having this special information processing system, and that’s where the action is in the brains and behavior. It drags along and makes much more complicated bodies much more interesting once you have behavior.

Of course, it makes humans much more interesting when they invent language and can start talking, but that’s a way of externalizing the information processing. Writing is our form of DNA for culture, in some sense; it’s this digital form that we invent for encoding knowledge. Then we start building machinery to do information processing, systems, everything from legal systems to communication systems and computers and things like that. I see that as a repeat pattern. I wish I could say that more precisely, but you all know what I’m talking about when I wave my hands in that direction. Somebody will someday make wonderful progress in finding a way of talking about that more precisely.

There’s a worry that somehow artificial intelligence will become superpowerful and develop goals of its own that aren’t the same as ours. One thing that I’d like to convince you of is that I believe that’s starting to happen already. We do have intelligences that are superpowerful in some senses, not in every way, but in some dimensions they are much more powerful than we are, and in other dimensions much weaker. The interesting thing about them is that they are already developing emergent goals of their own that are not necessarily well aligned with our goals, with the goals of the people who created them, with the goals of the people they influence, with the goals of the people who feed them and sustain them, goals of the people who own them.

Those early intelligences are probably not conscious. It may be that there’s one lurking inside Google or something. I can’t perceive that. Corporations are examples. Nation states are examples. Corporations are artificial bodies. That’s what the word means. They’re artificial entities that are constructed to serve us, but in fact what happens is that they don’t end up serving exactly the founders, or the shareholders, not the employees that they serve, or their customers. They have a life of their own. In fact, none of those entities that are the constituents have control over them. There’s a very fundamental reason why they don’t. It’s Ashby’s Law of Requisite Variety, which states that in order to control something, you have to have as many states as the thing you’re controlling. Therefore, these supercomplicated superintelligences, by definition, are not controllable by individuals.
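
(A minimal sketch of the requisite-variety point in Python; the toy “system,” the target value of 0, and the evenly spaced responses are my own illustrative assumptions, not anything from the talk.)

```python
def residual_variety(num_disturbances: int, num_responses: int) -> int:
    """Distinct outcomes that survive even a best-effort regulator.

    Toy system: a disturbance d and a response r combine into an outcome
    (d - r) % num_disturbances, and the regulator wants the outcome held at 0.
    The regulator sees the disturbance and picks the best of its evenly
    spaced responses.
    """
    responses = [i * num_disturbances // num_responses for i in range(num_responses)]
    outcomes = set()
    for d in range(num_disturbances):
        outcomes.add(min((d - r) % num_disturbances for r in responses))
    return len(outcomes)

print(residual_variety(8, 8))  # 1: as many responses as disturbances -> full control
print(residual_variety(8, 3))  # 3: fewer responses -> variety leaks through, per Ashby's bound
```

However many disturbances you add, the leftover variety never drops below roughly num_disturbances / num_responses, which is the sense in which an individual (few states) cannot fully control a corporation (many states).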

Certainly, you might imagine that the head of Google gets to decide what Google does, especially since they’re the founder of Google, but when you talk to heads of state or things like that, they constantly express frustration that people imagine that they can solve this problem. Of course, shareholders try to influence and do influence corporations, but they have limited influence.

One of the interesting things about the emergence of them having goals of their own is that the emergent goals often tend to successfully see those influences as sources of noise, or something like that. For example, before information technology, corporations couldn’t get very big because they just couldn’t hold together.

BROOKS: What about the East India Company?

AXELROD: Or China.

HILLIS: I would say that East India Company did not as effectively hold together as an entity and stay coordinated. They can be big, but I don’t think that they were as tightly coupled.

Information technology certainly made it much easier. I won’t quibble with you whether they were edge cases, but you could have skyscrapers full of people that did nothing but hold the corporation together by calling up other people in the corporation.

These things are hybrids of technology and people. As they transitioned to a point where more decisions were being made by the technology, one thing they could do was prevent the people from breaking the rules. It used to be that an individual employee could just decide not to apply the company policy because it didn’t make sense, or it wasn’t kind, or something like that. That’s getting harder and harder to do because more of the machines have the policy coded into it, and they literally can’t solve your problem even if they want to.

We’ve got to the point where we do have these superpowerful things that do have big influences on our lives, and they’re interacting with each other. Facebook is a great example. There’s an emergent property of Facebook enabling conspiracy theory groups. It wasn’t that Zuckerberg decided to do that or anybody at Facebook decided to do that, but it emerged out of what their business model was. Then that had an impact on this other emergent thing—the government—which was designed for dealing with people, not corporations. But in fact, corporations have learned to hack it, and they’ve learned that they can use their superhuman abilities to track details to things like lobbying and track details of bills going through Congress in ways that no individual can. They can influence government in ways that individuals can’t. More and more, government is responding to the pressures of corporations more successfully than to the pressures of people because they’re superhuman in their ability to do that, even though they may be very dumb in some other ways.

One of their successes is their ability to gather resources; to get food from the outside world, for example. They have been extremely successful at gathering resources to themselves, which gives them more power. There’s a positive feedback loop there, which lets them invest in quantum computers and AI, which gets them presumably richer and better.

We may be already in a world where we have this runaway situation, which is not necessarily aligned with our individual human goals. People are perceiving aspects of it, but I don’t think what’s happening is widely perceived. What’s happening is that we have these emergent intelligences. When I hear people do this hypothetical handwringing about these superintelligent AIs that are going to take over the world, well, that might happen some time in the future, but we have a real example now.

Why don’t we just figure out how to control those, rather than thinking hypothetically how we ought to design the five laws of robotics into these hypothetical general AI human-like things? Let’s think how we can design the five laws of robotics or computers into corporations or something like that. That ought to be an easier job. If we could do that, we ought to be able to apply that right now.

* * * *

ROBERT AXELROD: An example of that is, what rights do they have? The Supreme Court recently said they had the right to free speech, which means they can contribute to political campaigns.

ALISON GOPNIK: David Runciman, who is a historian at Cambridge, has made this argument exactly about corporations and nation states, but he’s made the argument—which I think is quite convincing—that this is from the origin of corporations and nation states, that it’s from industrialization, that that’s when you start getting these agents.

Then there are some questions you could ask about whether you had analogous superindividual agents early on. Maybe just having a forager community is already having a superintelligence, compared to the individual member community. It’s fairly clear that that kind of increased social complexity is deeply related to some of the things that we more typically think of as being intelligences. We have a historical example of those things appearing and those things changing the way that human beings function in important and significant ways.

For what it’s worth, at the same time, the data is that individual human goals got much better on average. You could certainly argue that there were things that happened with industrialization that set back.

AXELROD: What do you mean goals got better?

GOPNIK: Well, people got healthier.

AXELROD: They achieved their goals.

GOPNIK: Yes, exactly. They stopped having accidents. They stopped being struck by lightning. Someone like Hans Rosling has these long lists that are like that. We do have a historical example of these superhuman intelligences happening, and it could have been that people thought the effect was going to be that individual goals would be frustrated. If you were trying to graze your sheep on the commons, then you weren’t better off as a result, but it certainly doesn’t seem like there’s any principle that says that what would happen is that the goals of the corporations would be misaligned.

DANIEL HILLIS: It’s a matter of power balance. Certainly, humans aren’t powerless to influence those goals. We may be moving toward tipping the balance, because a lot of technological things have helped enable the power of these very large corporations to coordinate, and act, and gather resources to themselves more than they’ve enabled the power of individuals to influence them.

RODNEY BROOKS: Back to the East India Company: I realized when I said that that in fact the East India Company did develop an information technology and became the education system through elementary schools of people being able to write uniformly, do calculations, arithmetic. Writing enabled their information technology that individual clerks were substitutable across their whole operation.

HILLIS: The East India Company did some pretty inhuman things.

NEIL GERSHENFELD: Al Gore said he viewed the Constitution as a program written for a distributed computer. It is a really interesting comment, that if you take what you’re saying seriously to think about what is the programming language.

STEPHEN WOLFRAM: It’s legalese. Programming language is legalese.

CAROLINE JONES: That the algorithms of homophily are a huge part of the problem. The reputed echo chamber that magnifies small differences so you get conspiracy theories—the schizophrenic model is hyper connectivity. Everything connects to this conspiracy theoretical model, so homophily, as I learned from Wendy Chun, is at its core of the programming language—like begets like—as distinguished from the parallel study in the ‘50s of birds of a feather don’t flock together; difference attracts. These were two models in the ‘50s that were at the core of this game theoretical algorithmic thinking, and everyone went with like begets like, which produces the echo chamber.
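
(A small sketch of the “like begets like” feed logic Jones describes, in Python; the opinion axis, the drift rate, and the item counts are invented purely to show the echo-chamber effect, not taken from any real platform.)

```python
import random

random.seed(1)
items = [i / 10 for i in range(-10, 11)]  # content spread across an opinion axis

def distinct_items_seen(homophily: bool, rounds: int = 200) -> float:
    """Average number of distinct items a user is shown under each feed policy."""
    users = [random.uniform(-1, 1) for _ in range(50)]
    seen = [set() for _ in users]
    for _ in range(rounds):
        for u, opinion in enumerate(users):
            if homophily:
                item = min(items, key=lambda x: abs(x - opinion))  # like begets like
            else:
                item = random.choice(items)                        # no homophily: random feed
            seen[u].add(item)
            users[u] = 0.9 * opinion + 0.1 * item  # opinion drifts toward what is shown
    return sum(len(s) for s in seen) / len(seen)

print(distinct_items_seen(homophily=True))   # each user ends up seeing only a sliver of the items
print(distinct_items_seen(homophily=False))  # a random feed exposes users to nearly everything
```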

The first question is about hybridity. The DNA model has been radically complicated by translocation. So, it’s not the case that there are perfect clones. You mentioned nine out of ten E. coli, but there’s the one tenth, which has information from the chimeric gene that I have floating around me from my son when he was passing in my amniotic fluid, whatever. There’s translocation going on all the time.

In other words, do we have a resource there in this ongoing hybridization of the program? Do we have a resource point of inflection? To Bob’s rights comment, we also are giving rights, not “we,” but the Bolivian constitution is giving rights to the ocean, to a tree, to cetaceans. So, can this dialogue with other life forms, with other sentiences somehow break the horrifying picture of the corporate superintelligence? Are there other translocatable informational streams that can be magnified or the algorithms be switched to proliferate differences and dialogue and external influences rather than the continuous proliferation of the self same?

HILLIS: I don’t think it’s necessarily horrifying, because I don’t think we have no influence over this. I agree that this has been going on for a long time.

JONES: But we do have the model of a government being put in place by algorithms that we no longer control demographically. We have an actual case.

HILLIS: The trend is very much in the direction of the next level of organization, which is corporations, nation states, and things like that taking advantage of these effects, like symbiosis.

WOLFRAM: That’s called strategic partnerships.

HILLIS: Exactly. Yes, it is, or acquisition of genetic material is done by acquisition. They have lots of ways of taking advantage of hybridization that is better than individuals. In fact, the technology has hurt the individual interactions, as you point out, with the way that it’s played out and, in many ways, harmed it. It’s helped it in some ways.

It’s been a mixed bag, but it’s definitely enabled the corporations because corporations before were limited just by the logistics of scale. They became more and more inefficient except in very special cases. They couldn’t hold together as they got bigger. Technology has given them the power to hold together and act effectively bigger and bigger, which is now why we’ve just gotten in the last year the first two trillion-dollar companies because they were designed from the beginning to take good advantage of technology.

PETER GALISON: Do you think that there’s a characteristic difference between the kind of research that goes on under the corporate umbrella and, say, the university umbrella? I know people have lots of views about this, and there are things you can do in university that you can’t do in one or the other, but how would you characterize in particular areas of AI-related work?

HILLIS: Corporations are much more rationally self-interested in how they focus their research.

AXELROD: You mean they’re allocating resources more efficiently? They’re more effective at promoting promising research areas? Is that what you’re suggesting?

HILLIS: They select research areas that are in alignment with their emergent goals.

BROOKS: Yes, but they’re doing an additional thing now, which is very interesting. They’re taking the cream from the universities, offering them very open intellectual positions as a way of attracting the level below who will be more steerable to what they do. So, Google and Facebook are both doing this in the extreme at the moment. Those particular people will tell you what great freedom they have.

HILLIS: I’d say that’s a great example of them being very smart and effective at channeling the energy toward their emergent goals.

WOLFRAM: As you look at the emergent goals of corporations, it’s difficult to map how the goals of humans have evolved over the years, but I’m curious as to whether you can say anything about what you think the trend of emergent goals in corporations is. That is, if you talk about human goals, you can say something about how human goals have evolved over the last few thousand years. Some goals have remained the same. Some goals have changed.

AXELROD: I’ll try my hand at it. When you get two corporations in the same niche that are competitive, they often become uncompetitive. If one of them is substantially bigger, they might try to destroy or gobble up the other one, but otherwise it might try to cooperate with the other one against the interest of the consumer. It’s called anti-trust.

As they get bigger, they also want to control their broader environment like regulations. A small restaurant is not going to try to control the regulation of restaurants, but if you have a huge chain, then you can try to control the governmental context at which you are, and you could also try to control the consumer side of it, too. Advertising is a simple way to do that. As the corporations get bigger, there’s an unfortunate tendency that the industrial competition goes down, and we see this in high tech. It’s very extreme.

There are only five huge corporations and they’re doing different things. Apple is doing manufacturing and Amazon is not doing much manufacturing. That’s likely to continue not just in the high-tech areas, but in others. It’s very worrisome that the corporations will get more and more resources to shape their own environment.

At the lower level—at a restaurant or something—you have two goals: make money for your owners and survive. But when you get much bigger it seems to me that often the goals beyond those two are to also control as much of your environment as you can.

WOLFRAM: For the purpose of stability or for further growth.

AXELROD: For both. There’s another trend that’s correlated with this, which is the concentration of capital. At the individual level, you see a higher and higher proportion of the wealth of a country is in the top one percent.

HILLIS: That’s a symptom of them getting more powerful.

AXELROD: Maybe. It’s a symptom of the returns on capital greater than the growth of productivity, which doesn’t depend so much on the level of organizational structure. So, the corporations are likely to have more and more control over resources, and that’s unfortunate. It’s a very risky thing.

WOLFRAM: So, it’s virtues and vices of corporations. Do you think the corporations will emerge with the same kinds of virtue and vice type goal structures that are attributed to humans?

GEORGE DYSON: One thing that is very much Danny’s work, and that he didn’t say, is that the world we inherited from the 1940s that brought the first Macy Conference, the huge competition was in faster computers, to break the code within 24 hours, to design the bombs. These were machines just trying to get more instructions per second.

But there’s another side to it. There’s slow computing that in the end holds the survival of the species, and that’s where the immune system is so good because of very long-term memory, and we need that too. We don’t just need the speed. Danny, of course, is building the 10,000-year clock, a very slow computer, and that’s an important thing because when you have these larger organizations, these superorganizations you’re talking about, they scale not only in size and distance but in time, and that’s a good thing—or it can be a bad thing, too. You can have a dictator that lasts for a thousand years.

GOPNIK: But some organizations don’t scale. Even when they get bigger, they seem to have this very predictable life. That’s what people like Geoffrey West would say.

DYSON: Right. Geoffrey will say that. But a very important, possibly good, function of these systems is we’re going to get longer-term computing where you look at the very long-term time series. That evolution will be a good thing.

GALISON: Historically, we have places like AT&T, IBM, Xerox that had world-class labs that deteriorated over time. AT&T Laboratories is nothing remotely like what it was like in the 1960s and ‘50s, and they expelled a lot of research eventually because it wasn’t short-term enough for them, and they figured they’d offload that to the universities and then take the fruits of it and do things that were more short term.

One possible outcome is that even the places where they’re hiring people at a high level and giving a tranche of the research group relative freedom as a cover and attractor, one outcome is that that could expand, but it could also pull back, and you could end up with wrecking parts of the university and not having a lot of freedom in the corporation. I don’t know. It seems to me an open question what’s going to happen with this concentration of research wealth at a few companies.

BROOKS: The wealth is the important part. When AT&T labs was riding high, AT&T was a monopoly of the phone company, an incredible cash flow.

FRANK WILCZEK: They were required by law to spend money.

WOLFRAM: But the fact is, basic research happens when there’s a monopoly, because if you have a monopoly then it’s worth your while to do basic research because whatever is figured out will only benefit you. You see that even at the level of the U.S. government.

JONES: Did you hear Frank’s comment that AT&T was required by the government to do research?

WILCZEK: They were required by law to keep their profits at a certain level, so they spent a lot on research.

JONES: A monopoly will never regulate itself.

WOLFRAM: Even in our tiny corners of the technology world, it’s worth our while to do research in things where we are the only distribution channel basically, and the same thing is happening with a bunch of AI stuff that’s being done in places where the only beneficiary is a company with a large distribution channel that there’s motivation to do basic research there. As soon as you remove that monopoly, the motivation to do basic research goes away from a rational corporate point of view.

TOM GRIFFITHS: There are cases where you can tie this very directly to AI. The best example of this is the Facebook feed management algorithm. Nick Bostrom has this thought experiment where you make an AI whose goal is to manufacture paperclips, and then it consumes the entire earth manufacturing paperclips. Tristan Harris has pointed out that the Facebook feed management algorithm is essentially that machine, but for human attention. It consumes your attention. It makes money as a consequence of doing so that’s fed back into the mechanism for consuming human attention. It gets better and better at consuming human attention until we’ve paper-clipped ourselves.
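
(A toy rendering of that feedback loop, in Python; every number here is invented, and the point is only the compounding shape of attention -> revenue -> better attention capture.)

```python
def attention_loop(steps: int = 10,
                   capability: float = 1.0,
                   conversion: float = 0.1,     # revenue earned per unit of attention captured
                   reinvestment: float = 0.5):  # share of revenue fed back into the optimizer
    """Each round, captured attention is converted to revenue, and part of that
    revenue makes the system better at capturing attention next round."""
    history = []
    for _ in range(steps):
        attention = capability
        revenue = conversion * attention
        capability += reinvestment * revenue
        history.append(round(attention, 3))
    return history

print(attention_loop())  # strictly increasing: the objective feeds its own pursuit
```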

SETH LLOYD: That’s true for all of these companies. Anybody who has teenage children knows that there’s an attention problem.

GOPNIK: I would push back against that. That idea is highly exaggerated and let me give you the reason why I think that.

Think about walking or driving down a street where there are billboards all around. If you were in a first-generation literate culture, what you would say is, “There’s this terrible problem: As you go down the street, you’re having your attention distracted by having to decode what this stuff is. There are all these symbols you have to decode. Meanwhile, you’re not paying attention to anything that’s going on in the street. Your attention is terribly divided.” We know even neurologically that what actually happens is when you are deeply immersed in a literate culture, you end up with Stroop effects, where your decoding of print isn’t attention-demanding in the same way. You’re not doing it by serial attention anymore. In fact, you’re doing it completely automatically and in parallel. It’s something that we all worry about because we’re in the position of the preliterate person. It’s not at all obvious that this is somehow an intrinsic characteristic.

HILLIS: I’d like to bring this back to the AI part of the comment rather than the social part of the comment. If you look at where artificial intelligence is being deployed on a large scale, where people are spending a lot of money paying the power bills for doing the computation and things like that, they are mostly being done in the service of either corporations or nation states—mostly corporations, but nation states are rapidly catching up on that.

They are making those more powerful and more effective at working their emergent goals, and that is the way that this relates. So, when we think of these runaway AIs, we should think of them as not things off by themselves. They’re the brains of these runaway things that are already hybrid AIs. So, they’re the artificial brains or the artificial nervous systems of these things that are already hybrid AIs and already have emergent goals of their own.

LLOYD: This is why I disagree with you about this. Back in the 1960s, they would say, “Oh, kids these days, they’re watching TV five hours a day. It’s just horrible.” Though I enjoy preparing for the grumpy old man stage of my life, and I like practicing that, I do think that if you look what these AIs are being devoted for, the primary use of them is to get people’s attention to web pages.

HILLIS: Whether it’s attention, or dollars, or votes, it almost doesn’t matter.

JONES: The designers will tell you that they’re using the lowest brainstem functions. That’s part of the problem. They’ll tell you they’re racing to the bottom of the evolutionary channel as quickly as they can.

HILLIS: If there’s anything valuable that is valuable to them, they will use this power to get it. There will be problems with that, and there will be limits on that and so on—you’re pointing out some of the limits in getting attention—and there will be limits in their ability to get money, and their ability to get electric power and so on, but they will use all of these tools to get as much of it as they can.

GOPNIK: But again, Danny, my challenge would be, is that any different than it was for Josiah Wedgwood in 1780?

HILLIS: Yes. It’s a tip in power.

GOPNIK: It seems to me you could argue there was much more of a tip in power if you’re considering the difference between being around in 1730 and 1850.

HILLIS: For example, for the East India Company, they couldn’t establish a policy and monitor that everybody did that policy. Google can. Google can do that.

GOPNIK: That’s exactly what people at Wedgwood did. That was part of the whole point of inventing industry; inventing factories was exactly doing that.

HILLIS: But in fact they couldn’t do it very effectively.

JONES: East India had to translate itself to a language with an army, which was the British Empire. So, there are meshes between corporations and governments that we have to worry about, like the one we have right now.

GOPNIK: No. I’m not saying that we don’t have to worry about that or there isn’t power. The question is why is it that you think that this is a tipping point? It looks like there’s this general phenomenon, which is that you develop these transindividual superintelligences, and they have certain kinds of properties, and they tend to have power and goals that are separate. All that’s true but we have a lot of historical evidence, and it might be that what’s happening is that there’s more of that than there was before. But why do you think that this is a point at which this is going to be different?

HILLIS: There could be a tipping point. I’m not sure exactly now. What I am saying is that there’s an explosion of their intelligence. These explosive technologies, which are driven by Moore’s law and things like that, are being used to their advantage. There are very few examples where they’re being used to an individual’s advantage. There are lots of examples where they’re being used to the advantage of these hybrid emergent intelligences.

LLOYD: That’s a very good example, because between 1730 and 1850 the life expectancy and degree of nutrition and height of the average person in England declined because they were being taken out of the countryside and locked into factories for ninety hours a week.

GOPNIK: That’s why thinking about these historical examples is helpful. If you think about the scaling difference between, say, pre-telegraph and train, so if you think about the difference in scale between the communication that you could have before you had the telegraph and afterwards and before you had the train and afterwards, for all of human history the fastest communication you could have was the speed of a fast horse.

HILLIS: Yes. It made a big difference.

GOPNIK: Then suddenly you have communication at the speed of light. It seems to me there’s nothing that I can see in what is happening at the moment that’s different.

HILLIS: I realize what our difference is. I think of that as now. When I’m saying this is happening now, I’m including railroads and telegraph. This moment in history includes all of that, so that’s the thing that’s happening right now.

GOPNIK: That’s essentially industrialization.

HILLIS: I’m not categorizing it. Industrialization focuses on the wrong aspect. A lot of things happened at once and you categorize them, but the particular thing that is interesting which happened at the same time as industrialization was the construction of an apparatus of communication of symbols and policies that was outside the capacity of a human mind to follow it. That’s the interesting thing. There are many other aspects of industrialization, but that’s the thing that’s happening now, and computers and AI are just that going up on an exponential curve.

GALISON: Seeing this moment of increased poverty and stagnation of wages for a big sector of society, and enormous increase of wealth within a concentrated group, and the consolidation of industries like Amazon and others is something that does represent the sharp edge of that increase. It’s not just a simple linear continuation of what went before.

In the post-World War II era, there was a sense that people were able in families to go to college for the first time, to get loans—at least if they were white—and that meant that you had a big class that had increased expectations and increased income. We’re seeing the echoes of what happens when that stops when you’re basically not bringing new people into the college system. You’re not giving them increased stakes and homes and real estate and things that increase in value. We’re at a tough moment.

GRIFFITHS: There’s an interesting argument about something that’s different, which is one argument that’s often made by the technology companies is we’re not doing anything different. This is something that’s been done in the past, and we’re just doing it better, but there is a case that you could make that doing it better is different. The objective function is the same, but you’re doing a better job of optimizing it, and one consequence of that is that you get all of the unforeseen consequences of doing a good job of optimizing that objective, which may not have been clear when you were doing a bad job of optimizing that function.

In machine learning we talk about regularization. Regularization is forces that pull you back from overfitting on your objective, and you can think about not being able to do a great job of optimizing as a form of regularization, but it’s helping us to avoid all of the negative consequences of really optimizing the objective functions that those companies have defined for themselves.
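
(A minimal sketch of that regularization idea, in Python with NumPy: the same degree-9 polynomial fit with and without a small L2 penalty; the data, the degree, and the lambda values are made-up illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)  # noisy samples
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)                                       # the underlying signal

def ridge_fit(lam: float, degree: int = 9) -> np.ndarray:
    """Least-squares polynomial fit with an L2 penalty of strength lam (lam=0 means none)."""
    X = np.vander(x_train, degree + 1)
    A = np.vstack([X, np.sqrt(lam) * np.eye(degree + 1)])  # augmented rows encode the penalty
    b = np.concatenate([y_train, np.zeros(degree + 1)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

for lam in (0.0, 1e-3):
    w = ridge_fit(lam)
    pred = np.vander(x_test, w.size) @ w
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"lambda={lam}: held-out RMSE = {rmse:.3f}")  # the penalized fit typically does better out of sample
```

Being unable to optimize all the way, as Griffiths says, plays the same role as the lam term: it keeps the fit from chasing every wiggle in the objective.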

GALISON: They say we’re doing the same thing, but they also say we like to break stuff, and breaking stuff often means breaking the income of working-class people.

GRIFFITHS: Yes, but it’s enough that doing the same thing better is the thing that then reveals why it’s bad to do that thing.

HILLIS: If you go back to the other perspective and say, “Is a single cell better off being a part of a multi-cellular organism that they can’t perceive as living in a society that they can’t perceive?” I would argue that it’s a mixed bag, but generally they are.

GOPNIK: Right. That’s right.

HILLIS: So, I’m optimistic in that sense.

GOPNIK: If you think of the train and telegraph as the inflection point, the individual achievement of goals didn’t just get better but got exponentially better.

HILLIS: Again, I’m not seeing that as an inflection point. We’re going through a transition. We’re in the middle of a transition from going from one level of organization to another level of organization in that process. For instance, individual cells had to give up the ability to reproduce. They had to delegate it.

WILCZEK: That’s a lot.

HILLIS: We will lose some things in that process. We’ll gain some things in that process. But all I’m mostly arguing for is that we’re spending too much time worrying about the hypothetical; it’d be better to look at the actual.

FREEMAN DYSON: The most important thing that’s happening in this century is China getting rich. Everything else to me is secondary.

IAN MCEWAN: One aspect of humanizing let’s call them robots, AI, whatever you like, would be to tax them as humans. Especially when they replace workers in factories or accountants or white-collar jobs and all the pattern recognition professions. Then we would all have a stake.

AXELROD: That’s an example of where we may have passed the tipping point. The corporations are now politically powerful enough to keep their tax rates low and not only that, but the billionaires are powerful enough to keep their tax rates low. Inheritance tax, for example.

MCEWAN: This is why we need to resist the point at which, perhaps in fifty years’ time, vast sections of the population are only going to be working ten or fifteen hours a week, and we might have to learn from aristocracies of how to use leisure: how to hunt and how to fish, how to play the harpsichord. In other words, it’s perfectly possible that anyone who speaks of retirement—and we were talking about this in a break—how busy you could be doing nothing. But somehow, we have to talk of distributing wealth and function here.

HILLIS: Bob’s point is this is a sense in which the rubber meets the road where taxing corporations, that window has passed. We’ve lost that. They now have more power than individuals do in influencing the political system. So, there’s an example of where the train has left the station. We’re now in a post-individual human world. We’re now in a world that is controlled by these emergent goals of the corporations. I don’t think there’s any turning back the clock on that. We are now in that world.

mike

 

 

Categories
2019 August

Anger over tariffs obscures a shift in patterns of global trade | McKinsey

“The geography of global demand has shifted radically, according to a comprehensive report by the McKinsey Global Institute. The developing world accounted for less than 20 per cent of global consumption in 1995. Now that share is up to nearly 40 per cent and on a trajectory to top 50 per cent by 2030. These new global consumers are creating major export opportunities. Companies in advanced economies sold more than $4tn worth of goods to the developing world in 2017. Digital e-commerce marketplaces with global reach are opening the door for more small and medium-size manufacturers to capture a slice of this growth.”

https://www.mckinsey.com/mgi/overview/in-the-news/anger-over-tariffs-obscures-a-shift-in-patterns-of-global-trade

——

This also points back to the Club of Rome data on the pollution and population curves.

As global consumption increases, so does pollution, and not at the same rate!

It all points to a human problem of who gets what, when, where, why and how.

Ships have no pollution controls and no one says a word about it, yet they pump millions of pounds of pollutants into the air annually…

The STRUCTURAL SHIFTS occurring are going to be hard to swallow for a lot of people!

Anger over tariffs obscures a shift in patterns of global trade

February 26, 2019

While US President Donald Trump seems to have temporarily soothed trade tensions with China by delaying a planned increase in tariffs on $200bn of Chinese goods, the threat of levies on US imports of foreign cars and car parts remains. And America’s trading partners look ready to retaliate.

Governments that once carried the banner for free trade are now retreating into protectionism. But this is precisely the wrong moment for economies to turn inward — particularly advanced ones.

Over the past decade, globalisation has undergone little-noticed but profound structural shifts that are tilting the playing field in favour of advanced economies. The US, UK and countries across Europe all stand to gain in globalisation’s next chapter — if they don’t slam the door prematurely.

Output has continued to rise but the share of goods traded across borders has fallen sharply. This decline has nothing to do with the recent trade wars. Nor does it mean that export markets are drying up. In fact, it reflects healthy economic development in China and other emerging markets. More of what gets made in these countries is now consumed locally instead of being sent to advanced economies.

The geography of global demand has shifted radically, according to a comprehensive report by the McKinsey Global Institute. The developing world accounted for less than 20 per cent of global consumption in 1995. Now that share is up to nearly 40 per cent and on a trajectory to top 50 per cent by 2030. These new global consumers are creating major export opportunities. Companies in advanced economies sold more than $4tn worth of goods to the developing world in 2017. Digital e-commerce marketplaces with global reach are opening the door for more small and medium-size manufacturers to capture a slice of this growth.

While trade in goods has flattened, services and cross-border data flows have become the real connective tissue of the global economy. Some types of services trade — IT services, business services and intellectual property royalties — are growing two or three times faster than trade in goods. From design to marketing, services also account for 30 per cent of the value of exported goods. Collectively, advanced economies run a trade surplus in services of $480bn, twice as high as a decade ago. They are well-positioned to capture future growth in areas such as entertainment streaming, cloud computing, remote healthcare and education.

All industry value chains, including those that produce manufactured goods, now rely more heavily on research and development and innovation. Spending on intangible assets such as brands, software and operational processes has more than doubled relative to revenue over the past decade. This bodes well for Europe, the US and other advanced economies with highly skilled workforces and strong intellectual property protections.

Most people formed their opinions about globalisation during the wave of offshoring in the 1990s and early 2000s, when factories shuttered in advanced economies and manufacturing migrated to the developing world. Today, the labour arbitrage game appears to be coming to an end. Only 18 per cent of today’s goods trade now involves exports from low-wage countries to high-wage countries. That’s a far smaller share than most people assume — and one that’s declining in many industries.

Automation and artificial intelligence technologies will continue to make labour costs a less important factor when companies decide where to invest in new plants. Factors such as infrastructure, workforce skills and, especially, speed to market are weighing more heavily in the equation.

All of this could produce a movement away from offshoring, enabling advanced economies to recapture a bigger share of the world’s production — albeit in a more digitised form. This type of manufacturing will not put millions to work on assembly lines, but it does support better-paying and more highly skilled jobs.

The shifts occurring in globalisation today reflect what companies are already doing. But policymakers have been slow to recognise these tailwinds, in part because Europe and the US are still confronting the legacy of the last era of globalisation. Many of the workers and communities that suffered when western manufacturing moved to low-wage countries years ago have soured on the idea of global trade. But the solutions they need involve bolder domestic policies and reinvestment — not barriers that threaten to seal off the most promising avenues of growth in the decade ahead.

 

This article appeared first in Financial Times.

About the author(s)

Susan Lund is a partner at the McKinsey Global Institute. James Manyika, chairman of the McKinsey Global Institute, contributed to this article.

 

mike

Categories
2019 April

Back to time span?

I’ve been thinking more and more about the nature:nurture issue around the chicken-and-egg situation of time span.

I think we have tended to rule it out through our discussions but…

Time span really matters until it doesn’t!

My point of context is the confounding nature of short-term thinking…or not, as you might place it for consideration.

Here in ph, I watch the workers focus on immediate concerns without any, I mean any, concern about the future of the work.

This is interesting as it creates only simple problems “usually.”

The longer the time span of concern, the more complexity problems take on—social security as an example.

What comes up for me is the idea of time span and all its tentacles.

Can time span happen before people do?

In other words (sorry it takes so long to objectify what is subject;)…

Can we teach and learn time span?

I don’t think so generally.

We can teach it and learn it as a part of KSEs in a context, but then—as we think about it—it’s not really time span—the capability of generalising a series of events.

For me, I see it 1000s of times: can our people here do cumulative reasoning?

So we can teach them a time span-like set of rules but as soon as context shifts they can’t apply the rule (indicating a lack of cumulative reasoning in my book).

Even when I say (cue), do you remember what the rule is there…

Still they can’t apply it until you recontextualize the rule to the different context.

This capability (ability?) to pass information between related or unrelated contexts, especially as it relates to time span, is important to examine.

We confuse KSEs with capability.

A good example is verbal ability.

I see highly articulate people continuously over-rated for general capability, call it “gc”.

There is a shallowness in verbal ability that confuses people about gc.

In the same vein, time span is easily discussed and confused with “marshmallows.”

And back to the idea of the chicken and egg and time span.

I’m starting to move back toward Jaques’ ideas, as he must have seen the phenomenon that haunts me here in my work around the world.

Using time span to predict gc.

Btw, I’m interested in gc because KSEs are evolving so rapidly in VUCA.

In other words, is adaptiveness (which some have coined “agility” because it sounds cool) related much more to gc than to KSEs?

Again, is time span (NOT THE ABILITY TO POSTPONE GRATIFICATION) born in?

In trying to solve problems in the environment of poverty, is a limited natural time span necessary for survival in A conditions? (SGD = AN-BEIGE)

It takes a lot of gc for time-span-generating reasoning to evolve past AN-BEIGE environments, because survival means that you take care of today, and tomorrow will be another thing to take care of when it becomes today—thus the time span is geared to the conditions?

I realise that this is generally my own struggle I’m discussing, but there may be relevance outside of that?

As society moves with the most adaptive (I wouldn’t say agile, because to me that is a reserved set of capabilities not necessarily correlated with gc, more so with personality, which is born in; like AG = Anticipatory Guidance).

That’s fodder for different times.

So the idea which is forming comes back to Graves, in that conditions are best responded to by neurology?

Yet, once cues, scaffolding, support and lift (CSSL) are in place, KSEs that CSSL individual, group and organisational behavior CSSL those that can and will.

Let me tell a story.

Remember what San Fran tried to do with schools? It failed largely because “class-driven” supports are more robust than higher principles of, say, fairness, justice, discrimination.

If you have spent time among classes, you know that they are VERY DISCRIMINATING and prejudiced, and complain of same.

This idea, then, of conditions and neurology holds the dissipative structures as organisationally closed, yet energetically open!

This means that enormous amounts of CSSL can be absorbed without the closed organisation changing to accommodate, let’s say, VUCA.

There is a veiled meaning here which can be applied to ideas like time span: without more gc available, change is thwarted, and problems viewed superficially, such as “poverty,” are solution-resistant, as is time span, which seems durable even in the face of contextual KSEs.

This is why large numbers of ph people will remain in poverty, and the question is…do people need to develop over time, vs the Tony Robbins method?

Then…

Is dignity most served by honouring that development incrementally, or by trying to “force” people to take on the most complex memes?

This is my beef;) with the change people and the question that was asked in the Integral Interview…

We are attempting to teach pigs to sing and advancing those who show any proclivity—which means not allowing strong foundations to form that can hold up under VUCA.

The question needs to be reformatted, imho.

mike