We Already Know a Lot About Apple’s September 12 Event, and Apple Likes That

There was a time when Apple, especially Steve Jobs, would spring a surprise on us at an event, and we were delighted. Times are too complex for that now.

2018 iPhone mockups
Intentional leaks help us size up the 2018 iPhones.

The Particle Debris article of the week is from Engadget.

I cite that article not for its drama, depth of analysis, or excellence in tech writing, although it’s an excellent, comprehensive article. No, I point to it because it nicely encapsulates everything we think we know about what Apple will present during its September 12th event.

And that’s fairly amazing at this point.

Of course, this article could be wrong in places. We’re never absolutely sure in advance what Apple will roll out. Plus, the Apple echo chamber can create artificial supporting evidence. Even so, over time, multiple, credible sources start to paint a picture that stitches together a reasonable new product story.

How Does Apple Feel About Leaks?

My theory about this annual oozing of intel into the Apple community is that it is intentional. The reason is that the pace of life no longer allows customers to watch a two-hour event, digest the implications, mentally integrate the new products into their lives (and finances), and then rush into an online purchase in the middle of the night.

Instead, Apple warms us up by degrees. We start to get our heads around all the new products for weeks in advance. Piecemeal, we come to understand why we may wish to spring for one of the new iPhones. Or Apple Watch. Or iPad (Pro).

This process effectively primes the pump of our imagination, informs us, and brings us around to a tentative purchase decision. When the event actually takes place, we hear the news, and we’re full of confidence. Then we can “favorite” our choice in the online store so we’re ready on order day. It’s a process.

This serves also to ensure dramatic, initial demand for all the new products. There’s nothing like finding out that everyone else is eagerly informed and on the bandwagon to make us technically insecure and then spur us into action.

This is why I believe Apple leaks the juiciest details early. It’s the modern, Tim Cook approach for modern, hectic times.

Next Page: The News Debris for the week of August 27th. Tech, AI, human irrelevance and tyranny.

8 thoughts on “We Already Know a Lot About Apple’s September 12 Event, and Apple Likes That”

  • So many things to say in response to Yuval Harari’s referenced article. I’ll just ramble around in a somewhat haphazard, hit-or-miss rumination…

    1. Other than the attention-grabbing pronouncements of Ray Kurzweil and his brethren, no one has really proven the inevitability of the Singularity, capital S – the day when machine intelligence supplants human intelligence, rendering the latter redundant, sub-optimal, and dispensable.
    Most of the writing and commentary I’ve read about AI falls into two categories. The first says AI can never achieve human-like intelligence for a host of reasons; the one I find most critical is that the ability to feel psychic and physical pain and pleasure is an integral part of human intelligence, and until someone finds a way for non-organic circuitry to feel pain and pleasure, I’ll treat the Singularity as a pipe dream. The second category assumes that human-like AI is attainable and then imagines what the world would be like after that momentous achievement. This is where we find all the utopian and dystopian AI musings.

    2. Nevertheless, even if we rule out super-intelligent machines, AI presents a new frontier of perils for humanity. The danger is not a race of AI entities doing something bad to us; it’s what some of us might do using AI. Harari’s article talks about the latter. As fearful as the AI future might be, the problem is not AI per se but how people put it to use.

    3. And right now, what most AI deployments are doing, whether purposely or inadvertently, is something I’ll call the abdication of moral responsibility. Facebook, Google, and their ilk want to outsource the monitoring of their content to algorithms. The financial industry has already outsourced trading decisions to computers. The nascent self-driving car industry wants to transfer the burden of deciding “whom do I kill when I try to avoid a crash?” from me to a computer. In short, AI allows private citizens and companies (oh, they’re also citizens, btw /s) to convert events that used to result in moral and legal liability for them into force majeure or Acts of God. Don’t blame me; the algorithm did it. This is really nothing new; it’s just the latest incarnation of the highly lucrative practice of privatizing profit while socializing losses.

    4. A necessary, though I doubt sufficient, solution is that the legal system must be updated so that it clearly defines a legally responsible party if an AI system causes injury, not just to individuals but to society as a whole. Of course, the problem is that the legislative and judicial process is notoriously slow, if not downright impotent. It took decades to even begin getting tobacco companies to give back some of that privatized profit to compensate for socialized losses. Fossil fuel companies are still successfully socializing losses.

    5. On a different note, the liberal democratic societies have yet to figure out the effect this unprecedented deluge of information from all corners of the globe has on society’s well-being. In general, the liberal democracies live by J.S. Mill’s prescription, neatly summarized as “the remedy to hateful speech is more speech,” i.e., our abiding faith that an open, uninhibited public discussion always leads to society opting for the just and moral choice.

    But let’s look at how this used to work in practice before the information revolution. Some crackpot gets on a street-corner soapbox and says, ‘I think we should kill all pets. They are bad for society.’ Well, his audience is small, and there is virtually zero chance that he gets any like-minded adherents who would turn a nuisance idea into a dangerous movement. For any idea to gain mass circulation, it has to go through successive filters, from a local street-corner assembly through ever larger audiences: a town hall meeting, say, then local media, regional media, and national media. Most of the crazy and antisocial ideas get snuffed out in the process, not by governments, but by normal, sensible people in responsible positions (editors, town hall meeting facilitators, etc.) who use their judgement either to say ‘this is garbage’ and shut down that voice, or to invite discussion that would counter and neutralize it. The filtering system isn’t always perfect, but it has worked well enough.

    Nowadays, that filtering process is gone. The internet allows any old crackpot or worse, a truly malevolent character, to go to a national audience instantaneously, reach the few like-minded people, and thus become a potent demagogic, negative political force that appeals to base instinct before society’s better angels are able to mount an effective counter-argument that appeals to the deliberative, ethical-logical side of our brain (or mind). Some might notice parallels to a System 1 vs System 2 process here, or mammalian brain versus prefrontal cortex dynamic.

    In short, the information revolution has allowed anti-democratic forces to harness the tools and practices of liberal democracy to reach our baser instincts, and use it to tear down liberal democracy. And AI, a tool that like any other tool, is incapable of making moral judgements, will only make things worse if we allow it to be misused and abused.

    Yeah, it’s too long.

  • I don’t think the leaks were intentional at all. An intentional leak from Apple is a “yep” or a “nope” from Jim Dalrymple. The leaked photos of the new iPhones and Apple Watch smack of a tired, overworked Apple employee screwing up.

    Old UNIX Guy

  • John:

    Where to begin? Yes, I’m talking about Yuval Harari’s piece in The Atlantic. While I disagree with Harari’s analysis of humankind, whether in ‘Sapiens’, ‘Homo Deus’ or ’21 Lessons’, one virtue of his writings is his intellectual honesty in pointing out both speculation on his part, as well as examples from history that contradict his generally dire predictions. For example, in his Atlantic piece, he writes, “Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded”. However, he goes on to opine, “But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills. And we don’t know of any third kind of skill—beyond the manual and the cognitive—in which humans will always have an edge.”

    Let’s dissect this specimen for a moment before taking on his broader theme. First, he says that ‘AI is different’ because now ‘they’, presumably machines powered by AI, are ‘beginning to compete with us in cognitive skills’. Note, ‘beginning to’, and not ‘are competing with’, as in AI currently having skill sets and capacity comparable to humans. Readers of your PD will note that there are many different forms of AI, but as Nick Heath’s article you cited last week, as well as many AI-related articles cited in previous PDs, have all argued, even the best to date are narrow in ability, capacity and so-called intelligence. None, I’ll repeat, none of the AI or AGI candidate technologies have a broad or general intelligence remotely approaching anything human. Heath cites the example of asking AI about Chinese restaurants, which any AI can look up and proffer; a human, however, would know additional, related information about China that might inform such a choice, including regions, history, geography, culture and their influence on the regional cuisine, such as ‘Hunan’ vs ‘Szechuan’, and beyond that perhaps the current geopolitics that might provide conversation to accompany said meal.

    In other words, intentionally or not, Harari is engaging in a bit of intellectual sleight of hand. We are led to believe that AI poses a fundamentally different challenge to humanity than unintelligent machines, which humans feared for all the same reasons we currently fear the ‘smart’ ones: because they do, or will, compete with us in our chief human asset – cognition – and, since they can perform some tasks faster and better than we can, they will inevitably beat us in all things cognitive. Or perhaps they will displace us and make us ‘irrelevant’ to the elites (scary people with power over the wretched masses, who will control AI but be immune to its effects).

    The sleight of hand is that this is not what he overtly says, but what he implies by underscoring an as-yet-unrealised and arguably implausible threat from AI; he then goes on to argue the case as if both the emergence of this powerful AI and our defeat by it are inevitable. In short, AI will outcompete us for our niche, and like animals with no further adaptability, we will stare into the abyss of extinction or at least perpetual misery.

    May I suggest that, before we surrender to our AI overlords and their ruling elite benefactors, we pause to appreciate not merely the speculative nature of this dismal scenario, but the ahistorical nature of his argument, as well?

    Specifically, we have faced the emergence of new technologies throughout recorded history, and that history illustrates that, while specific trades, crafts, professions and whole economic sectors may have been obliterated, humanity itself was not; indeed, civilisations prospered by seizing the new opportunities created by our genius for re-imagination and creativity. Whalers, chimney sweeps, and blacksmiths are no longer central to either employment or the economy; industrial chemists, private contractors, mechanics and engineers very much are, and we are all better for it.

    There’s more. Many futurists have historically taken a new or emerging technology, with which society had not yet come to equilibrium, and projected it, like an incompatible donor organ, onto an older, established and often vulnerable society of that time, to dystopic effect, in much the same way that an incompatible organ would either be rejected or lead to the death of its host.
    Rather, progress, especially technological progress, has always been organic in nature. The integration of new technologies into society is associated with an adaptive response from society, as shown by the birth of new industries and occupations (horses, carriages and paper ledgers yield to automobiles, aeroplanes, classic computers and the digital age), and the emergence of a culture that not only embraces that technology but is progressively dependent upon it. And then something even more remarkable and transformative occurs: the resulting emergent culture applies its genius to use cases never dreamt of by the previous generation. The writers of Star Trek: The Original Series in the 1960s did not imagine the flip-phone communicator doing more than making and receiving calls. Today we have not only the iPhone, inspired by that vision, but a whole new industry around app development, a digital culture where data are wealth and power, indeed where currency itself is digital, and a culture of everywhere-access to the internet in which the communicator itself is but one of many ‘smart’ devices. And while there is no indication that any of this is a ‘mature’ technology, we are already far beyond the STOS ‘communicator’.

    The point being, and here is where the analysis and projections of many a futurist stumble into ahistorical territory: humanity wills to survive. We are remarkably adaptable. Our collective history, without exception, shows that we excel not merely at creating new technologies, but at adapting our civilisation to them, and at using that adaptation not to limit our options, diminish our quality of life and terminate our existence, but to transform ourselves and our world into something that further enriches our lives, provides new options and extends the likelihood of not only our survival but our prosperity as well, into the far future.

    Predicting a scary future is child’s play, and thus far the stuff of fantasy. Accurately predicting our future is exceedingly difficult; experiencing it, however, is a journey on which we are all along for the ride, and one we should take with joy and confidence.

    Apologies for the length.

  • Didn’t Ebenezer/Tim release a policy recently about leaking information suggesting legal action? But it’s okay if he wants to drive the Hype Engine. Reminds me of revised rule Number 7: “All animals are equal but some are more equal than others.”

  • “liberalism has begun to lose credibility. Questions about the ability of liberal democracy to provide for the middle class have grown louder;”

    Before making a statement like this, it would be prudent for the author to look into this little thing we have called “History.” When liberal democracy is strong the middle class thrives.

    1. Agreed; however, when the middle class isn’t thriving for other reasons, they turn away from liberal democracy. It’s a weird conundrum, people turning away from their best hope and embracing the worst option, but it has happened before in a lot of countries. Some self-destructive twist in human nature, somehow.
