Meet the VPs Behind the Scenes at Apple

Apple Park

We hear a lot about the high-profile Senior VPs at Apple, but what about the regular VPs behind the scenes, the ones who keep the company humming?

The Particle Debris article of the week is from Mark Gurman at Bloomberg.

The subtitle, “Dozens of vice presidents are crucial to the Apple of today and the company of tomorrow,” foretells the description of men and women whose names are generally not known but who bear substantial responsibility for carrying out the crucial work defined by Apple’s Executive Team.

Apple’s VP of Apple Pay Jennifer Bailey. Image credit: Forbes.

Perhaps one VP we do know well is Jennifer Bailey, VP of Apple Pay.

Bailey is a key Apple services executive. She oversaw the launch and development of Apple Pay in 2014 and continues to be its driving force in meetings with retailers and financial partners. So far, Apple Pay has amassed over 127 million active users, according to analysis from Loup Ventures…

While these VPs bear a lot of responsibility, they don’t have much unilateral decision-making authority in broad matters. Their job is to execute the plan, and former VPs who didn’t understand that had a tough time at Apple. That makes the work these current VPs do all the more remarkable. And appreciated. Check out this excellent work from author Gurman.

• There’s been a lot of speculation about two new, rumored Macs that may soon be upon us. I’ve been speculating myself about the new, low-cost MacBook Air and what I’ve dubbed the Mac mini Pro. Here’s yet more insight from two formidable Apple observers.

If there is one enduring theme I am clinging to on these two new Macs, it’s that the Mac mini Pro will be more formidable in power than anyone has surmised and will be priced accordingly. As for the new, low-cost notebook “thing,” Apple’s idea of low-cost isn’t the same as ours. Think no less than US$799 before educational discounts.

An AI agent wouldn’t be bored with this next article.

• Internet articles on tech, in general, have to be bite-sized. When they get very long, the reader feels overwhelmed. So my congratulations to ZDNet’s Nick Heath for patiently producing an encyclopedic treatise on AI: “What is artificial general intelligence?” Subtitle: “Everything you need to know about the path to creating an AI as smart as a human.”

And I mean everything. You’ll be smarter yourself after reading Heath’s work.

• If this next article weren’t at BigThink, I might have glossed over it. But in reading, I realized that there’s some substance here worth pondering. And so it makes the cut for Particle Debris. “In 1973, an MIT computer predicted the end of civilization. So far, it’s on target.”

That’s not to say one should take everything predicted at face value. Rather, I’d say, place it in perspective with what we know now. Consider, for example, the technical and social impact of denying climate change. See “Germany Has Proven The Modern Automobile Must Die.”

• High school students have been told for years that, without a college degree, they face a bleak future. However, for every rule there’s an exception. In this case, if one has certain specific skills that employers need, a job may be at hand at some of the notable tech giants. Still, read with care. It’s not just about landing a job in the near term, but building the basis for a long, satisfying career. “Apple, IBM, and Google don’t care anymore if you went to college.”

• Finally, we’ve heard a lot about 5G wireless lately. Again, don’t get too excited yet. Chris Mills at BGR explains: “This is why Apple isn’t rushing to make a 5G iPhone.” Think 2020.

[Note: Particle Debris is just one page this week.]



Particle Debris is generally a mix of John Martellaro’s observations and opinions about a standout event or article of the week (preamble on page one) followed on page two by a discussion of articles that didn’t make the TMO headlines, the technical news debris. The column is published most every Friday except for holiday weeks.

One thought on “Meet the VPs Behind the Scenes at Apple”

  • John:

    Once again, an interesting set of readings. I regret that I hadn’t the time last week to comment on one of your PD picks, Adam Segal’s ‘When China Rules the Web’. There were a number of observations that one could make from it, not least of which was the tacit acknowledgment that the post-PC era is well underway. Let me digress for just a moment to address this point, acknowledging that many writers at TMO appear to take umbrage at the notion of a post-PC era, pointing to devices like the iPad as a case in point that the PC still rules the device roost. I do not understand, nor have I ever understood, the post-PC era to imply that a new device will take over all functions of the PC. Indeed, SJ’s analogy of trucks and PCs underscores that for some tasks, a PC is still, and for the foreseeable future will remain, necessary.

    Rather, Segal’s article underscores a core definition of post-PC, namely that the PC no longer defines, or even dominates, the modern computing landscape. Today, it’s about AI and the medium in which AI exercises its power, namely the internet, which houses our access to essential resources, such as goods and services, including core infrastructure like finances, utilities, and commerce, and the plethora of devices that link the individual, along with their personal identifiers and data, to that internet 24/7. This is a fact that even MS under Satya Nadella have come to recognise and aggressively leverage under a revised business plan, as covered by Jason Perlow’s piece for ZDNet. This is why MS can let Windows die. The death of PC primacy came with the suddenness of a lightning strike, but the unheralded subtlety of a plague-infested flea bite.

    Indeed, any major tech company that does not recognise this paradigm shift away from the primacy of the PC as the centre and core of computer tech, and its implications, be they over prioritisation of products and services, operating systems, cybersecurity, logistical vulnerabilities, or even larger themes such as nation-state vs private sector dominance over the rules of the internet, together with their implications for geopolitical hegemony, personal privacy, data and information access, and ultimately sociopolitical stability, is already dead; rigor mortis simply hasn’t set in yet.

    There are so many more implications of Segal’s article, including the effect of a China-dominated internet on the lives of ordinary citizens worldwide, not to mention the current isolationist and nationalist geopolitics of key Western governments, the resultant absence of a coordinated multilateral alliance to counter such an event, and the likelihood that the current climate will only facilitate and bring forward such Sino-centric internet influence, that time doesn’t permit further discussion. Another time, perhaps.

    Rather, it’s Nick Heath’s piece on ‘What is artificial general intelligence’, or AGI, that is rich with insight. In brief, in the section on ‘What is superintelligence’, he cites Kurzweil’s belief that this will be achieved in 2045 with the event known as the singularity, as well as Oxford’s Professor Nick Bostrom’s assertion that, should AGI become sufficiently capable, “it might seize control to gain rewards.” Fortunately, Heath balances such hysteria with the statement, “The problem with discussing the effects of AGI and superintelligences is that most working in the field of AI stress that AGI is currently fiction, and may remain so for a very long time”, and cites Baidu’s Andrew Ng’s observation that those debating AI and ethics should “cut out the AGI nonsense” and ‘spend more time focusing on how today’s technology is exacerbating or will exacerbate problems such as “job loss/stagnant wages, undermining democracy, discrimination/bias, wealth inequality”’.

    The fundamental problem with the AI dominance/singularity thesis, as I see it, is rooted in the only observation we have on intelligence, which is terrestrial; and that problem is not technological so much as it is ontogenetic (how an individual organism develops). It takes the argument ‘Ontogeny recapitulates phylogeny’ as a fact (it isn’t), and pushes it to the extreme: as you build a thing, it will grow from unintelligent to intelligent, and from intelligent to conscious or sentient, and engage, as Bostrom argues, in the pursuit of ‘rewards’ via dominance. If a thing is to dominate humanity, it must beat us at our own game. The only model we have for any intelligence, not only our own, but that of reptiles, birds and mammals, is that first comes consciousness, then comes learning and problem solving. Let’s set the issue of instinct aside, as most behaviours we associate with intelligence proceed from learning and observation. That observation shows that consciousness is the root from which intelligence arises, not the other way round. If the scientific community want to create a ‘super intelligence’ capable of being humanity’s overlord, then they should seek first to build, not an intelligent machine, but a conscious one, one that will have sufficient curiosity about the world around it that it learns, then seeks to solve problems at a level mere humans cannot. Given our current state of understanding of consciousness and the workings of the mind, not just the human mind but any mind, we wouldn’t know where to begin. We don’t know what consciousness is, let alone how it arose, and certainly not how to create it. Hoping that it will spontaneously sprout from a network of CPUs is simply magical thinking, not scientific, evidence-based reasoning.

    In the meantime, the real narrow-AI challenges to human well-being, as outlined in Heath’s article, demand our immediate attention.
