Your Apple Watch Could Eventually Become a Complete BioMed Lab

Page 2 – News Debris for the Week of January 16th

An Apple Invasive Maneuver

I don’t know what to think of this first item. Yet. It could be a mistake by Apple. Or it could be something more insidious, agenda-driven and just plain lame. Deserving of scorn. But until it all sorts out and I learn more, I’m just going to pass this on. “No internet connection? Be prepared for iTunes to drive you crazy.”

[Image: About iTunes 12.5.3.16]

The gist is that as of iTunes 12.5.4, you can’t play your music in iTunes, even songs you’ve ripped from a purchased CD, without being incessantly pestered to get an internet connection. (I’ve communicated with Kirk McElhearn, and he confirmed that this applies to any song.) McElhearn writes:

But what if you don’t have internet access? Your connection is down; or your router is broken; or you simply don’t want your computer to connect to the internet? Well, iTunes will remind you of this, over and over and over. In such a case, iTunes will pop up an alert every single time you play a song and every time one song finishes and another one begins.

Apple couldn’t possibly believe that this is a good thing to add to a music player that may not be connected to the internet while playing personally ripped CDs. If they do, there will be much to talk about. Or it could have been a careless decision that will be remedied in the next release. Stay tuned, because I will be monitoring this closely.

[UPDATE: This issue appears to be fixed in iTunes 12.5.5, released January 23, 2017.]

More Debris….

On the AI front, I’ve found a couple of interesting entries. The first involves the well-known exponential effect of AI. Right now, humans design AIs. When we get to the point where AIs help humans design AIs, progress will escalate. In turn, when AIs alone design AIs, the effects will be exponential. This article doesn’t delve into that specifically, but it’s a taste of what’s in store. “Artificial-Intelligence Developers: We’re Thinking beyond Autonomous Cars.”

For entry #2, I ran across this topic last week but couldn’t find the recent reference again. The original discussion is from early 2016, so that’s the best reference I have right now. It’s worth revisiting. It all has to do with the conditions under which a robot or AI might refuse an order from a human being. “Why robots need to be able to say ‘No’.” Fascinating.

[Video: A robot refuses an order to advance because it isn’t safe, until the human says he’ll catch it.]

There is, I think, an emerging trend in software-driven services on the internet. Namely, the capabilities, computer languages, and budgets available for human beings to build the grand services that can be envisioned are outstripped by the complexity of the code and the scope of the project. In other words, it seems no amount of testing can ensure that a grand service delivery project functions at a high level at the outset. As a result, services are rolled out that frustrate rather than delight. Even Apple is subject to this effect, although I think Apple handles it better than many other companies. For example, “DirecTV Now is seriously broken.”

Have you noticed that Safari can’t play the latest 4K videos (posted after 12/6/2016) from YouTube? There’s a reason, and Rob Griffiths explains. “Safari and the YouTube 4K video problem.” Solution? Use Chrome.

Asking smart questions is always a good way to explore modern technology. The Street has some for Apple’s foray into original content.

I bring up the next item because it plays into modern tech, and that’s always related to Apple. Namely, why do physicists make such good programmers? (I know. I was one.) I think it’s because so much of modern tech involves the representation of real-world phenomena. A good example is asking a programming team at Pixar to represent flowing water. Or hair in the wind. Understanding the physical principles and casting them into code that visualizes them is what’s at play here. This physics expertise is also necessary for realistic games. More discussion here: “Move Over, Coders—Physicists Will Soon Rule Silicon Valley.”
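To make that “casting into code” idea concrete, here’s a toy sketch (my own illustration, not anything from the linked article or from Pixar): one hair-strand segment treated as a damped spring pushed by a gusting wind force and advanced with an explicit Euler step. All constants are invented.

// Toy sketch: a damped spring driven by a gusting "wind" force, stepped
// with explicit Euler integration. Constants are made up for illustration.
import Foundation

struct StrandSegment {
    var position = 0.0   // displacement from rest, in meters
    var velocity = 0.0   // meters per second
}

let stiffness = 40.0        // spring constant k (N/m), invented
let damping = 1.5           // damping coefficient c, invented
let mass = 0.01             // segment mass, kg
let dt = 1.0 / 120.0        // simulation time step, seconds

var segment = StrandSegment()
for step in 0..<600 {
    let t = Double(step) * dt
    let wind = 0.2 * sin(2.0 * Double.pi * 0.5 * t)   // gusting driving force
    let springForce = -stiffness * segment.position   // Hooke's law: F = -kx
    let dragForce = -damping * segment.velocity       // simple linear drag
    let acceleration = (wind + springForce + dragForce) / mass
    segment.velocity += acceleration * dt             // explicit Euler update
    segment.position += segment.velocity * dt
    if step % 120 == 0 {
        print(String(format: "t = %.1f s, x = %.4f m", t, segment.position))
    }
}

The physics is the easy part to write down; the craft is in choosing stable time steps and forces that look right on screen, which is exactly where a physics background pays off.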

In the early days of computing, input was via analog plugs and switches. Then came paper tape, magnetic tape, and punch cards. Then we graduated to the command line. Then came the WIMP GUI: windows, icons, menus, and pointer. Now we’ve transitioned to the touch interface, which would seem to make the Mac an endangered species. However, Dan Moren makes the case. “The case for a touchscreen Mac.”

Finally, does a corporation that holds and transmits private personal data and communications for its customers have the right to inform the customer when the government wants to see that data? At issue is whether a corporation can act on behalf of a customer’s Fourth Amendment rights. Here’s a good explanation of the issues. “Microsoft’s standing to sue over secret U.S. data requests in question.”

______________________

Particle Debris is generally a mix of John Martellaro’s observations and opinions about a standout event or article of the week (preamble on page one), followed on page two by a discussion of articles that didn’t make the TMO headlines: the technical news debris. The column is published most every Friday except for holidays.

8 thoughts on “Your Apple Watch Could Eventually Become a Complete BioMed Lab”

  • Adding to wab95’s comments, imagine Samsung wanted to release a self-driving vehicle or a robot and wanted to beat Apple (or another manufacturer) to market. Would they wait until it could “be safely deployed, and will accommodate not simply inevitable lapses in human judgement, but will be reasonably hardened against malicious intent”? Regulation will be required to prevent safety flaws.

    I’ll be relieved if robots and robotic cars are required to have a simple human override that disables them when necessary.

  • John:

    Of the themes you’ve chosen for this week’s PD, I find two most intriguing: first, the Apple Watch as a medical device (shockingly), specifically a bio-med lab; and second, the interplay between robotics and AI, specifically the capacity of a robot to refuse a human command.

    You and other writers at TMO, like Bryan Chaffin and Jeff Gamet, have on several occasions addressed the topic of the Apple Watch and health, specifically the role of sensors in monitoring specific health indicators, and several comments have followed, including from yours truly, outlining both specific technical issues and long-term, high-level visions of where this will lead. I don’t intend to restate those here. Rather, I’d like to address the issue raised in the Matthias Scheutz piece about robots saying ‘No’ to a human command, and expand that to the issue of technologies, like the Apple Watch and robotics writ large, that can have a direct effect on human safety, and to the role and imperative of the manufacturers of those technologies to also say ‘No’ at the appropriate time.

    A core concept in the Matthias Scheutz piece is the question of context. Merely giving a robot a command such as ‘throw the ball through the window’ is not enough; context is required, such as whether the window is open or closed, or whether there are broader safety concerns, like traffic outside the window that the ball could affect; context that a human would use before deciding whether or not it is ethical or safe to obey the order. This is not easy to programme, and it raises an oddly related issue, namely that of the oft-quoted ‘With great power comes great responsibility’, not simply for the device but, more importantly, for the creator and/or manufacturer of the device. Simply because something can be built, is it safe, ethical or even moral to build it today? The context here is not the technology, which might already exist, but the safety of the human user and that of society more broadly.

    Before we build more capable robots en masse, like self-driving cars, or add sensors to our wearables that monitor health parameters people will rely on to serve and protect them, that decision must be guided first by the protection and safety of the human. An example or two might be illustrative.

    Imagine a self-driving robotic car that could be remotely hacked, started, and told to drive through the garage into your living room, use its forward camera and infrared sensors (newly installed for enhanced HUD night driving) to locate anything with a heat signature, identify a specific individual, and run them over. Far-fetched? I think not. Remote takeover of automobiles has already been demonstrated, and post-production security patches have had to be installed at a cost. Security systems will need to be hardened, and multiple redundant safety subroutines built in, before robotic cars will be a safe technology for mass rollout.

    How about new sensors in the Apple Watch, like one that can monitor blood sugar? All sensors have a calibrated uncertainty, characterised by what we call ‘confidence intervals’, that is, a range within which the true value resides (a rough numeric sketch of this idea follows this comment). Due to the technical limitations of the sensor, the range will be greater at the extremes, meaning that the value displayed to the human could be markedly different from ‘the truth’. No matter how often Apple, or any manufacturer, warns people that this is not meant to be a ‘medical device’, should that range be too great to qualify as one, people will still use it as such, because, just like the camera in your iPhone versus a DSLR, the best camera is the one you have on you when you need it. Diabetic wearers, seeing that the Watch usually gives a reading similar to their dedicated medical device when their blood sugar is in the normal range, will come to rely on their Apple Watch over time, and may be lulled into a false sense of security when their sugars are abnormal and the Watch, due to the uncertainty and limitations of its sensor, is not accurately telling them just how abnormal they are. It will happen. Ask any ER doc.

    Let me cite another hackneyed expression, ‘You’ve got to crawl before you walk, and walk before you run’, a citation often belied by toddler behaviour, but I digress. We now live in an age in which, as our technology becomes more capable, particularly technology that relies on AI, including something as simple as Siri, it raises the ethical question of whether that technology should be publicly available if it lacks sufficient capacity either to be used responsibly or, in the case of AI, to make responsible decisions that will protect the human user. Not only manufacturers but consumers will need to accept either simpler, less capable versions of these technologies, or forgo those technologies altogether, until these more capable technologies can be safely deployed, and will accommodate not simply inevitable lapses in human judgement, but will be reasonably hardened against malicious intent, however defined by regulators.

    In short, until those technologies can reliably assume the responsibility that matches the power with which we invest them, we, the human makers and consumers, must retain the responsibility to delay their adoption until they are safe.
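    A minimal numeric sketch of the confidence-interval point above, with invented readings and a made-up noise model (nothing here reflects any real Apple Watch sensor): repeated noisy measurements are averaged and reported with an approximate 95% interval, and a wide interval at abnormal values is exactly where a wearer could be misled.

    // Hypothetical illustration only: simulate a dozen noisy glucose readings
    // around a deliberately abnormal "true" value, then summarize them as a
    // mean plus/minus an approximate 95% confidence interval.
    import Foundation

    let trueGlucose = 180.0     // mg/dL, deliberately above the normal range
    let sensorNoiseSD = 20.0    // invented noise scale, wider at the extremes
    let readings = (0..<12).map { _ in
        trueGlucose + sensorNoiseSD * Double.random(in: -1...1)  // crude noise model
    }

    let n = Double(readings.count)
    let mean = readings.reduce(0, +) / n
    let variance = readings.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / (n - 1)
    let standardError = sqrt(variance / n)
    let halfWidth = 1.96 * standardError   // normal approximation to a 95% CI

    print(String(format: "Estimate: %.0f mg/dL (95%% CI roughly %.0f to %.0f)",
                 mean, mean - halfWidth, mean + halfWidth))

    The 1.96 factor assumes roughly normal errors; a real calibration would characterise the error distribution far more carefully than this toy does.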

  • Why does it have to be non-invasive? I’d be OK with a sensor or two under my skin or, perhaps, even deeper, that my Watch can connect with (securely, of course) to detect medical issues or provide data that is more accurate and reliable than hoping I sweat enough for my Watch to get a reading. Well, maybe… until Google hacks it from a passing Android phone and starts to send me emails about dialysis equipment that I may need down the road…

    🙂

  • Regarding the iTunes fiasco, I see two scenarios for how this could have happened. A similar problem has occurred with the new (improperly named) iOS TV app from Apple (which replaced the Videos app).

    1. Apple believes that users have purchased ALL their audio and video content FROM Apple. So, in order to verify that the audio or video you want to play is legally yours, Apple has to check that you have purchased it (from Apple). Of course, that requires an internet connection. (The iOS TV app actually has the reverse problem: you have to disable Wi-Fi and cellular networking to view your non-Apple-purchased videos, unless you have changed the “media type” of the videos in iTunes.) Of course, Apple is wrong in this belief. I get most of my audio content from archive.org. That audio (old-time radio shows like “X Minus One”, “The Lone Ranger”, “Jack Benny”, etc.) is in the public domain.

    2. Apple software quality assurance is failing to perform its job.

    I suspect that #2 is the reason for this most recent iTunes mess. It is also probably the reason behind the iOS TV app mess. As more evidence of the failure of software quality assurance, I offer the recent iOS GarageBand update: version 2.2 was released on 19 January and 2.2.1 on 20 January. That is just sloppy. Apple’s software quality has deteriorated in recent years, and Apple should be embarrassed. The hardware quality of Apple products remains generally high (with the exception of the Time Capsule, which Apple has now killed), but that quality is the responsibility of many companies, not just Apple. Apple alone is responsible for the quality of its software.

  • And of course measuring the oxygen level of your blood is easy – when the hemoglobin in your red blood cells has oxygen bound to it, it is spectroscopically distinguishable from hemoglobin that does not. There are already commercial devices out there that exclusively perform this function, and integrating this tech into a watch should be relatively easy (a rough sketch of the arithmetic follows this comment).
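    As a rough sketch of how that spectroscopic difference becomes a number (a generic, textbook-style approximation, not Apple’s or any vendor’s algorithm; all sample values invented): the pulsatile (AC) and steady (DC) parts of the red and infrared signals form a “ratio of ratios” that is mapped to saturation through an empirical calibration.

    // Generic pulse-oximetry arithmetic: the "ratio of ratios" R from red and
    // infrared light, mapped to an SpO2 estimate with a commonly quoted
    // empirical linear approximation (about 110 - 25R). Not a medical device.
    import Foundation

    func estimateSpO2(redAC: Double, redDC: Double, irAC: Double, irDC: Double) -> Double {
        let ratio = (redAC / redDC) / (irAC / irDC)   // ratio of ratios, R
        let spo2 = 110.0 - 25.0 * ratio               // empirical calibration curve
        return min(100.0, max(0.0, spo2))             // clamp to a physical range
    }

    // Invented sample values; a healthy reading should land in the high 90s.
    let estimate = estimateSpO2(redAC: 0.013, redDC: 1.0, irAC: 0.025, irDC: 1.0)
    print(String(format: "Estimated SpO2: %.0f%%", estimate))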

  • The fact that your perspiration has the same glucose concentration as your serum has been known for a long time, and the idea of developing a watch that monitors glucose concentrations non-invasively on this basis was first pursued by a start-up company (Cygnus) in Redwood City, CA many years ago. It was specifically targeted towards diabetics for obvious reasons – who wants to prick themselves with a lancet to get enough blood to do a glucose test? I know all this because a good friend worked there – they even successfully launched a product called the GlucoWatch. It has since been discontinued for reasons that I don’t know; you can read about it here: http://www.mendosa.com/glucowatch.htm

    Of course, that’s all the GlucoWatch did – with new tech and miniaturization, I’m sure it could be done more effectively and with a much smaller footprint. The GlucoWatch did require consumables (enzymes)…

  • I like the idea of a “lab on your wrist”. That would be the sort of thing that would get me to buy an Apple Watch.

    You mentioned the CBC blood test. Sorry, but I immediately wondered why the Canadian Broadcasting Corporation needed blood tests.

    Yes, that iTunes thing would be bloody annoying. However, I just can’t believe Apple would ship it like that. Even THEY have to know that sometimes people are offline. This had to be a beta glitch that won’t appear in the final product.
