A major problem with Artificial Intelligence (AI) development is that a time may come when AIs are able to learn and teach themselves faster than humans can manage them. Recently, President Obama suggested that AIs that aren't properly constrained and regulated could be unleashed on unsuspecting citizens and severely disadvantage them. Figuring out when to step in will be the great 21st-century challenge for governments.
The time since most Macs were last updated can now be described as geologic. Is that because Apple doesn't care about the Mac? Or, more likely, could we be in for another major architectural change? Evidence is mounting that Apple will abandon Intel and take the Mac lineup to ARM. John looks at the evidence and makes the case.
Much has now been written about moral guidance for autonomous cars and trucks. It's a difficult problem that involves quantifying, and then instantiating in software, the logic of life-and-death decisions. It would be nice for society to have more time to ponder, but the pace of technology leaves us precious little of it. Machines are going to make moral decisions very soon. Shall we let them?
One of the most important issues with the autonomous cars of the future is the partitioning of liability. To that end, new legislation proposed in Germany would require a data recorder to log whether the car is under autonomous or driver control, to aid in the assignment of responsibility. But such a box has privacy implications. And it might be hacked. Would such a data recorder deter buyers? Could Apple overcome all this?