Is Apple, along with the rest of the industry, approaching ML the wrong way?

I have thought about this deeply. My conclusion is that Apple is putting its eggs in an ML basket that simply doesn't fit. ML/AI will never be good enough to predict the future, because humans are not binary machines. I don't think they should base Widgets, searches, or the grouping of most-used apps on ML. It's fine for offloading graphics calculations from the processor, but it fails when it suggests which app I should use.

This is the big failure of AI that no one seems to talk about, and the reason you can't use AI to predict human actions in serious situations. It's why self-driving cars will never be a reality.

Let me give you two situations where AI fails. First, say you have a self-driving car, and in front of you are three obstacles on the road that you are about to hit: an old lady, a kid, and a ditch. Which would you like the AI to hit? Braking is not an option. AI can't tackle the problem of morals.

The second problem is daily automation. AI learns your daily pattern and nails it perfectly. All the lights are adjusted for you leaving for work, the doors are locked, etc. Well, one day you fall sick, or need to take a few hours off, or go to the store, or whatever. The AI breaks.

So my impression of ML/AI is that it has its place, like processing huge chunks of data, detecting cancer, etc., but not in making predictions about human behavior, because it seldom follows detectable patterns.

I'm ranting. Your thoughts?

There's a difference between "AI" as portrayed in the media and the reality of how it's used in programming. There may be people who intend to realize a vision of self-driving cars and robots that can anticipate your every move, but right now AI/ML is not all that.

In order:

It is my understanding that Widgets and the new App Library were never explicitly said to be powered by AI/ML. I've been reading/watching the sessions on Widgets, and it seems widgets inform the OS that they need to update by providing a timeline of relevancy scores (which is apparently how updates to complications work on watchOS). The only one that may be using AI/ML is the "Smart Stack", but I wouldn't be surprised if it's just checking which widget in the stack has the highest relevancy score.
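
For what it's worth, that relevance mechanism is public WidgetKit API. Here's a minimal sketch; `CoffeeEntry` and the morning-hours heuristic are made up for illustration, but `TimelineEntry`, `TimelineEntryRelevance`, and `Timeline` are the actual types:

```swift
import WidgetKit

// Hypothetical entry for a coffee-tracking widget. TimelineEntry only
// requires a date; relevance is an optional hint to the Smart Stack.
struct CoffeeEntry: TimelineEntry {
    let date: Date

    // Higher scores tell the Smart Stack this entry matters more right now.
    var relevance: TimelineEntryRelevance? {
        let hour = Calendar.current.component(.hour, from: date)
        // Made-up heuristic: most relevant in the morning.
        return TimelineEntryRelevance(score: (6...10).contains(hour) ? 100 : 10)
    }
}

// Provide one entry per hour for the next 24 hours; the OS decides when
// to refresh and which widget in a stack to surface.
func makeTimeline(now: Date = Date()) -> Timeline<CoffeeEntry> {
    let entries = (0..<24).map { offset in
        CoffeeEntry(date: now.addingTimeInterval(TimeInterval(offset) * 3600))
    }
    return Timeline(entries: entries, policy: .atEnd)
}
```

Notice there's no model anywhere in this: the developer hands the OS explicit numbers, and the "smart" part can be as simple as picking the maximum.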

There is a ton of research and literature in the academic community trying to figure this out. Up to this point, no company has produced an AI model capable of doing what you're describing. (I don't know whether I want to be optimistic about whether there ever will be one.)

I imagine this is mostly a limitation of how it’s implemented.

But essentially, my point is that, at least currently, AI/ML is an additional style of programming in which a developer does not encode absolute conditions for actions to run (the way pretty much all other code is written), but instead incorporates chance and external factors into determining which actions to run. Whether it succeeds depends on whether the model is trained correctly.
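
A toy Swift sketch of that contrast, purely illustrative and not any real API; the "learned" weights stand in for whatever a trained model would actually produce:

```swift
import Foundation

// Classic programming: the condition is absolute and fully specified.
func shouldSuggestAlarmClassic(hour: Int) -> Bool {
    hour == 22  // suggest the bedtime alarm at exactly 10 pm, always
}

// ML-style programming: the decision comes from a learned score plus a
// threshold, so it can be wrong in both directions.
func shouldSuggestAlarmLearned(features: [Double],
                               weights: [Double],
                               threshold: Double = 0.5) -> Bool {
    // A trivial linear model standing in for a properly trained one.
    let score = zip(features, weights).map(*).reduce(0, +)
    let probability = 1.0 / (1.0 + exp(-score))  // squash to 0...1
    return probability >= threshold
}
```

"Trained correctly" then just means the weights cross the threshold for the right users at the right times, which is exactly where the uncertainty this thread is arguing about comes in.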

Succinctly put, it is not a question of whether AI/ML should be used, but when; specifically, when it will actually be helpful. :slight_smile:

I will try to address your points without quoting. Of course a 100% hit rate is not required, but it depends on what action you want it to perform. Google relies on you typing the first couple of letters and then figures out the rest. That is fine and works well. When it comes to guessing which app you want to open, ML is horrible, because the pattern is almost always obscure. Mind reading is hard, even for AI.
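
To illustrate why autocomplete is the easy case (my own toy example, not how Google actually does it): each keystroke shrinks the candidate set, so even a dumb prefix filter looks smart:

```swift
// A few canned queries standing in for a real search history.
let queries = ["weather today", "weather tomorrow", "webkit", "widgets ios"]

// Prefix completion: the typed letters directly constrain the answer.
func complete(_ prefix: String, from corpus: [String]) -> [String] {
    corpus.filter { $0.hasPrefix(prefix.lowercased()) }
}

complete("wea", from: queries)  // ["weather today", "weather tomorrow"]
```

App prediction has no such constraint: the input is essentially "the user raised their phone", and everything else is guesswork.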

Concerning cars, there is a moral dilemma. Say the ditch is a bridge: who should die, the driver, the old woman, or the young person? The target selection is a moral dilemma. That is why autonomy will always have constraints. I won't even get into the sensor mess.

As for everyday patterns, there is too much to calculate. Humans are not extremely predictable, and multiple humans interacting are even less predictable. Whether the predictions need to be perfect depends on what application you want to put them to. Perhaps this will be solved with an insane number of input sources, but as of today we are not even close.

Finally, what I have seen that has impressed me within AI/ML has not been everyday applications such as predicting human patterns, but rather, as I stated, heavy computational work.

Suicide is also not legal. My point is that it is not straightforward. What if it were two people in a tunnel? Pick at random?

Thinking more about it, there are even more problems. Who goes to jail if the AI makes a mistake? The developer? The company? The person driving?

To add to that, I get terrified when I read that complex ethical problems can supposedly be solved by a Super Mario algorithm. There are entire fields studying ethical machine learning and biased data (the Microsoft Tay debacle).

Those are just a few of the reasons why these glorified "if" statements should be applied with care, especially in sensitive areas.

As of today, I wouldn't trust AI to automate even my bathroom light. I can break that automation pattern in a potentially hazardous way. How? I am at home, and I usually spend 15 minutes in the bathroom. Fine: the AI learns the pattern and turns my light off after 15 minutes. All is well for me. Not so well for my elderly mom, who spends 30 minutes in the bathroom and just cracked her skull slipping in the dark.
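
To spell the hazard out in code (all names hypothetical): the pattern-based version encodes the typical user, while a dumb sensor-gated rule encodes the actual room:

```swift
import Foundation

struct BathroomLight {
    // What the automation "learned": the typical visit lasts 15 minutes.
    let learnedTypicalDuration: TimeInterval = 15 * 60

    // Pattern-based: fails for anyone who deviates from the learned
    // pattern, like the 30-minute visitor left standing in the dark.
    func shouldTurnOff(elapsed: TimeInterval) -> Bool {
        elapsed >= learnedTypicalDuration
    }

    // Sensor-gated: boring, but it cannot strand an occupant in the dark.
    func shouldTurnOff(elapsed: TimeInterval, occupied: Bool) -> Bool {
        !occupied && elapsed >= 60
    }
}
```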

You have to be really, really careful before putting autonomous decisions in the hands of a computer.

I'm closing down this thread. When you start bringing up suicide to support an argument, it no longer belongs in this forum.
