Is Apple, along with the rest of the industry, approaching ML the wrong way?

I have thought about this deeply. My conclusion is that Apple is putting its eggs in an ML basket where they simply don't fit. ML/AI will never be good enough to read the future, since humans are not binary machines. I do not think they should base Widgets, search, or grouping of most-used apps on ML. It's fine for offloading graphics calculations from the processor, but it fails when it gives me suggestions for which app to use.

This is the big failure of AI that no one seems to talk about and the reason you can’t use AI to determine human actions in serious situations. It’s why self-driving cars will never be a reality.

Let me give you two situations where AI fails. First, say you have a self-driving car, and in front of you are three obstacles on the road you are about to hit: an old lady, a kid, and a ditch. Which one would you like the AI to hit? Braking is not an option. AI can't tackle the problem of morals.

The second problem is daily automation. AI learns your daily pattern, nails it perfectly. All lights are adjusted to accommodate you going to work, locks are locked, etc. Well, one day you fall sick or need to take a few hours off or go to the store or whatever. AI breaks.

So my impression of ML/AI is that it has its place, like processing huge chunks of data, detecting cancer, etc. But not in making predictions about human patterns, because those patterns are seldom detectable.

I’m ranting. Your thoughts?

There’s a difference between “AI” as seen in the media and the reality of how it’s used in programming. There may be people who intend to realize a vision of self-driving cars and robots that can anticipate your every move, but right now AI/ML is not all that.

In order:

It is my understanding that Apple never explicitly said Widgets and the new App Library are powered by AI/ML. I’ve been reading/watching the sessions on Widgets, and it seems widgets inform the OS that they need to update by providing a timeline of relevancy scores (which is apparently how updates to complications on watchOS already work). The only piece that may be using AI/ML is the “Smart Stack,” but I wouldn’t be surprised if it’s just checking which widget in the stack has the highest relevancy score.
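
In code, my reading of those sessions is something like the sketch below. The `CoffeeEntry` type and the 8 AM commute window are invented for illustration; the point is that the widget itself hands the OS a timeline with relevance scores, no ML required:

```swift
import WidgetKit

// Hypothetical entry type for a coffee-shop widget (name and fields are my invention).
struct CoffeeEntry: TimelineEntry {
    let date: Date
    let relevance: TimelineEntryRelevance?
}

struct CoffeeProvider: TimelineProvider {
    func placeholder(in context: Context) -> CoffeeEntry {
        CoffeeEntry(date: Date(), relevance: nil)
    }

    func getSnapshot(in context: Context, completion: @escaping (CoffeeEntry) -> Void) {
        completion(CoffeeEntry(date: Date(), relevance: nil))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<CoffeeEntry>) -> Void) {
        // The widget declares when it matters: a high score around the morning
        // commute, a low score for the rest of the day.
        let morning = Calendar.current.date(bySettingHour: 8, minute: 0, second: 0, of: Date())!
        let entries = [
            CoffeeEntry(date: morning,
                        relevance: TimelineEntryRelevance(score: 80, duration: 2 * 60 * 60)),
            CoffeeEntry(date: morning.addingTimeInterval(2 * 60 * 60),
                        relevance: TimelineEntryRelevance(score: 1))
        ]
        completion(Timeline(entries: entries, policy: .atEnd))
    }
}
```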

There is a ton of research and literature in the academic community trying to figure this out. Up to this point, no company has provided an AI model capable of doing what you’re describing. (I don’t know whether to be optimistic that there ever will be.)

I imagine this is mostly a limitation of how it’s implemented.

But essentially, my point is that at least currently AI/ML is an additional style of programming where a developer does not encode absolute conditions for actions to run (the way pretty much all code otherwise is written), but incorporates chance and external factors in to determining actions to run. Whether it is successful or not depends on whether the model is trained correctly.
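
To put the contrast in code, here is a toy sketch (the function names, features, and 0.7 threshold are all invented for illustration):

```swift
// Traditional code: the developer hard-codes the condition.
func shouldSuggestCoffeeApp(hour: Int) -> Bool {
    return hour == 8   // absolute rule, written by a human
}

// ML-flavoured code: a trained scorer returns a probability and the developer
// only picks a threshold. `model` stands in for whatever the training
// pipeline produced.
func shouldSuggestCoffeeApp(features: [Double], model: ([Double]) -> Double) -> Bool {
    return model(features) > 0.7
}
```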

Succinctly put, it is not a question of if AI/ML should be used but when. Specifically when it will actually be helpful. :slight_smile:

The issue here is believing that ML needs a 100% hit rate, when it does not. Also, not all ML is predictive; much of it is reactive. ML-based search, for example, works extremely well, as Google has been able to attest for more than a decade.

ML needs a consistent pattern to make a prediction from. The less consistent your usage patterns are, the less successful it will be. That doesn’t mean that others aren’t getting a lot of benefit.

Self-driving cars are a reality. If you live in a major US city, you’ve probably encountered them on a weekly basis. They’re also doing regular long-haul journeys in the US and parts of Europe, and they’re already significantly safer than human drivers.

Again they don’t have to be perfect to be an improvement. They don’t have to be predictive. They’re significantly better at being reactive than humans are.

ML is actually extremely good for this kind of application.

That’s not a moral dilemma, and ML will pick the ditch just like a human would. This isn’t even a significant problem. It’s also going to have a greater success rate at hitting the ditch than a panicking human, and it’s less likely to lose control of the vehicle.

This is super simple target-priority selection, performed by video games billions of times per day.
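
Very roughly, the selection looks something like this. It's only a sketch to show the shape of the problem; the obstacle categories and cost numbers are invented, and a real system is vastly more involved:

```swift
// Score each possible impact target and pick the least-bad one.
enum Obstacle {
    case person
    case wall
    case ditch
}

func impactCost(_ obstacle: Obstacle) -> Int {
    switch obstacle {
    case .person: return 1_000   // never acceptable if any alternative exists
    case .wall:   return 10
    case .ditch:  return 5
    }
}

func chooseTarget(from obstacles: [Obstacle]) -> Obstacle? {
    return obstacles.min { impactCost($0) < impactCost($1) }
}

// chooseTarget(from: [.person, .person, .ditch]) picks the ditch.
```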

The ML has not broken in this situation. Human interaction will always supersede it when there is an unpredictable break in the pattern. The ML assists with routine; it doesn’t enforce it. This is a non-issue.

Sociology teaches us that humans are extremely predictable. And again the predictions don’t need to be perfect, only good enough most of the time. When it comes to reactive situations ML already vastly outperforms humans.


I will try to address your points without quoting. Of course a 100% hit rate is not required, but it depends on what action you want it to take. Google relies on you typing the first couple of letters and then figures the rest out. That is fine and works well. When it comes to guessing which app you want to open, ML is horrible, because the pattern is almost always obscure. Mind reading is hard, even for AI.

Concerning cars, there is a moral dilemma: say the ditch is a bridge, who should die? The driver, the old woman, or the young person? The target selection is a moral dilemma. That is why autonomy will always have constraints. I won’t even get into the sensor mess.

As far as everyday patterns go, there is too much to calculate. Humans are not extremely predictable, and multiple humans interacting are even less predictable. Whether the predictions need to be perfect depends on what application you want to put them to. Perhaps this will be solved with an insane number of input sources, but as of today we are not even close.

Finally, what I have seen that has impressed me within AI/ML has not been in the form of everyday applications such as human pattern prediction, but rather, as I stated, in putting huge computational power to work.

And yet it got basically a 100% success rate on the app for the coffee shop on my way to work. It’s not intended to predict every situation, just enough of them to provide convenience where there are strong patterns.

But it isn’t a moral dilemma. This is comparable to the runaway-train scenario. The right answer is always clear to everyone; the thought exercise is intended to highlight that it’s difficult for people to articulate why it’s clear. Ethics dictates that you always avoid involving an otherwise uninvolved individual. The driver is involved in the accident in all situations; if the driver has a hundred orphan baby passengers, the priority remains exactly the same. The path which avoids an otherwise uninvolved party is always the correct choice. It’s always the ditch, or the wall, or whatever. It is never the old person or the child.

You never have the right to sacrifice the old woman because you have judged her life to be somehow less valuable than another’s. Choosing either the young person or the old person would also be illegal for the very same reason: it’s manslaughter, or a lower degree of murder, depending on your locality. So the car is just programmed to follow the law.

ML enforces this with better accuracy than a human ever could. It’s not a dilemma at all.


Suicide is also not legal. My point is that it is not straightforward. What if it were two people in a tunnel? Pick at random?

Thinking more about it, there are even more problems. Who goes to jail if the AI makes a mistake? The developer? The company? The person driving?

To add to that, I get terrified when I read that complex ethical problems can be solved by a Super Mario algorithm. There are entire fields studying ethical machine learning and biased data (the Microsoft debacle).

Those are just a few of the reasons why these glorified “if” statements should be applied with care, and why certain areas should be handled with extra care.

As of today, I wouldn’t trust AI to automate even my bathroom light. I can break that automation/AI pattern in a potentially hazardous way. How? I am at home, and I usually spend 15 minutes in the bathroom. Fine: the AI learns the pattern and turns my light off after 15 minutes, which works well for me. Not so well for my elderly mom, who spends 30 minutes in the bathroom and just cracked her skull slipping in the dark.

You have to be really, really careful before putting autonomous decisions in the hands of a computer.

That’s a false equivalence and it’s also offensively reductive and insensitive. Dying in a motor vehicle accident is not suicide. Choosing to end someone’s life to preserve your own is a very different thing.

This is a good question. Litigators will have to figure out the liability issues. This is why autonomous vehicles currently have a human to override them, and investigations are used to determine liability and whether there was criminal action (just like an accident without a self-driving car). Also, there’s no law against having a motor vehicle accident, unless of course you were breaking a law when it happened, intentionally involved someone in the accident, or did so through negligence.

Remember, these cars are on the roads right now in California, Spain, and a bunch of other regions. They have been involved in accidents and somehow the legal system coped just fine and wasn’t crippled by these non-existent problems.

Again, this is reductive.

Again, this is false equivalence. Biased sample data is a very different issue from your supposed (but not actual) moral dilemma. Also, the biased-data issue is caused entirely by the humans involved and is not a fault of the ML application itself.

It’s a straw man to suggest that a video game is going to use the exact same solution a car implements, and it’s intellectually dishonest to suggest that there is no similarity in the problem being solved.

Because your implementation sucks. This has nothing to do with ML or automation; it’s your terrible implementation.

If you go into one of my bathrooms, which have automated lights, they will stay on until three minutes after you leave, whether you were in there 20 seconds or 2 hours. Your lack of ability does not reflect poorly on the technology, only on your implementation of it.
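
The whole trick is that the light follows presence with a grace period instead of predicting how long someone “usually” stays. A rough sketch of that logic (the class name and timings are mine, not any particular home-automation product):

```swift
import Foundation

// Occupancy-driven light: reacts to motion, never predicts duration.
final class OccupancyLight {
    private let gracePeriod: TimeInterval = 3 * 60   // stay on 3 minutes after the room empties
    private var offTimer: Timer?
    private(set) var isOn = false

    // Call whenever the motion sensor fires.
    func motionDetected() {
        offTimer?.invalidate()
        isOn = true
    }

    // Call when the sensor reports the room as empty.
    func roomCleared() {
        offTimer?.invalidate()
        offTimer = Timer.scheduledTimer(withTimeInterval: gracePeriod, repeats: false) { [weak self] _ in
            self?.isOn = false
        }
    }
}
```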

This appeal to emotion thing about your grandmother being unable to cope with the darkness also doesn’t wash, just for your information. Take your “but think of the children” somewhere else.

Yes you do. Luckily people are capable of this.

I’m closing down this thread. When you start bringing up suicide to support an argument it no longer belongs in this forum.
