OmniAutomation scripting: How [and how not] to do Security

So I’ve just seen the 107: Control with your Voice thread and will have to give it a listen later. Voice control is very relevant to my interests as an automation language designer. However, there was one comment under it regarding security that deserves a separate discussion:

“On all platforms (iOS, iPadOS, macOS) the displayed script must be viewed in its entirety prior to the execution control being enabled.”

This statement really infuriates me as pure security theatre; the hacker equivalent of the notorious Quack Miranda, blaming the app’s users for Bad Things happening instead of holding the app responsible for knowingly allowing these bad things to happen in the first place.

Now, Omni’s far from the first or only ones to pull this excuse, so I’m not picking on them in particular. One of the motives for AppleScript’s famously readable (and infamously unwritable) syntax was so that anyone could read a script to see what it did before running it, and it worked just as well then as now (i.e. not at all). Or to quote Dr Cook from his HOPL3 AppleScript retrospective:

Finally, readability was no substitute for an effect[ive] security mechanism. Most people just run scripts — they don’t read or write them.

A security hole is a security hole. Blaming users for falling down it is extremely poor form, and I was honestly surprised to see Omni doing it. Omni have done a great job of supporting automation over the last 20 years (a better job than Apple has!), so I don’t think it unreasonable to hold them to those high standards when they do fall short.

However, since Omni have made this mistake (again), this is a good and timely opportunity to discuss the problem and how to correct it.

(If anyone wants to alert Ken Case & co to this thread, please do. I’m sure they can bring some ideas and thoughts of their own to the discussion.)

So, background:

There already is a correct way to do app-level security in macOS and iOS: sandboxing and sandbox permissions. And, for standard application bundles, it’s pretty straightforward: the app developer lists the entitlements their app wants/needs to do its job, that list gets baked into the app when building it for distribution, and the whole thing is cryptographically signed to ensure it can’t subsequently be tampered with without being detected.

When a user launches this app, the OS isolates the new application process in its own dedicated sandbox. That sandbox “physically” prevents the process from accessing all external services—file system, web, other apps, etc—basically anything that has to go via the operating system. The app can still call those operating system APIs if it wants… but the OS just returns an error. To use an old-school metaphor: imagine making a phone call, only to find your phone line has been disconnected at the exchange. You can’t call out: it’s impossible. This is security done right: the Principle of Least Privilege.

Of course, a lot of apps do need to access some external services, which is where entitlements come in. The app can tell the OS that it needs to access e.g. your ~/Documents folder. The OS asks the user if she wishes to allow this, and then either enables that access (and only that access) or permanently blocks it, remembering that choice for future use. Or, if the app wants to access the web, it must request access to a particular domain, e.g. “www.example.com”, and cannot access any other web location (“www.apple.com”, “www.my-phishing-domain.com”, etc are all out). To use the phone metaphor again, the phone company has restricted your ability to make outgoing calls to one or more pre-agreed numbers; you can call those but no-one else.
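For the curious, here’s roughly what such a static entitlements list looks like on macOS: a small plist that gets baked into the app’s code signature at build time. (A sketch only; the exact keys vary by app, and I’ve picked a few standard ones for illustration.)

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <!-- opt the app into the App Sandbox -->
      <key>com.apple.security.app-sandbox</key>
      <true/>
      <!-- read/write access to files the user explicitly selects -->
      <key>com.apple.security.files.user-selected.read-write</key>
      <true/>
      <!-- outgoing network connections -->
      <key>com.apple.security.network.client</key>
      <true/>
      <!-- access to the user's calendars (with her consent) -->
      <key>com.apple.security.personal-information.calendars</key>
      <true/>
    </dict>
    </plist>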

It’s a great system. In principle. How well it works in practice… it depends. Again, for “pre-baked” apps, it’s great. For scripting and automation it could (and should!) also be great. Unfortunately, right now it’s not.

What OmniAutomation (and every other scripting/automation system) should do at this point is ask the OS to create a new sandbox in which the app can run the user’s script, supplying the list of entitlements for that sandbox dynamically (not statically as is done for apps). The OS can then ask the user if she wishes to allow the script to access some/all/none of those services, remembering her preferences for that particular script only. If that script is later modified/replaced, the user will be re-prompted for permissions the next time it is run.
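To be clear, no such OS API exists today, so the following is pure invention on my part; but as a sketch of how it might look from the host app’s side (every name below is hypothetical):

    // HYPOTHETICAL SKETCH: no such API exists today; all names invented.
    // The host app asks the OS for a fresh sandbox, passing the script's
    // requested entitlements at run time instead of baking them in at build time.
    const sandbox = os.createScriptSandbox({
      entitlements: [
        { service: 'filesystem', resource: '~/Documents', access: 'rw' },
        { service: 'calendar',   resource: 'Work',        access: 'r'  }
      ]
    });
    // The OS prompts the user once, remembers her answers for this script
    // only, and re-prompts if the script's contents later change.
    sandbox.run(scriptSource);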

(For scripts which the user is writing herself, this automatic reset may be skipped on the assumption that the user knows what she’s doing and doesn’t want to be pestered incessantly by the OS being “helpful”.)

It’s worth pointing out here that such a system not only protects the user against malicious third-party scripts, it also protects against typos and other potentially harmful mistakes in her own scripts, e.g. a runaway rm -rf SOME PATH command, where the user forgets to double-quote "SOME PATH" and consequently deletes a large chunk of her filesystem. (And who amongst us automators has never made an oopsie like that?) If the OS knows to limit the script’s access to a single folder, it’s impossible for that command to delete anything outside it.

This is security that works WITH and FOR the user, which is as it should be. Unfortunately it doesn’t exist right now: there’s no way for an app to set a sandbox’s entitlements dynamically. Because Apple doesn’t see a market demand for it. They’re not wrong in this, but only because scripters and automators don’t realize this security system is something they should need and want, and thus demand. Again, because bad security systems treat scripters and automators like dirt, endlessly obstructing, frustrating, and generally being obnoxious and obtuse. Bad security serves no-one except sloppy/lazy developers evading the blame for dropping their users right in it.

OK, so we’ve established that there is a Right Way to do script-level security, and that Apple does not currently support it. There is, however, a workaround which might be used in the meantime. The workaround is to put a static sandbox around the script, and give this sandbox exactly one entitlement: the ability to send messages to the host app via a single XPC service. All access to external services (reading and writing files, accessing Calendars, composing emails, etc, etc) must go through this channel. The user’s script doesn’t interact with this service directly; instead it imports libraries which hide the XPC stuff under familiar native scripting APIs.

For example: to read a file, a sandboxed .js script would import the fs library and call its fs.readFile() function as normal. However, where the standard (Node.js) fs library would call the OS’s fopen() and fgets() (now blocked by the sandbox), this sandbox-aware fs library sends an XPC message asking the host app to read the file for it and send back the file’s data, again over XPC, which the fs.readFile() function then returns as normal. (We’ll assume here the host app already has whole disk permissions.)
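For illustration, the script-side shim might look something like this. (A minimal sketch: the message format and the sendXPCRequest() helper, which forwards a request over the script’s single XPC connection and resolves with the host’s reply, are invented for the example.)

    // Sketch of a sandbox-aware fs library. Instead of touching the file
    // system directly (which the sandbox forbids), it asks the host app to
    // do the work and relays the result back to the caller.
    function readFile(path, callback) {
      sendXPCRequest({ service: 'fs', op: 'readFile', path: path })
        .then(reply => callback(null, reply.data))  // host read the file for us
        .catch(err => callback(err));               // e.g. a permissions error
    }

    module.exports = { readFile: readFile };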

The trick here is for the script to have a standard mechanism to tell the host app precisely what services it requires. I can think of a couple ways this could be done:

  1. “magic comments” at the top of the script file, e.g.:

    // needs entitlement: read-write, folder: ~/Documents
    // wants entitlement: read-only, calendar: Work
    
  2. in-code function calls, preferably made when importing each library so the app can gather all the requirements and prompt the user to grant/deny all permissions just once (not one at a time at random intervals while the script runs, which is famously infuriating):

    const fs = require('fs').needs('~/Documents', 'rw');
    const calendar = require('calendar').wants('Work', 'r');
    

The only difference to app-level sandboxing (which is handled by the OS) is that this script-level sandboxing is handled by the app. If the script sends an XPC request for an external service for which the user has not granted the script permission, e.g. to send data to “www.my-phishing-domain.com” or delete ~/, the app refuses to pass on those operations to the OS and sends back an XPC error instead, which the script library can throw as a standard permissions error.
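To sketch how option 2 above and this app-side gatekeeping might hang together (again, every name here is invented for illustration):

    // Script side: each .needs()/.wants() call registers an entitlement
    // request at import time, so the host can gather them all up and prompt
    // the user exactly once before the script proper runs.
    function needs(resource, access) {
      registerEntitlementRequest({ service: this.name, resource, access });
      return this;  // so require('fs').needs(…) returns the library itself
    }

    // Host side (sketched in JavaScript for consistency; a real host app
    // would implement this natively): every incoming XPC request is checked
    // against the permissions the user actually granted before the host
    // performs it on the script's behalf.
    function handleXPCRequest(script, request) {
      if (!script.grantedPermissions.allows(request)) {
        return { error: 'permission denied: ' + request.service +
                        ' access to ' + request.resource };
      }
      return performRequest(request);  // host does the real work via the OS
    }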

Now if Omni or anyone else wants to implement this per-script sandboxing system, it’ll obviously take a bit of work to 1. design a suitable XPC protocol for channelling service requests from sandboxed script to app and back, 2. implement custom fs, https, etc libraries to use those protocols, and 3. embed the JavaScriptCore interpreter within a reusable XPC plugin. Plus, most important of all, 4. get lots of bums on seats, i.e. app developers, scripters, and users all using and loving it.

That said, the framework only needs to be written and documented once: it can then be distributed (free and/or paid) to other app developers who wish to add similar scripting support to their own apps. Do a really good job of designing, building, and promoting it [1], and perhaps Apple might eventually notice it being popular and successful, and copy its design (already field-tested and proven by hundreds of app developers and thousands of scripters) for a future Apple OS Automation framework that puts modern, safe, secure Scripting & Automation back at the forefront of all its platforms instead of being abandoned to moulder quietly at the back as it now is.

HTH

[1] Of course, a modern framework for executing scripts safely and securely is not a complete automation solution. There is still the general problem of allowing user scripts to talk to apps other than the one that’s running them, which on macOS historically means Apple events (now on their last legs) and on iOS means tunnelling arbitrary behavior through URL handlers, which is both Evil and Wrong and a whole security nightmare of its own. I suspect Apple quietly turns a blind eye to Omni and other apps abusing URL handlers for arbitrary IPC, if only because there is no official API for doing IPC between unrelated processes. But it only needs one successful malware exploit to make the popular press, and you can bet Apple will stomp that hole permanently overnight. However, solving the general IPC problem is really something that Apple needs to step up and do (e.g. by extending the existing XPC APIs to allow communication between arbitrary applications). And the best way to convince them to do it is by creating demand for it. So it’s a start.

While I mostly agree with the rest of your post, I do have 2 points:

  • I think the spirit of what Omni tried to say is a valid point:
    IF you run 3rd-party scripts on your systems you should at least review them to see what they do, and better still understand what they do. Any idiot can post scripts on forums and say “this does X” while in reality it might do X, but also do Y, which was not what you intended.
    As a network and Linux admin in the past I’ve seen far too many users run scripts with the famous “sudo rm -Rf --no-preserve-root /” somewhere hidden in gobs of explanatory text.

  • I’m not a fan of sandboxed apps, especially when you want to interconnect/automate workflows. Sandboxing might prevent you from messing up your system, but in the OF case it would not prevent you from deleting all OF entries. It would also prevent you from pulling in / exporting data to other apps and services, or the system, depending on the level of sandboxing.

I’m a fan of scripting, and work extensively with shell, AppleScript, and JXA scripts.
I read them all, and if I do not understand what they say I will not run them until I do.

1 Like

This sounds fine in principle but is unworkable in practice. Unless one asserts that only expert scripters should run scripts, and I shouldn’t have to point out here how outrageously elitist and exclusionary such an attitude is.

Furthermore, it is a comical double-standard: 1. Nobody requires application users to audit the code of every app they use before they run it. 2. App developers would riot were they forced to reveal their proprietary code and the business-critical IP in it. 3. Large apps can easily run to tens of millions of lines—who on earth has time and resources to audit all that? Go ask Linux users how often they read all the FOSS code that makes up their OS and apps. They don’t! The problem demands—requires—other mechanisms for trust.

To reiterate: this “Hack Miranda” is absolute BS; security theater for the sole purpose of moving blame from developers to users. Were “read the code” a mandatory requirement before running a program, the entire world would be infinitely more productive just throwing out all the machines and going back to pen, paper, and slide rule. Which, BTW, is how computers originally worked.

Software that saves its users time and labor is useful. Software that makes users do more work is worse than useless. Especially when that added work is part of the developer’s area of expertise, not the users’, and furthermore carries serious quantities of risk.

The only way to build a genuinely secure system (regardless of what type of software it runs—kernel extensions, desktop apps, webapps, end-user scripts, etc, etc) is to architect a system that strictly adheres to the Principle of Least Privilege. Now this does not mean a system which obstructs the user from doing what they need to do (as poorly designed “security” systems frequently do, thereby giving security a bad name). It means putting the user in total control of what the software is allowed to do on her computer.

Security is a principle which is absolute. There’s no such thing as a 99% secure system; it is either 100% secure or it’s 0%. The developer must accept that constraint, then talk with her users to learn the particular needs, wants, working practices, and current frustrations and concerns of each kind of user. A career programmer has a very different set of computing requirements to your Facebook-ing gran, who has very different requirements to your Bitcoin-mining gamer bro… and so on. Decide which of these markets you are going to support (one/some/all) and design your system from there. However, this is going beyond the scope of my original post, which specifically addresses scripting and automation security, so we won’t pursue it further.

But it should make clear that secure computing is secure computing. The type(s) of software being run is both orthogonal and irrelevant to that… or rather it should be.

Again, this is a bad rationalization to justify a broken viewpoint. Sandbox permissions absolutely can control access to particular subsets of data and behaviors within a given system. It is just a question of how granular you want to make that permissions system. Some examples:

  1. A basic Unix file system—which, despite the misnomer, is actually just a general namespace or object tree—has user and group permissions that can apply to any node or subtree within that namespace. That those permissions often aren’t used effectively is a UI/UX deficiency, not a fundamental technical one; i.e. setting and managing traditional *nix permissions is a painful and tedious chore, further compounded by vendors such as Apple regularly slapping their own duplicative permissions systems on top.

  2. HTTP (the communication protocol used by WWW servers and clients) likewise has mechanisms for declaring and enforcing access permissions. This can be as simple as requiring Basic authentication to access a specific URL, or allowing read-only access by having the web server accept GET and HEAD requests but reject all POST, PUT, and DELETE requests with a “405 Method Not Allowed” error [1] (a minimal sketch follows after this list). Plus, of course, individual web apps can implement their own authorization schemes on top of that, at whatever level of granularity they require.

  3. And since you mention AppleScript, “AppleScriptable apps” (yet another misnomer) also have reasonably granular permissions in the form of access groups (see man 5 sdef). Within an app’s Scripting Definition (.sdef file) the app developer can group specific object attributes and behaviors into named categories. For instance, Mail’s SDEF groups features needed to send an email into a “com.apple.mail.compose” access group, most crucially outgoing message:

    <class name="outgoing message" code="bcke" description="A new email message">
      <cocoa class="ComposeBackEnd_Scripting"/>
      <access-group identifier="com.apple.mail.compose" access="rw"/>
      <element type="bcc recipient"/>
      <element type="cc recipient"/>
      <element type="recipient"/>
      <element type="to recipient"/>
      <property name="sender" code="sndr" type="text" description="The sender of the message">
        <cocoa key="appleScriptSender"/>
      </property>
      <property name="subject" code="subj" type="text" description="The subject of the message">
      …
      <responds-to command="send">
        <cocoa method="handleSendMessageCommand:"/>
      </responds-to>
    </class>

If a sandboxed app wishes to send out an email using Mail, it must declare in its entitlements that it wants read-write access to Mail’s “com.apple.mail.compose” features. From that, the OS can infer that the app wants to create and send emails and advise the user accordingly. If the app only requests access to “com.apple.mail”, it can read existing emails but cannot send them. The app can still send Mail a make new outgoing message … or send outgoing message 1 command, but that message is blocked by the OS at the Apple Event Manager and a permissions error is returned.
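For reference, this is roughly the shape of the entitlement a sandboxed app declares to get that access today (sketched from memory; check the scripting-targets entitlement documentation for exact details):

    <key>com.apple.security.scripting-targets</key>
    <dict>
      <key>com.apple.mail</key>
      <array>
        <string>com.apple.mail.compose</string>
      </array>
    </dict>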

That virtually nobody knows about, cares about, or uses Apple event permissions (including AppleScript itself) is a marketing and education failure, not a technical one. (Which I won’t go into here as it involves excoriating a certain former Product Manager of Mac Automation for mismanaging it into its grave, and I have more important work to do now than re-flog that old corpse.)
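And to make point 2 above concrete, here is a minimal Node.js sketch of method-based read-only enforcement (a toy server; the port and messages are arbitrary):

    // Toy read-only server: GET and HEAD are accepted; every other method
    // is refused with 405 Method Not Allowed.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.method === 'GET' || req.method === 'HEAD') {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('read-only resource\n');
      } else {
        res.writeHead(405, { 'Allow': 'GET, HEAD' });
        res.end();
      }
    }).listen(8080);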

I’m also a Mac scripting and automation fan. Arguably the biggest. I not only use it; I create technologies to enable it—including my own automation languages—which has become my professional career over the last 13 years. This is a field I have a frightening level of expertise and experience in, and I’m still first to say there’s a huge amount I don’t understand or even know about. This is why I utterly reject “you must understand it before you may use it” as the toxic, dysfunctional, exclusionary BS it is. Because everyone has to start somewhere (you can’t have experts without beginners), and by far the biggest barrier to ordinary users beginning and continuing to use automation is Trust. Specifically, the lack and failure of it.

I started out a quarter-century ago, not in programming but in print publishing. (I have never taken a Computer Science class.) I can even tell you the very first script I ever ran: Bill’s Broom. The gobsmacking realization that here was a tool with which I, an expert user but not a programmer, could make my machine do my work for me! Literally life-changing. Had I been required to fully understand Bill’s script before running it myself, I would never have gotten started in scripting and automation—and neither would any other AppleScript user!

Because this is what end-user automation really is: We are stealing the fire of the gods for ourselves!

So when those same self-elected “gods”’ own inherently unsafe applications allow buggy or malicious code to wreck our data or pass it on to criminals, call them out. Either their code works for us too… or else what’s the point of it?

I rest my argument as to the Why. If Omni or anyone else would like to discuss the How to design and implement modern secure scripting for real-world users, I am happy to chat. But please, post no more apologetics. My RSI and your eyeballs don’t deserve it! (☉_☉)

HTH

[1] This is why tunnelling arbitrary code through URL strings (e.g. Omni and others’ abuse of URL handlers) is both Evil and Wrong. It violates the agreed behavior for the GET verb, enabling potentially destructive write operations to be triggered in what is supposed to be a safe, idempotent, read-only command. In Security 101 terms, such behavior is a downright criminal joke—and shockingly common throughout the entire programming profession.

I don’t think replying to your response makes much sense for me; let me just say I do not agree.

1 Like

That is your privilege, of course. But as a macOS user you already run tens of millions of lines of code every day, code written by many thousands of programmers, few if any of whom you know; and not only have you never read any of their code to ensure that it’s safe, you would (quite rightly) laugh in the face of anyone who demands you do so before you hit double-click. I think that inconsistency speaks for itself.

Please be mindful of others’ opinions or else you will alienate a lot of people on the forum very quickly. I don’t know if you realise, but so far your tone in this topic is coming across as quite adversarial.

I would note that security is hard and very little software is written with a logical proof of perfect security. So the assertion of perfect security is almost always going to be impractical until or unless we change the way we write software. And yes, I did study Computer Science at university, but no, I don’t work in IT security—so take the above as a generalised statement, and if anyone wants details, there’s plenty out there; I recommend the Security Now podcast in general for lots of security goodness.

The biggest issue, of course, is that as soon as you introduce humans and computers working together there are going to be potential security issues. They may even have nothing to do with the computer, and they may not always be deliberate.

Much of what we rely on these days in terms of IT security are levels of trust. That goes for scripts, applications, operating systems, microprocessor code for the chips in your devices, etc. It even extends to physical hardware, for those who remember the alleged addition of mystery chips to servers in the news a few years back.

In terms of what you have proposed, it is not “the” right way to do things. It is “a” way that things could potentially be improved. It would not fix everything. It would impose some limitations. It may be the best option right now. It may not. It may vary across scenarios. I’m sure there are quite a few PhD theses out there with great ideas about how to improve things as well as cutting edge industry approaches.

If you want to take this up with the Omni Group specifically, I would suggest reaching out to them via a private channel rather than a public forum that they would not necessarily be monitoring. Certainly there is the ongoing beta, which would seem an ideal way to get involved and seek to make a positive contribution. I suspect you will get a better response by opening up a cordial dialogue with the company than with the shaming approach you have adopted in your initial post. I know how I would like to be contacted and spoken to if someone had a suggestion for how I could improve something.

3 Likes

Ouch? Chill pill please…

Securing a system is easy: just pull the plug out and reduce it to sand with a hammer. Of course, that is not very useful if you want to do anything else with the machine, so we try to make do. The problem with that “just make do with what we have” approach is that we—by which I mean the great majority of computer programmers, from amateur automators to the highly degreed professionals working for Apples and IBMs—do nothing else. You, me, everyone. We are all dangerous, and the first step towards harm reduction is to accept and admit how dangerous we are. Automation should make us less dangerous, but all too often it makes us more.

We’re so conditioned to using dangerous tools, creating dangerous code, not taking responsibility for that code misbehaving, and denying and deflecting blame when it does, that a perfectly ordinary and reasonable consumer expectation—make a product that is safe and fit for purpose—invokes the sort of outrage normally expected of distraught survivors of exploding Ford Pintos or a cratering Boeing 737 Max.

Bad automation kills. And the reason it does so is because, at every step towards that result, humans start making excuses. “It’s too slow.” “It takes us too long.” “It’s too much trouble.” “If we don’t cut corners, our competitors will and beat us to market.” All valid arguments, by the way. But there is a price to be paid for each, and most people deal with that bill by ignoring it and skipping out on the table.

And so my reasonable statement that Omni’s automation framework is unsafe, is seen as unreasonable.

Talking of bad automation: I haven’t paid an electricity bill in the last six months. I did, until my supplier canceled my account claiming somebody else was now in my apartment. Made multiple phone calls to their support. Multiple emails. No resolution, despite it being a tremendously simple problem: the person they think is at my address actually moved in four doors down. Meanwhile, the supplier’s automated system is sending demands and threats of collection to them, and they’ve made multiple phone calls and emails too, with the same result: none. This would never happen with a manual accounts and billing system; it would be resolved by the third phone call, within a month. I’m angry, my neighbor’s in tears. I was talking to a couple of the maintenance guys: turns out both had the same damn problem with their suppliers; one took six months, another a year!

Bad automation gets accepted, gets normalized. And so it does harm to people. Even kills! (Hello, Therac-25! Hello, self-driving cars! Hello, Facebook algorithms.) And the creators of that automation—Boeing, Uber, et al—are happy to put the blame onto the humans at the receiving end. When we let them. Don’t take my word: just read their own EULAs.

As an automation developer who builds software for others to use, I know I will screw up. “To err is human.” And when I do, I own my mistake: accept it, admit it, learn, and correct so that I don’t make it again. Working quick and dirty, making lots of screwups, is how my own work becomes good. If I blow up my own machine at that stage, no problem; it’s part of the job. But the moment I hand my automation over to others to use, I have a duty of care to them—a professional, personal, ethical responsibility—not to blow up theirs. I do not always succeed. But I make every possible effort, including taking the blame myself.

Again, I think that fair and correct; the standard to which we should hold ourselves and the expectation our users should have of us. Yet in this one particular profession, I’m the unpopular, dangerous one! Ralph Nader wept.

So. People here think I’m being awfully unfair to Omni, especially considering all they have done for us automators, by declaring their work unfit, explaining exactly why it is unfit and how to correct that, and refusing to stand for their “pass the blame onto the user” excuse. I disagree. If I had no respect for Omni and their work, I would not give them the time of day; they wouldn’t be worth it. Constructive criticism is the greatest gift that one creator can give another. Harsh, unflinching, pulling no punches. The highest compliment to their work: how to do it far better. (I trained as a figurative sculptor, btw. Not a skill for the thin-skinned.)

It is because Omni care, both professionally and personally, that I hold Ken Case and team to a higher expectation than I do, say, a vast, impenetrable, indifferent behemoth like Apple. I hope that I’ve not misjudged them—I make lots of mistakes, of course—but I have faith in them, so I think I have not.

Now, when you say “I would suggest reaching out to them via a private channel rather than a public forum”, I agree this forum is outside Omni’s radar. However, I was hoping that at least one person on here might already know Ken’s email and kindly give him a heads-up. Alas, that appears a non-starter, so I will go see if I can find it, or else just tap Omni’s general support next week.

I strongly disagree, however, with your assertion that Omni’s dirty laundry should not be aired in public. Were it not for their Hack Miranda, I might have tapped them directly as a matter of diplomacy. But when they publicly declared “blame the user” as their implementation policy, refuting that BS became a matter of public interest as well. It might not be pleasant to see but it needs to be seen, and talked about; not hushed up as one of those “embarrassing little family secrets that we pretend doesn’t exist.”

As I say: we are all dangerous, and the first step in preventing any harm to others is to be ruthlessly open and honest about the risks we and our works represent; not pull punches, weasel, or DARVO.

If Omni can build a better, safer, more human and humane automation architecture than vast uncaring Apple—and I know that they can—then, who knows, perhaps they can even convince Apple in a couple of years to take it off them and make it their own, the gold standard over all of their platforms. And suddenly end-user automation opens up to an audience of almost a billion users!

But if all of us insist on making, using, and tolerating without complaint unsafe, dangerous automation, then the only people who will ever use it are those who enjoy doing unsafe, dangerous things—which in programming means mainly white, overwhelmingly male, middle-class, wildly over-confident and insecure, with an unpleasant tendency to martinetism and controlling behavior. Which right now is a depressingly large fraction of people who write code. And our world runs on it.

Thank/blame Kernighan & Ritchie for that. Their C language is a genuinely incredible, revolutionary piece of work, with the unfortunate side-effect of birthing an entire generation of dangerous programmers who enjoy being dangerous, driving out everyone different from them. And here is how that has worked out. And here is how it continues.

I dunno about you, but I prefer my automation all-inclusive. Helping, not harming, users.

(p.s. I also walk the walk: e.g. my own end-user automation language, kiwi, in action.)

With that, I return you to your regular conversation. Be well.


“I have made this longer only because I have not had the leisure to make it shorter.”—Blaise Pascal

Yup. I said I wouldn’t do that thing and immediately went and did it anyway. Absolutely fair criticism, 100% my bad, and I apologise. Thanks for pointing it out.

1 Like

Well, a <10-second Google yields his email address, so you didn’t need to rely on this community to find his contact details (also, you know the Omni Group has a forum, right?); but it would probably be good to start with their support team:

https://people.omnigroup.com/kc/


You certainly have quite the diatribe going, and I think most people probably get it. You are unhappy with how things are and you believe everyone else should be too. I’m not a crusader on this topic, so, with respect to your particular approach, I will address the issues I pursue in my own alternative way.


As a final note, I would disagree with your first point…

That isn’t a system any more; that’s a pile of sand. What you describe is destroying a system, not securing it, as the system no longer exists to be secured. At best you have secured a pile of sand; at worst you watch your system blow away in the wind.

1 Like

Relevant:

Arguably the biggest

Small pond though, mate – shrinks every time you get that blow-torch out.

Perhaps begin by:

  • halving the post count and length
  • working harder on the tone?

This thread has amused me no end, as people unused to @hhas01—long-winded, opinionated, and impolitic—have their first encounter. He and Matt Neuburg—equally long-winded, opinionated, and impolitic—once had an argument in the comments section of my blog that must have been well over 10,000 words long. Memories…

1 Like