In this episode, @MacSparky said something about Whisper Memos processing text locally, apart from sending the results by email:
“Now, I wrote the Whisper Memos guy, and he wrote me back to say that they put the model on your device in the app, that it doesn’t go to the website, but they do email it. So they are getting the text.”
This is what the Whisper Memos website says:
- Private mode – You can opt-out of storing transcripts in your account, and instead just send them to your email. We’ll process the audio, and delete any traces.
- Processing – For transcription and AI processing, we only use OpenAI. No other services are involved. As long as you trust OpenAI, you can also trust us!
- Databases – We don’t use our own servers, and instead rely on Google Firebase for authentication and data in your account.
Private mode appears to be a commitment not to retain transcripts, which presumably takes Google Firebase out of the loop, but it doesn’t seem to change how the audio is processed. As I understand it, using OpenAI’s commercial services always means remote-server processing.
Am I missing something?
On a related note, I really wish developers would start including a very clear statement somewhere on their websites that lays out how user data is stored and processed. E.g.: We store your data in [text files, a proprietary binary format, an on-device database, a remote database synced with an on-device database, etc.] and transcribe it using [remote servers operated by OpenAI/Google Gemini/etc., open-source local models from whoever, etc.] …
I realize too much detail might be a competitive issue, but even a brief statement would be great. Some people like the speed and reliability of database storage; I prefer plain text files. Some people can’t send data out to a remote server, and some don’t have reliable internet; others like the speed and features remote processing can bring. And of course, plenty of people wouldn’t care about any of this, but they wouldn’t have to read it.