Voice Mode and Dictation | ✍️ #71


Hey! 👋
I've recently discovered a new tool that promises instantaneous voice-to-text transcription. The idea is that you can reach a much higher WPM by speaking things out loud instead of typing them. It really is fast, especially when I speak English, but it also made me realize how different the output of my thinking process is when I dictate versus when I write things down.
It's basically the same problem I ran into when I started making videos for our YouTube channel. It's nearly impossible to produce a good script, or any good text, for a video by just improvising. In my case, the only way to make a video really work as an explanation is to write the script first and then read it while recording the voiceover.
So there is no way around thinking deeply and analyzing the problem before putting it into text, and that is what makes dictation so problematic. It also makes voice mode in various AI apps problematic: it's hard to get good output from something like ChatGPT if you just talk to it without thinking first. Then again, it could simply be a difference in how people communicate. Perhaps my brain is wired to type things out, while other people's brains work better when they talk things through.
I don't have any particular lesson or advice distilled from my experience with dictation, but I do keep using it, simply because it's a much faster way to get text out of my head. For longer texts, though, I still type, refining and tinkering with the wording as I go. I hope to reach the stage where I can dictate something on the first attempt and be satisfied with the result when I read it afterwards. It definitely took me several attempts to dictate this dispatch intro, and I'm still not sure I got it right.
I wonder how it works for you: do you use dictation a lot, and do you communicate with AI tools mostly via voice mode, or do you just type?
What We've Shared
Vibe Engineering: How to Create Context for Your AI Agent. How Vibe Engineering differs from basic AI-assisted coding, and how teams can effectively collaborate with AI agents by embedding product context and maintaining alignment with the codebase: check out this segment from episode 61 of DevOps.
Dockerless Course, Lesson 4: Container Bundle Deep Dive. In this lesson, Kirill Shirinkin shows how to unpack a container image into a runnable bundle using umoci. It's a hands-on look at what makes containers work, without Docker; a quick sketch of the flow follows this list.
Understanding AWS Data Transfer Costs: Worried about surprise AWS data transfer costs? Kirill Shirinkin breaks down where those charges come from and shares smart strategies to avoid them.
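If you want a taste of the workflow before watching the lesson, here is a minimal sketch of the fetch-and-unpack flow, wrapped in Python for convenience. The image name and directory paths are illustrative assumptions, not the lesson's exact commands.

```python
import subprocess

def run(cmd: list[str]) -> None:
    """Echo and execute a command, failing loudly on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Fetch an image into a local OCI layout directory named "alpine".
run(["skopeo", "copy",
     "docker://docker.io/library/alpine:latest", "oci:alpine:latest"])

# 2. Unpack the OCI image into a runnable bundle: rootfs/ plus config.json.
#    (Outside a root shell you may need umoci's --rootless flag.)
run(["umoci", "unpack", "--image", "alpine:latest", "bundle"])

# 3. Any OCI runtime can now start the bundle,
#    e.g. `runc run --bundle bundle mycontainer`.
```

The bundle directory is just files on disk, which is the point of the lesson: a container is a root filesystem plus a config, not Docker magic.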
What We've Discovered
How I reduced $10000 Monthly AWS Glue Bill to $400 using Airflow: Always be skeptical of a 96% cost reduction achieved by simply changing the tool. Both Glue and Airflow are great, and so are the technical details the author shares, but keep in mind that there is always a bit more behind a big cost reduction than just switching the job executor.
Getting Forked by Microsoft: To our shame, we only learned about Spegel from this blog post. Then again, Microsoft is the one who should be ashamed, while the rest of us should definitely check out and play around with Spegel.
Replacing CVE: Even if you don't agree with the author's proposal, it's at the very least a good starting point for learning about the issues with the current CVE setup we all live with.
Premature optimization: Mental models for detecting and avoiding premature optimizations. As a bonus, the author shares his thoughts on mature optimizations.
Hacking the Postgres wire protocol: Nerd out and learn how the Postgres wire protocol works. One of the enterprise use cases: hand-rolling a log of every query that runs, for audit purposes and more. A minimal sketch of the protocol's opening handshake follows below.
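To make the protocol feel less abstract, here is a minimal sketch of its very first exchange: sending a StartupMessage and reading the server's Authentication reply. This is not the article's code; the host, port, user, and database are assumptions for a local test instance.

```python
import socket
import struct

# Assumed local Postgres instance; adjust for your setup.
HOST, PORT = "localhost", 5432
USER, DATABASE = "postgres", "postgres"

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes (a single recv may return fewer)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

# StartupMessage body: int32 protocol version 3.0 (196608), then
# null-terminated key/value pairs, then a final null byte.
params = f"user\x00{USER}\x00database\x00{DATABASE}\x00\x00".encode()
body = struct.pack("!i", 196608) + params

with socket.create_connection((HOST, PORT)) as sock:
    # Every length field is a big-endian int32 that includes itself.
    sock.sendall(struct.pack("!i", len(body) + 4) + body)

    # The server replies with an Authentication message: 1-byte tag 'R',
    # int32 length, int32 auth code (0 = Ok, 10 = SASL/SCRAM, ...).
    tag = recv_exactly(sock, 1)
    (length,) = struct.unpack("!i", recv_exactly(sock, 4))
    payload = recv_exactly(sock, length - 4)
    (auth_code,) = struct.unpack("!i", payload[:4])
    print(f"tag={tag!r} auth_code={auth_code}")
```

Everything after this point, authentication, queries, result rows, uses the same tag-length-payload framing, which is exactly where a proxy that logs every query would hook in.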
The 72nd mkdev dispatch will arrive on Friday, June 20th. See you next time!