DevOps News Selection, Real-Life Automation and AI-Generated Voices | 🎙️#35
How do you navigate your professional news flow? Should you automate that one lady who sells you bread and beer? Can Kirill replace himself with AI? And does size matter in LLMs? Leo, Pablo and Kirill are together again for another chat.
- mknews and balancing our news selection process;
- Did Amazon fail with its automated stores?
- Are there parts of life better left without automation?
- Is Kirill human?
- A new LLM from Apple;
- LLMs and context size.
You can listen to episode 35 of DevOps Accents on Spotify, or right now:
In the digital age, where the line between information and noise is increasingly blurred, mknews stands as a beacon for those seeking reliable and impactful news. mknews is a biweekly news show dedicated to delivering meticulously curated news segments, emphasizing quality over quantity.
Curating News with Precision and Care
mknews differentiates itself by the rigorous process through which topics are selected. Leo, one of the co-founders of mkdev, expressed his initial skepticism about news digests. However, mknews has won him over due to its commitment to delivering news that is not only relevant but deeply vetted for accuracy. This is particularly notable as Leo typically relies on platforms like Reddit for his news, where the rapidity of news flow often compromises depth and verification.
The problem I described before is that you have these 50 newsletters, et cetera, to find important news. Now we have to go through all these newsletters to select the important news. It's hard to understand what's important because, if you look at last week, during the three days of Google Cloud Next announcements, Google announced a hundred-plus things. How do you compress that into this news episode without compromising all the other news that happened in AWS or other clouds? It's difficult, and we're probably learning a lot about how to select the good news. Some news is obvious, like the OpenTofu against HashiCorp case, which was all over the news outlets. It was easy to pick that one. Others, like the retroactive application of AWS cost allocation tags, are among the 200 things released in any given week, but I picked it because I think it's super important for FinOps. That's something I find really useful for our customers and all the companies that need to start doing FinOps properly. So then we had the news episode, and this balance still needs to be found, because it would be foolish to have episodes containing only the biggest news. — Kirill Shirinkin
The Dynamics of Amazon Stores: A Mixed Review
The debate surrounding Amazon's venture into physical retail stores has drawn mixed reviews from industry observers and our hosts. Pablo, one of the co-founders, offers a more nuanced view, suggesting that while Amazon's physical stores haven't replicated the phenomenal success of its e-commerce platform, they shouldn't be dismissed as a failure. He points out that these stores have achieved some strategic wins, such as integrating advanced technologies and gathering unique consumer data that could be valuable for future retail innovations. The venture has been costly, however: the extensive use of technology, like the numerous cameras meant to streamline operations, paradoxically ended up increasing the need for human oversight.
On the other hand, Kirill argues that from a technological and financial perspective, the initiative must be considered a failure. Despite the substantial investment in a high-tech solution for a cashier-less shopping experience, Amazon has struggled to reduce the human labor needed to monitor and correct the systems, leading to sustained high costs without the anticipated efficiency gains. Furthermore, Kirill mentions a privacy concern, noting that the extensive data collection involved has been controversial, potentially alienating customers more comfortable with traditional shopping environments.
I think, again, that the only reason was that you wanted to be differentiated from the competition, and you tried to make the place appealing because, in the end, it started in California. Everyone in California wants to be cooler than the previous day. So they need to do something as cool as yesterday because, if not, they are not happy. And it's super cool to go there, grab a bottle of water or a soda, and then leave the store. — Pablo Inigo Sanchez
Cultural Customer Service: Canary Islands vs. Germany
The conversation also touched on the cultural aspects of customer service, contrasting the approach in the Canary Islands with that in Germany. In the Canary Islands, interactions with store employees are often personal and warm, reflecting the laid-back local culture. In Germany, however, the emphasis is on efficiency, resulting in interactions that can sometimes seem impersonal.
The Frontier of Voice and Video Generation
Kirill shared insights into his experiments with voice generation technologies, which are reshaping content creation. This segued into a broader discussion of voice and video generation: their potential to streamline content production, the ethical implications, and the need for careful implementation.
The first thing that came to my mind is that I don't need Pablo and Kirill to create another podcast episode. I just use this tool, put in whatever I want you to say, and write the entire script. Then, with artificial Pablo and artificial Kirill, I don't even need to write anything else. I can go to GPT and have the script ready. — Leo Suschev
New Developments in AI: Apple’s LLM and Concerns About Context Sizes
The discussion also touched on recent developments in artificial intelligence, particularly Apple's new large language model (LLM), which has stirred interest and speculation in the tech community. Pablo raised questions about the capabilities of this new model, especially in light of claims that it surpasses performance benchmarks set by OpenAI's GPT-4. The conversation delved into what these advancements might mean for practical applications, such as enhanced interaction with smart devices or more intuitive user interfaces, and how they position Apple in the competitive landscape of AI technologies.
Kirill, for his part, brought a technical perspective to the conversation, focusing on the importance of context sizes in LLMs. He explained that larger context sizes allow these models to retain and process more information at once, which significantly improves their ability to generate coherent and contextually appropriate responses. This capability is crucial for tasks that involve lengthy or complex documents, where maintaining thematic consistency is key. Kirill's insight suggests that while the raw computational improvements are noteworthy, the real-world utility of these models will depend greatly on their ability to handle large contexts effectively, broadening their applicability in professional and everyday settings.
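To make the context-size point a bit more concrete, here is a minimal Python sketch (not something from the episode itself) that counts tokens with the tiktoken library and checks whether a document, plus room for the model's reply, fits into a given context window; the model names and window sizes are purely illustrative assumptions.

```python
# A rough illustration of why context size matters: count a document's tokens
# and decide whether it fits into a model's context window in one request.
# The model names and window sizes below are illustrative assumptions.
import tiktoken

CONTEXT_WINDOWS = {
    "small-model": 4_096,     # hypothetical model with a small window
    "large-model": 128_000,   # hypothetical model with a large window
}

ENCODING = tiktoken.get_encoding("cl100k_base")  # a widely used tokenizer encoding


def fits_in_context(text: str, model: str, reserved_for_reply: int = 1_024) -> bool:
    """Return True if the text plus room for the reply fits the model's window."""
    n_tokens = len(ENCODING.encode(text))
    return n_tokens + reserved_for_reply <= CONTEXT_WINDOWS[model]


def chunk_for_model(text: str, model: str, overlap: int = 200) -> list[str]:
    """Naive fallback: split an oversized document into overlapping token chunks."""
    tokens = ENCODING.encode(text)
    window = CONTEXT_WINDOWS[model] - 1_024      # leave room for the reply
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(ENCODING.decode(tokens[start:start + window]))
        start += window - overlap                # overlap preserves some shared context
    return chunks


if __name__ == "__main__":
    document = "A very long meeting transcript. " * 20_000  # stand-in for a real document
    if fits_in_context(document, "small-model"):
        print("Fits in one request")
    else:
        print(f"Needs {len(chunk_for_model(document, 'small-model'))} chunks")
```

The sketch shows the trade-off Kirill describes: with a small window the document has to be split into overlapping chunks, and thematic consistency across chunks becomes the application's problem, while a large enough window lets the model see the whole document at once.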
Podcast editing: Mila Jones / milajonesproduction@gmail.com