Are We Safe in This World? | ✉️ #50
Hey! 👋
Recently, I discovered Perplexity thanks to a recommendation. Initially, I thought, "Oh, another tool trying to replicate what ChatGPT does," and for the next few days I didn't give it much attention. But after seeing my business partner repeatedly demonstrate its capabilities, I decided to give it a try. To my surprise, I was impressed. The primary distinction between Perplexity and other large language model (LLM) chatbots is that it links every output to a specific web source. This makes it more than a standard generative AI; it works as a deep-search generative AI for the internet. The results are remarkable. For instance, if you ask about a recent event, such as the issue between Argentina and Morocco at the Olympics, Perplexity provides a comprehensive answer.
Each piece of information is sourced from a reputable publication and presented as the truth. But this raises a critical question: what is the true source of truth? Who ensures that these sources are unbiased or represent a balanced view? During my tests with Spanish news, the bias became evident: 80% of the information on any topic came from the same newspaper.
If these tools continue to gain power, will we be well informed, or informed from a single perspective only? That is concerning, and I haven't even touched on what happens when you enter the names of non-famous individuals. The way these tools scrape everything on the internet and funnel it in one direction might dull our minds and stop us from thinking critically.
I love AI tools and what is happening in this field, but one thing is clear: we are not safe.
What We've Shared
On our YouTube channel we take a look at a snippet from the previous episode of DevOps Accents:
Meanwhile, in episode 42 of DevOps Accents, our guest is Ara Pulido, Staff Developer Advocate at Datadog:
And on the website we have two new articles:
Cloudflare for SaaS: Resolving AWS Connection Issues with Multiple CNAMEs, DNS, and SSL Capabilities with Pablo
And Kirill answers the question: Is AWS AppRunner the Worst Way to Run Containers?
What We've Discovered
Streamlining Terraform Module Management with GitHub Actions, Semantic Releases, and Terraform Docs: A simple setup that covers everything you need to properly manage releases of your Terraform modules, whether internal or public ones.
How we improved push processing on GitHub: Beautiful refactoring of the monolithic way GitHub handles background jobs for each Push event. Introducing Kafka as a safety buffer to then schedule new independent events is smart.
Using LLMs to Generate Terraform Code: tl;dr: LLMs are only mildly useful for DevOps work.
A random reminder
Did you always want an mkdev t-shirt, but our white one was too much for you? Check out our black neon collection; it might suit you better!
The 51st mkdev dispatch will arrive on Friday, August 16th. See you next time!