§ Blog · Signal from the noise
Live tech feed from trusted creators.
Auto-aggregated articles, YouTube drops and Spotify podcast episodes from a curated allowlist — each with a short AI-written brief and a link straight back to the source.
85 items
- VIDEO · Unbox Therapy · May 2, 2026
The Most Convincing Art I've Seen...
The evolution of display technology persistently seeks a more natural, less intrusive visual experience, moving beyond the backlit glow that defines modern screens. This pursuit towards interfaces that mimic the organic qualities of paper has found a compelling demonstration in a recent Unbox Therapy video, where the focus shifts to an e-paper display that presents art with striking verisimilitude. The piece evaluates the capabilities of a particular e-ink poster, showcasing its ability to render static images in a manner that blurs the line between digital display and printed canvas. The video highlights a display technology developed by InkPoster, emphasizing several key attributes that distinguish it from conventional screens. Central to its appeal is the absence of glare, backlighting, and blue light, contributing to a viewing experience that is both easy on the eyes and remarkably lifelike. The demonstration reveals an image quality that, in certain contexts, could easily be mistaken for a framed print, particularly when viewed from a distance or without direct comparison to a traditional monitor. One notable technical specification mentioned is a battery life extending up to a year, indicating significant power efficiency for a device intended for static display. This exploration into advanced e-paper presents a pertinent case study for software, AI, and product builders considering ambient computing or passive display interfaces. Understanding how such displays can integrate into smart environments, provide dynamic information with minimal energy overhead, or even serve as adaptable digital decor, opens avenues for innovative product design. Builders should consider the implications of truly paper-like digital surfaces for user experience in contexts ranging from public signage to personal productivity tools, exploring how the unique constraints and benefits of e-ink might reshape interaction paradigms.
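For builders weighing the ambient-display idea above, the defining constraint is that e-paper consumes power only on refresh: the panel holds its image for free between updates. Below is a minimal Python sketch of a battery-friendly update loop under that assumption; the `fetch_artwork` and `render_to_epaper` callables are hypothetical placeholders, not anything from the video or InkPoster's actual firmware.

```python
import hashlib
import time

REFRESH_INTERVAL_S = 15 * 60  # e-paper holds an image with no power, so poll rarely

def content_hash(image_bytes: bytes) -> str:
    """Fingerprint the rendered frame so unchanged art never triggers a refresh."""
    return hashlib.sha256(image_bytes).hexdigest()

def ambient_display_loop(fetch_artwork, render_to_epaper):
    """Drive a hypothetical e-paper panel: redraw only when content changes.

    fetch_artwork: callable returning the current frame as bytes.
    render_to_epaper: callable pushing bytes to the panel (device-specific).
    """
    last_hash = None
    while True:
        frame = fetch_artwork()
        digest = content_hash(frame)
        if digest != last_hash:
            render_to_epaper(frame)  # the only power-hungry step
            last_hash = digest
        time.sleep(REFRESH_INTERVAL_S)  # panel draws roughly zero power in between
```

The same skip-if-unchanged pattern is what makes a claimed year of battery life plausible for a device whose content changes rarely.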
#Hardware · #Product · Read brief →
- VIDEO · Linus Tech Tips · May 1, 2026
Microsoft Has Promised to Fix Windows - WAN Show May 1, 2026
The pervasive sentiment among developers regarding the state of Windows as a platform, particularly concerning its user experience and development environment stability, has long been one of cautious optimism mixed with frustration. In a recent discussion, Linus Tech Tips tackles this persistent narrative, addressing Microsoft's repeated pledges to significantly overhaul and improve the foundational aspects of Windows. This recurring theme resonates deeply within the builder community, who depend on a reliable and efficient operating system for their daily work, and any movement toward a more coherent and functional Windows environment is always met with intense scrutiny. The episode unpacks various aspects of Microsoft's promises, delving into the historical context of similar commitments and the practical implications for users and developers. The segment thoughtfully explores the challenges Microsoft faces in unifying divergent design philosophies, addressing long-standing performance issues, and integrating new features without introducing additional complexity. Particular attention is paid to the implications for enterprise deployments and the developer toolchain, where inconsistencies can lead to significant workflow disruptions and increased overhead. One notable point of discussion revolves around the perceived friction between consumer-facing features and the stability required by professionals, highlighting the delicate balancing act Microsoft attempts to perform. The hosts also touch upon specific UI/UX elements that have drawn criticism, pointing out areas where Microsoft’s iterative changes have arguably fallen short of fundamental improvements. The conversation subtly implies that, despite good intentions, the sheer momentum and legacy code within Windows make radical transformation an exceedingly difficult endeavor. For software, AI, and product builders, the core takeaway from this discussion is a reinforced understanding of the operating system as a product itself, subject to the same pressures of legacy, user expectation, and feature creep as any other large-scale software project. It serves as a reminder to approach development on any platform with a keen awareness of its underlying infrastructure's strengths and weaknesses, and to factor platform-specific quirks into architectural decisions, rather than relying solely on manufacturer promises.
#Hardware · #Dev · Read brief →
- ARTICLE · MIT Technology Review — AI · May 1, 2026
Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models
In an era increasingly defined by the swift and often contentious evolution of artificial intelligence, the narratives spun by its most prominent figures hold significant sway, shaping both public perception and the direction of fundamental research. When those narratives diverge, or indeed, contradict earlier positions, it becomes crucial for builders and strategists to discern the underlying currents. A recent report from MIT Technology Review sheds light on such a divergence, specifically concerning the interwoven histories and competing visions of two of AI’s most influential architects. The article unpacks the initial phase of the legal and public spat between Elon Musk and Sam Altman, centering on Musk's assertions regarding the foundation and trajectory of OpenAI. It details Musk’s claim that he was misled about OpenAI’s foundational non-profit mission, alleging a pivot away from its original open-source, humanity-first objective towards a profit-driven model. The piece further highlights Musk's stark warnings about the existential risks of uncontrolled AI development, a consistent theme in his public commentary. Crucially, the report also touches upon Musk’s candid acknowledgement that xAI, his own venture, is in some capacity drawing insights from or even distilling models developed by OpenAI, thereby presenting a complex picture of collaboration and competition. Among the specific details making this narrative compelling are Musk’s direct accusation of being "duped," a strong word choice from a figure rarely short on conviction, suggesting a profound personal betrayal in his eyes. His reiterated warning that AI possesses the potential to "kill us all" underscores the high stakes he perceives, pushing the existential risk agenda to the forefront of the public discourse. The admission about xAI’s operational approach, while not fully detailed, adds a layer of practical irony to the ideological dispute, hinting at the pervasive influence of leading AI models across the competitive landscape. For software, AI, and product builders, this evolving saga offers a salient reminder that the intellectual and commercial battlefronts of AI are deeply intertwined with the personal philosophies and strategic maneuvers of its key players. Understanding these dynamics is not merely about following celebrity tech culture; it helps contextualize the motivations behind open-source commitments, the ethical guardrails (or lack thereof) in commercial development, and the potential for regulatory intervention. Builders should consider how these high-profile disputes could influence access to foundational models, talent acquisition, and the very narrative surrounding AI’s societal impact, prompting a re-evaluation of ethical frameworks and strategic alliances in their own work.
#AI · Read brief →
- VIDEO · Two Minute Papers · May 1, 2026
Sakana AI’s God Simulator Is Brilliant
The digital world continues its rapid convergence with biological principles, a phenomenon increasingly visible in AI’s developmental trajectory. This intersection, often discussed in theoretical terms, has now been given a compelling demonstration by Sakana AI, whose “God Simulator” project, recently showcased by Two Minute Papers, offers a tangible glimpse into emergent intelligence and self-organization within artificial environments. It represents a significant step beyond traditional, static AI modeling, illustrating the dynamic capabilities that arise when agents are allowed to evolve within carefully designed ecosystems. The core of Sakana AI's work is a digital environment where AI agents operate under conditions mimicking natural selection and evolution. Instead of prescribing behavior from the outset, the researchers enable agents to discover strategies for survival and cooperation. These agents self-organize within a limited virtual ecosystem, demonstrating abilities such as resource gathering and communication, with specialized roles emerging organically. One particularly striking observation is the spontaneous formation of complex social structures and division of labor among agents, a hallmark of evolved intelligence that emerges without explicit top-down programming. The system effectively runs a miniature, accelerated evolutionary process, revealing how simple rules can coalesce into sophisticated collective behaviors. This demonstration of emergent properties has profound implications for how we conceive of and build future AI systems. For software, AI, and product builders, the takeaway is clear: embracing evolutionary algorithms and ecological design principles in AI development could unlock unprecedented levels of adaptability and efficiency. Instead of attempting to hand-code every possible scenario, consider establishing robust environments where AI agents can learn, adapt, and innovate through trial and error, much like Sakana AI's simulator. This approach suggests a paradigm shift from prescriptive coding to facilitative environment design, potentially leading to more resilient and generally intelligent systems adaptable to unforeseen challenges.
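Sakana AI's system is far richer than this, but the core loop the brief describes, selection and mutation over agent strategies rather than hand-coded behavior, fits in a short illustrative Python sketch. Everything here is invented for illustration: the two-trait genome, the toy fitness function, and all parameters.

```python
import random

POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.1

def fitness(genome):
    """Toy 'ecosystem' pressure: reward genomes that balance foraging and sharing."""
    foraging, sharing = genome
    return foraging * 0.7 + sharing * 0.3 - abs(foraging - sharing) * 0.2

def mutate(genome):
    """Perturb each trait with small Gaussian noise, clipped at zero."""
    return tuple(max(0.0, g + random.gauss(0, MUTATION_RATE)) for g in genome)

# Random initial population: nothing about the final behavior is prescribed.
population = [(random.random(), random.random()) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: the fitter half survives and reproduces with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(f"after {GENERATIONS} generations: best strategy {best}, fitness {fitness(best):.3f}")
```

Even at this toy scale, the strategy the loop converges on is discovered, not specified, which is the design principle the brief's takeaway points at.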
#AI · Read brief →
- ARTICLE · MIT Technology Review — AI · May 1, 2026
Cyber-Insecurity in the AI Era
The accelerating integration of artificial intelligence into critical infrastructure and enterprise systems presents a contemporary paradox: a technology designed to enhance efficiency and security also introduces novel vulnerabilities. As organizations worldwide race to adopt AI solutions, the downstream implications for cybersecurity are rapidly evolving from theoretical concerns to immediate, pressing challenges. This shift demands a re-evaluation of established security paradigms and a proactive approach to understanding the unique attack surfaces AI creates. A recent MIT Technology Review piece dissects this emerging landscape, exploring how the very attributes that make AI powerful—its complexity, data dependencies, and autonomous decision-making—also expose new vectors for exploitation. The article delves into the potential for adversarial AI to manipulate data, poison models, or even subvert AI-powered defensive systems, essentially turning an organization's intelligence against itself. It highlights how the opaqueness of many AI models can complicate incident response, making it difficult to trace the origin of a compromise or fully understand its scope. The piece emphasizes the concept of "AI-native" threats, moving beyond traditional software vulnerabilities to encompass manipulation of training data, model inversion attacks to extract sensitive information, and evasion techniques that trick AI systems into misclassifying malicious inputs. One notable example discussed is the potential for small, imperceptible perturbations to images or audio to completely alter an AI's interpretation, posing significant risks in areas like autonomous vehicles or surveillance. It also touches upon the supply chain risks associated with pre-trained models and third-party AI services, where a compromise upstream could propagate silently through numerous deployments. For software, AI, and product builders, the central takeaway is the imperative to integrate security considerations throughout the entire AI lifecycle, from data collection and model training to deployment and continuous monitoring. This necessitates a shift from treating AI as a black box to understanding its internal mechanics, proactively identifying potential vulnerabilities, and developing robust validation and verification processes. Builders should consider adversarial testing as a standard practice, not an afterthought, and champion transparency in AI development to foster more resilient and trustworthy systems against threats that are no longer theoretical.
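The "imperceptible perturbation" risk the article describes is concrete enough to test against your own models today. Below is a minimal sketch of the fast gradient sign method (FGSM), a standard adversarial-testing baseline rather than anything specific to the article, written with PyTorch; the `model`, `images`, and `labels` names in the usage note are placeholders.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.01):
    """Fast Gradient Sign Method: nudge each input value by +/- epsilon in the
    direction that most increases the loss. A small epsilon keeps the change
    visually imperceptible while often flipping the model's prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0, 1)  # stay within the valid image range
    return x_adv.detach()

# Usage sketch (model, images, labels assumed to exist):
# adv = fgsm_attack(model, torch.nn.CrossEntropyLoss(), images, labels)
# agreement = (model(images).argmax(1) == model(adv).argmax(1)).float().mean()
# print(f"predictions unchanged under attack: {agreement:.1%}")
```

Running a check like this as part of CI is one practical way to make adversarial testing the standard practice the article calls for, rather than an afterthought.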
#AI · Read brief →
- ARTICLE · MIT Technology Review — AI · May 1, 2026
Operationalizing AI for Scale and Sovereignty
As artificial intelligence increasingly moves from experimental deployments to critical infrastructure, the challenges of managing its lifecycle at scale become paramount. The prevailing conversation often focuses on model development or ethical implications, yet the practicalities of integrating AI into complex operational environments, particularly with an eye toward data governance and national interests, are now demanding equal, if not greater, attention. This shift underscores a maturation in the AI landscape, where theoretical possibilities are being confronted by real-world logistical and geopolitical necessities. A recent piece from MIT Technology Review examines this growing imperative, delving into the intricacies of operationalizing AI to achieve both widespread deployment and adherence to sovereign data requirements. The article highlights that scaling AI is not merely about compute power, but involves sophisticated orchestration of data pipelines, model versioning, and continuous monitoring within defined jurisdictional boundaries. It discusses how enterprises and governments alike are grappling with the need to maintain control over sensitive data and AI models, particularly as cross-border data flows become more common and regulatory frameworks like GDPR or evolving national data protection laws introduce new constraints. The underlying argument suggests that a robust operational framework is essential to navigate these dual demands effectively. The piece makes a compelling case by detailing how organizations are building custom AI infrastructures, sometimes even eschewing generalized cloud solutions, to meet specific sovereignty requirements. It cites examples where large-scale government or industrial AI projects are prioritizing on-premise or carefully localized cloud deployments to ensure data residency and minimize exposure to foreign legal jurisdictions. This often involves developing bespoke MLOps platforms that integrate with existing IT ecosystems, creating a complex, multi-layered approach to AI governance. The discussions underscore that data sovereignty is not just a regulatory hurdle, but a strategic consideration shaping architectural decisions and investment in AI technology. For a software, AI, or product builder, the core takeaway is the growing strategic importance of MLOps and infrastructure design beyond mere performance metrics. The confluence of scaling challenges and data sovereignty concerns necessitates a deeper understanding of regulatory landscapes and their impact on system architecture. Future AI initiatives will increasingly demand solutions that are not only efficient and accurate but also demonstrably compliant and resilient within specific legal and geographical contexts. It suggests exploring frameworks and tools that facilitate granular control over data provenance, model deployment locations, and real-time auditing capabilities for AI systems.
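One way to make the sovereignty constraint operational is a fail-closed policy check before any model deployment. The sketch below is purely illustrative: the region names, data classifications, and policy table are hypothetical assumptions, not drawn from the article.

```python
from dataclasses import dataclass

# Hypothetical residency policy: which regions may host models trained
# on data from a given jurisdiction.
RESIDENCY_POLICY = {
    "eu_personal_data": {"eu-central-1", "eu-west-1"},
    "us_federal": {"us-gov-west-1"},
    "unrestricted": {"eu-central-1", "eu-west-1", "us-east-1", "ap-south-1"},
}

@dataclass
class ModelArtifact:
    name: str
    data_classification: str  # provenance label attached at training time

def assert_deployable(model: ModelArtifact, target_region: str) -> None:
    """Fail closed: block deployment unless the target region is explicitly
    allowed for the model's training-data classification."""
    allowed = RESIDENCY_POLICY.get(model.data_classification, set())
    if target_region not in allowed:
        raise PermissionError(
            f"{model.name} (data: {model.data_classification}) "
            f"may not be deployed to {target_region}; allowed: {sorted(allowed)}"
        )

assert_deployable(ModelArtifact("churn-model-v3", "eu_personal_data"), "eu-west-1")   # passes
# assert_deployable(ModelArtifact("churn-model-v3", "eu_personal_data"), "us-east-1")  # raises
```

The important design choice is attaching the classification to the artifact at training time, so residency is enforced wherever the model travels rather than by convention.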
#AI · Read brief →
- VIDEO · Unbox Therapy · May 1, 2026
DJI Keeps Going…
As wireless audio technology continues its rapid evolution, particularly in the prosumer and content creation spaces, understanding the nuances of new hardware releases becomes increasingly critical for developers and product builders tracking market trends. Unbox Therapy, in a recent video, takes a close look at the DJI Mic Mini 2, dissecting its features and implications for portable audio capture. The review positions this device within a competitive landscape, emphasizing how incremental improvements in miniaturization and connectivity shape user experience. The core of the video centers on a detailed examination of the DJI Mic Mini 2’s design and functionality. It highlights the device's compact footprint, particularly noting the integrated charging case that also serves as a pairing station, a design choice often seen in wireless earbuds now adapted for microphones. A key point of inquiry is the microphone's stated 14-hour battery life when combined with its charging case, suggesting an emphasis on extended use for creators on the go. The discussion also touches upon the 2.4 GHz digital transmission, which promises stable audio delivery over a specified range, a crucial factor for ensuring reliable recordings in varied environments. Further scrutiny is given to user-centric features, such as the direct USB-C and Lightning adapters that eliminate additional cables, streamlining the connection process to smartphones. This plug-and-play approach underscores a broader industry trend towards reducing setup complexity. The demonstration also covers specific audio quality tests, allowing viewers to assess fidelity in different scenarios, moving beyond marketing claims to practical performance. The video effectively communicates the product's value proposition through direct observation and real-world testing. For software, AI, and product builders, this review offers insights beyond just the hardware. The focus on seamless integration, extended battery life, and compact design signals critical user demands in portable tech. Consider how these trends in wireless audio, particularly concerning power efficiency and direct device compatibility, could inform future product roadmaps, whether designing companion applications, optimizing embedded AI for audio processing, or developing entirely new portable peripherals. Analyzing such market entries helps calibrate understanding of current user expectations and the direction of consumer electronics.
#Hardware · #Product · Read brief →
- PODCAST · Hard Fork · May 1, 2026
OpenAI’s Big Reset + A.I. in the Doctor’s Office + Talkie, a pre-1930s LLM
The landscape of artificial intelligence is in constant flux, marked by rapid strategic shifts from its leading players and the emergence of specialized applications. Understanding these movements is critical for anyone building in the space, as they dictate not only technological trajectories but also market opportunities and ethical considerations. This week's Hard Fork episode delves into several pivotal developments that offer insight into the current state and near future of AI. The program examines OpenAI's recent announcements, including a refined partnership with Microsoft and a renewed focus on securing computational resources. The hosts dissect these changes, framing them within the context of OpenAI's ambitious growth strategy, ongoing legal disputes with figures like Elon Musk, and reported internal pressures concerning financial performance. This segment highlights the multifaceted challenges faced by a company at the forefront of AI development, balancing innovation with complex business and legal entanglements. Additionally, Dr. Adam Rodman, an internal medicine physician and assistant professor at Harvard Medical School, returns to discuss current applications of AI in clinical settings. He outlines how artificial intelligence is beginning to reshape diagnostic processes and patient treatment protocols, providing a grounded perspective on AI's impact in healthcare. Further expanding on AI's diverse applications, the episode introduces "Talkie," an experimental large language model trained exclusively on texts predating 1930. One of its creators discusses the project's ambition to explore whether such a historically constrained dataset can offer predictive capabilities or novel insights, demonstrating the continuous experimentation with LLM architectures and training data. This juxtaposes the cutting-edge commercial strategies of OpenAI with more academic, exploratory endeavors in AI. For software, AI, and product builders, this episode underscores the dynamic nature of the AI industry. The strategic maneuvering of a major player like OpenAI provides a blueprint for adapting to rapid technological evolution and market pressures. Dr. Rodman's insights offer a practical look at where AI is making tangible differences in critical sectors, and the "Talkie" project serves as a reminder to consider the foundational data and its implications for any AI system's capabilities and biases. The takeaway is to maintain a broad awareness of both commercial and exploratory AI initiatives, as each contributes to the overall direction of the field.
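On the "Talkie" experiment, the episode doesn't describe the training pipeline, but the data-curation step it implies, restricting a corpus by publication date, is nearly a one-liner in modern tooling. A hedged sketch using the Hugging Face `datasets` library; the dataset name and `year` field are hypothetical stand-ins.

```python
from datasets import load_dataset

# Hypothetical corpus with a per-document publication year.
corpus = load_dataset("example/historical-texts", split="train")

# Keep only documents published before 1930, mirroring Talkie's constraint.
pre_1930 = corpus.filter(lambda doc: doc["year"] < 1930)

print(f"{len(pre_1930)} of {len(corpus)} documents predate 1930")
```

The hard part is not the filter but the metadata: reliable publication dates and clean provenance are exactly the "foundational data" question the episode raises.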
#AI · #Product · Read brief →
- PODCAST · a16z Podcast · May 1, 2026
Balaji and Taylor Lorenz on AI and Media
In an era where information integrity is increasingly challenged by both intentional obfuscation and the rapid proliferation of synthetic media, understanding the evolving dynamics between artificial intelligence and the media landscape is paramount for anyone building the next generation of digital products. The a16z Podcast recently brought together Balaji Srinivasan and Taylor Lorenz, in conversation with Theo Jaffee, for a nuanced discussion on how AI is fundamentally reshaping truth, trust, and communication online. This episode delves into a critical examination of the current information environment, where traditional gatekeepers contend with autonomous content generation and fragmented audiences. The conversation revisits ongoing tensions surrounding technology's influence on media power structures, building upon prior public disagreements between Lorenz and Srinivasan to frame a substantive dialogue. They explore the decline of established information systems and the rise of AI-generated content, contemplating the implications for verifying identity and truth in a digital space saturated with both authentic and artificial narratives. The episode highlights divergent perspectives on the future of media, contrasting concepts like decentralized "webs of trust" and cryptographic verification, championed by Srinivasan, with concerns for journalistic accountability and privacy, often voiced by Lorenz. The moderator Theo Jaffee adeptly navigates these contrasting viewpoints, drawing out the core implications for our increasingly AI-driven information ecosystem. A notable point of convergence, despite their differing frameworks, is the shared acknowledgement of the urgent need for new mechanisms to discern verifiable information. Srinivasan’s consistent call for cryptographic solutions to establish digital provenance, and Lorenz’s emphasis on the societal impact of misinformation, underscore a common threat to public discourse. The very structure of the conversation, featuring two figures known for their distinct perspectives on technology and media, serves to illustrate the complexity of the issues at hand, moving beyond typical echo chambers to expose the foundational challenges facing digital builders. For software, AI, and product builders, the central takeaway from this discussion is the imperative to embed robust mechanisms for truth and identity verification into their platforms from the earliest stages of development. Considering novel approaches to content provenance, exploring decentralized identity solutions, and building tools that enable critical discernment rather than simply optimizing for engagement, will be crucial. The insights shared by Lorenz and Srinivasan offer a vital framing for anticipating both the opportunities and the ethical responsibilities inherent in constructing the next wave of internet-native applications.
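Srinivasan's cryptographic-verification argument maps onto a well-understood primitive: sign content at publication, verify before trusting. Here is a minimal Python sketch using the `cryptography` package's Ed25519 support; key distribution and binding keys to real identities, the genuinely hard parts of any provenance scheme, are omitted.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair once, then sign each piece of content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Original reporting, published 2026-05-01."
signature = private_key.sign(article)

# Reader side: verify the bytes against the publisher's known public key.
try:
    public_key.verify(signature, article)
    print("provenance verified: content is unmodified and from this key")
except InvalidSignature:
    print("verification failed: content altered or signed by someone else")
```

A signature proves integrity and key ownership, not truthfulness, which is roughly where the Srinivasan and Lorenz positions diverge: one treats provenance as the foundation, the other points out everything it still leaves unsolved.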
#AI · #Product · #Dev · Read brief →
- ARTICLE · MIT Technology Review — AI · May 1, 2026
A new US phone network for Christians aims to block porn and gender-related content
As societal conversations around digital content moderation and algorithmic filtering intensify, a new initiative in the United States highlights the ongoing tension between individual access and curated online experiences. This development underscores how groups are increasingly seeking to exert control over the digital consumption of their communities, challenging established norms of internet openness. An MIT Technology Review article details a forthcoming cellular network designed specifically for Christian users. This service, launching on the T-Mobile network, aims to proactively block categories of content deemed inappropriate by its founders, including pornography and material related to gender identity. The article notes that this filtering extends beyond simple blacklists, suggesting an intention to implement more sophisticated AI-driven moderation to curate the digital landscape for its subscribers. This move represents a granular application of content restriction, moving beyond household-level parental controls to an infrastructure-level service offering. The announcement of this network raises significant questions for software, AI, and product builders. The core challenge lies in the algorithmic implementation of such broad content restrictions, particularly when dealing with nuanced or evolving topics like gender-related content. Builders should consider the ethical implications of designing and deploying systems capable of filtering at this scale, the potential for unintended censorship, and the technical complexities of reliably identifying and blocking subjective categories of information. Understanding how platforms and infrastructure providers may be increasingly called upon to accommodate or facilitate such filtered access is becoming a critical aspect of responsible technology development.
#AI · Read brief →
- PODCAST · The Vergecast · May 1, 2026
Elon Musk had a bad week in court
The convergence of ambition, technology, and legal maneuvering has placed the future trajectory of AI in the hands of courts and formidable personalities. The recent legal skirmish between Elon Musk and OpenAI’s Sam Altman serves as a potent reminder that the foundational ethos of artificial intelligence development remains a fiercely contested battleground. This latest episode of The Vergecast delves into the origins and implications of this high-profile lawsuit, exploring how a protracted disagreement concerning the direction and commercialization of AI has escalated into a formal legal challenge. The Verge’s Liz Lopatto unpacks the complex narrative, outlining how Musk's initial philanthropic vision for OpenAI clashed with its eventual commercial pivot under Altman's leadership. Despite widespread legal opinion suggesting Musk faces an uphill battle to win his case, Lopatto astutely observes that even a losing verdict could grant him a form of victory by forcing greater transparency or catalyzing a public re-evaluation of OpenAI’s governance. Separately, Sean Hollister provides a look at Framework’s modular computing advancements, particularly their latest laptop, which emphasizes user-upgradability and repairability—a counter-narrative to the prevailing trend of sealed-unit electronics. He also addresses the burgeoning interest in smaller form factor PCs, including devices like the Surface Go, pondering their potential resurgence in a market increasingly dominated by mobile devices. The legal entanglement between Musk and Altman underscores the critical need for developers to consider not only the technical feasibility of their innovations but also the long-term ethical frameworks, governance models, and corporate structures that will shape their impact. For product builders, the discussion around Framework’s approach offers a direct challenge to planned obsolescence, prompting thought on how truly modular and repairable designs could differentiate products in a competitive landscape. The evolving saga of AI’s direction, coupled with hardware discussions, highlights a broader tension between rapid innovation and sustainable, transparent development.
#Hardware · #Product · #AI · Read brief →
- PODCAST · Waveform: The MKBHD Podcast · May 1, 2026
Are These Apple’s Next Products?
In an era where the lines between content creation, product development, and audience engagement are increasingly blurred, understanding the mechanics of a successful media operation like MKBHD's becomes essential. This intersection reveals key insights into how sustained digital presence translates into a brand, a community, and ultimately, a product in itself. The Waveform podcast, known for its deep dives into consumer technology, recently offered a rare glimpse behind the curtain, addressing audience questions directly in a bonus episode that transcends typical Q&A formats. This particular episode, titled "Your 22 Questions for Waveform Answered!", deviates from the usual tech review and discussion format to focus squarely on listener inquiries. The hosts, including Marques Brownlee, Andrew Manganelli, and David Imel, fielded questions covering their creative process, equipment choices, and even personal insights that inform their professional output. They highlighted the challenge of sifting through over 1600 submitted questions, providing a tangible metric for the scale of their community interaction. One notable segment discussed their methodology for selecting topics and maintaining editorial independence amidst a rapidly evolving tech landscape, emphasizing a commitment to audience relevance over trending buzzwords. Another interesting revelation touched upon the iterative development of their video and podcast production workflows, detailing how feedback loops from their internal team and external audience shape their content strategy. The discussion also touched on the subtle but significant role of specialized roles within their team, moving beyond just on-screen talent to acknowledge the contributions of producers like Ellis. This emphasis on a holistic team effort underscores the complexity involved in producing high-quality content at scale. The candid responses peeled back layers on their approach to tech reviews, clarifying their criteria for evaluating new gadgets and software – an approach that prioritizes user experience beyond mere specifications. For software, AI, and product builders, this episode offers a valuable lesson in audience-centric design and the operational realities of building and maintaining a prominent digital platform. It emphasizes that truly engaging with an audience means not just broadcasting information, but actively soliciting and responding to feedback, turning passive consumption into active community participation. Consider how such direct engagement mechanisms could be integrated into your own product development lifecycle to foster a more invested user base and inform future iterations.
#Hardware · #Product · Read brief →
§ Recommended hub
Where we get our signal.
YouTube channels
Marques Brownlee
@mkbhd
Quality tech videos. Phones, EVs, smart home and the occasional studio behind-the-scenes.
Subscribe →
Linus Tech Tips
@LinusTechTips
Deep PC builds, hardware reviews, and lab benchmarks.
Subscribe →
Dave2D
@Dave2D
Crisp laptop and creator-rig reviews with strong opinions.
Subscribe →
Unbox Therapy
@unboxtherapy
First looks at the gadgets everyone's about to be talking about.
Subscribe →
Fireship
@Fireship
100-second explainers on every framework, AI tool and dev trend.
Subscribe →
Lex Fridman
@lexfridman
Long-form conversations with AI researchers, founders and scientists.
Subscribe →
Two Minute Papers
@TwoMinutePapers
Bite-sized walkthroughs of fresh AI and graphics papers.
Subscribe →
Yannic Kilcher
@YannicKilcher
Paper deep-dives — what actually works under the hood.
Subscribe →
Podcasts
The Vergecast
The Verge
The week in tech, hosted by Nilay Patel and the Verge editors.
Listen on Spotify →
Waveform: The MKBHD Podcast
Marques Brownlee & Andrew Manganelli
MKBHD and team go long on the gadgets and stories of the week.
Listen on Spotify →
TechStuff
iHeartPodcasts
How tech got here — accessible deep-dives on companies, gadgets and ideas.
Listen on Spotify →
Lex Fridman Podcast
Lex Fridman
Long conversations with the people building AI, robotics and the future.
Listen on Spotify →
a16z Podcast
Andreessen Horowitz
Conversations on tech, trends and the people building them.
Listen on Spotify →
Hard Fork
Kevin Roose & Casey Newton (NYT)
The modern successor to Recode Decode — weekly takes on AI, crypto and the platforms.
Listen on Spotify →