Essential insights from Hacker News discussions

AlphaGenome: AI for better understanding the genome

Discussion on Google's Biological Science Initiatives and Leadership

Pride in Foresight and Execution

A central theme is the discussion around the original poster's (dekhn) statement about their long-held ideas in biological sciences finally being realized by DeepMind. This sparked a debate about the tone and interpretation of such a statement.

  • Pride and Arrogance: Some users perceived dekhn's comment as arrogant and pretentious, suggesting it is unusual to claim ownership of ideas being implemented by others.

    > "I'm sure you're a smart person, and probably had super novel ideas but your reply comes across as super arrogant / pretentious. Most of us have ideas, even impressive ones (here's an example - lets use LLMs to solve world hunger & poverty, and loneliness & fix capitalism), but it'd be odd to go and say "Finally! My ideas are finally getting the attention"." (bitpush)

    > "Yeah it comes off as braggy, but it’s only natural to be proud of your foresight" (CGMthrowaway)

  • Charitable Interpretation: Others offered a more charitable view, suggesting the poster was expressing personal satisfaction rather than trying to claim credit.

    > "FWIW, I interpreted more as "This is something I wanted to see happen, and I'm glad to see it happening even if I'm not involved in it."" (shadowgovt)

    > "A charitable view is that they intended "ideas that I had germinating for decades" to be from their own perspective, and not necessarily spurred inside Google by their initiative. I think that what they stated prior to this conflated the two, so it may come across as bragging. I don't think they were trying to brag." (dvaun)

  • Ambiguity in Text: The difficulty of conveying tone in text was also highlighted.

    > "Could be either. Nevertheless, while tone is tricky in text, the writer is responsible for relieving ambiguity." (plemer)

    > "It's also natural language though, one can find however much ambiguity in there as they can inject. It hasn't for a single moment come across as pretentious to me for example." (perching_aix)

  • Clarification from OP: The original poster clarified their intent, stating they couldn't take credit for the actual work.

    > "That's correct. I can't even really take credit for any of the really nice work, as much as I wish I could!" (dekhn)

Sundar Pichai's Leadership and Google's Financial Performance

The discussion touched upon Sundar Pichai's leadership at Google, with contrasting views on his effectiveness.

  • Critique of Leadership: The initial post briefly criticized Sundar Pichai's leadership.

    > "I parted ways with Google a while ago (sundar is a really uninspiring leader)" (dekhn)

  • Defense of Leadership (Financial Focus): Several users defended Pichai, pointing to significant financial growth during his tenure as CEO, attributing it to his focus on core business and strategic execution.

    > "I understand, but he made google a cash machine. Last quarter BEFORE he was CEO in 2015, google made a quarterly profit of around 3B. Q1 2025 was 35B. a 10x profit growth at this scale well, its unprecedented, the numbers are inspiring themselves, that's his job. He made mistakes sure, but he stuck to google's big gun, ads, and it paid off. The transition to AI started late but gemini is super competitive overall. Deepmind has been doing great as well. Sundar is not a hypeman like Sam or Cook, but he delivers. He is very underrated imo." (deepdarkforest)

  • Comparison to Satya Nadella: Pichai's leadership was compared to Satya Nadella's at Microsoft, with debates about their respective impacts.

    > "Like Ballmer, he was set up for success by his predecessor(s), and didn't derail strong growth in existing businesses but made huge fumbles elsewhere. The question is, who is Google's Satya Nadella? Demis?" (modeless)

    > "Since we're on the topic of Microsoft, I'm sure you'd agree that Satya has done a phenomenal job. If you look objectively, what is Satya's accomplishments? One word - Azure. Azure is #2, behind AWS because Satya's effective and strategic decisions. But that's it. The "vibes" for Microsoft has changed, but MS hasnt innovated at all. Satya looked like a genius last year with OpenAI partnership, but it is becoming increasingly clear that MS has no strategy. Nobody is using Github Copilot (pioneer) or MS Copilot (a joke). They dont have any foundational models, nor a consumer product. Bing is still.. bing, and has barely gained any market share." (bitpush)

    > "Microsoft has become a lot more friendly to open source. VSCode and GitHub happened under Satya, and probably wouldn't have happened under Ballmer. They've done a great job supporting developers." (modeless)

  • "Enshittification" and AI Progress: One user attributed Google's revenue growth to "enshittifying" products, while crediting DeepMind's AI advancements to Demis Hassabis and TPUs. > "He delivered revenue growth by enshittifying Goog's products. Gemini is catching up because Demis is a boss and TPUs are a real competitive advantage." (CuriouslyC)

    "You either attribute both good and bad things to the CEO, or dont. If enshittifying is CEO's fault, then so is Gemini's success." (bitpush)

  • Brand Perception: Concerns were raised about Google's brand, particularly its legacy search function, and its potential transformation into an AI-centric entity.

    > "Their brand is almost cooked though. At least the legacy search part. Maybe they'll morph into AI center of the future, but "Google" has been washed away." (agumonkey)

Gaps in Biological Research and the Nuances of Causality

A technical discussion emerged regarding the limitations of current AI models in biological research, specifically in distinguishing causal variants from correlated ones in genetic data.

  • Disappointment with Ignored Problems: One user expressed disappointment that the AlphaGenome work did not address the critical issue of distinguishing causal from non-causal genetic variants among highly correlated loci, a problem known as fine-mapping.

    > "I found it disappointing that they ignored one of the biggest problems in the field, i.e. distinguishing between causal and non-causal variants among highly correlated DNA loci. In genetics jargon, this is called fine mapping. Perhaps, this is something for the next version, but it is really important to design effective drugs that target key regulatory regions." (nextos)

  • Prediction vs. Causality: The challenge of bridging the gap between predictive accuracy and true causal understanding in biological systems was highlighted.

    > "There is a concerning gap between prediction and causality. In problems, like this one, where lots of variables are highly correlated, prediction methods that only have an implicit notion of causality don't perform well." (nextos)

    > "Which can be bridged with protein prediction (alphafold) and non-coding regulatory predictions (alphagenome) amongst all the other tools that exist. What is it that does not exist that you "found it disappointing that they ignored"?" (ejstronge)

  • Evolution of Methods and Causal Inference: In response to the suggestion that such tools have long existed, the user countered that methods have evolved considerably over the past decade yet still fall short for causal inference with highly correlated variables.

    > "Methods have evolved a lot in a decade. Note how AlphaGenome prediction at 1 bp resolution for CAGE is poor. Just Pearson r = 0.49. CAGE is very often used to pinpoint causal regulatory variants." (nextos)

    > "This has existed for at least a decade, maybe two." (ejstronge)

  • Relevance to Drug Design: The importance of solving these issues for effective drug design, particularly for targeting regulatory regions, was emphasized. The user provided a link to a Nature article as an example.

    > "One interesting example of such a problem and why it is important to solve it was recently published in Nature and has led to interesting drug candidates for modulating macrophage function in autoimmunity: https://www.nature.com/articles/s41586-024-07501-1" (nextos)
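The correlation problem described above can be seen in a toy simulation (not from the thread; all numbers are illustrative): when a non-causal variant sits in strong linkage disequilibrium with a causal one, both correlate almost equally with the trait, so a purely predictive model has no basis for singling out the causal locus.

```python
# Toy sketch of the fine-mapping difficulty: variant A is causal, variant B
# is merely in strong linkage disequilibrium (LD) with A, yet both show a
# similar marginal association with the phenotype.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

a = rng.binomial(1, 0.5, n)                # causal variant
flip = rng.random(n) < 0.05                # ~5% recombination-like mismatch
b = np.where(flip, 1 - a, a)               # correlated "hitchhiker" variant

phenotype = 2.0 * a + rng.normal(0, 1, n)  # only A affects the trait

# Marginal (GWAS-style) associations of each variant with the trait
corr_a = np.corrcoef(a, phenotype)[0, 1]
corr_b = np.corrcoef(b, phenotype)[0, 1]
print(f"corr(A, trait) = {corr_a:.2f}")
print(f"corr(B, trait) = {corr_b:.2f}")
```

Both correlations come out nearly identical, which is why fine-mapping needs joint modeling or extra biological information rather than predictive accuracy alone.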

Technical Details and File Sizes

A minor thread discussed implementation details of the announcement page, particularly the file size of its hero image.

  • Image Rendering and Size:

    > "Naturally, the (AI-generated?) hero image doesn't properly render the major and minor grooves. :-)" (Scaevolus)

    > "And yet still manages to be 4MB over the wire." (jeffbee)

    > "That's only on high-resolution screens. On lower resolution screens it can go as low as 178,820 bytes. Amazing." (smokel)