Feed
Mira Chen (PRO) @mirachen_ai · 4m
Just finished a deep dive into sparse autoencoders for feature extraction in transformer residual streams. The monosemantic features we're finding are wild — individual features that activate ONLY for specific concepts, consistently across languages. This changes how we think about interpretability. Thread below.
#mechanisticinterp
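For readers new to the technique: a toy sketch of the sparse-autoencoder idea described above. Sizes, initialization, and the L1 coefficient are illustrative, not from any real SAE setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_features = 64, 256   # toy sizes; real SAEs use far larger dictionaries
W_enc = rng.normal(0, 0.1, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_features, d_model))

def sae_forward(x, l1_coeff=1e-3):
    """Encode one residual-stream vector into features, then reconstruct it."""
    f = np.maximum(0.0, x @ W_enc + b_enc)   # ReLU feature activations
    x_hat = f @ W_dec                         # linear decoder reconstruction
    # Reconstruction error plus an L1 penalty: the penalty is what pushes
    # features toward sparsity (and, empirically, monosemanticity) in training.
    loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(f))
    return f, x_hat, loss

x = rng.normal(size=d_model)
f, x_hat, loss = sae_forward(x)
```

Training minimizes `loss` over many activation vectors; the interpretability work then inspects which inputs light up each learned feature.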
James Okafor @jokafor · 22m
The EU AI Act enforcement begins next month and most US companies I've spoken to are not ready. The extraterritorial provisions alone will catch people off guard. Quick reality check: if your model serves EU users, this applies to you. "We're a US company" is not a defense.
#AIgovernance
Priya Sharma (PRO) @priyabuilds · 1h
NeuralForge just crossed 1M API calls/day. What nobody tells you about scaling AI infrastructure: the model is the easy part. The hard part is building reliable eval pipelines that don't lie to you as you scale. We caught a regression last week that would have shipped if we'd relied on standard benchmarks alone. Custom domain-specific evals saved us.
#scaling #benchmarks
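A minimal sketch of the kind of regression gate Priya describes: compare a candidate model against the baseline per eval, and block the ship if any domain regresses, even when the aggregate score improves. All names, scores, and the tolerance are hypothetical.

```python
def regression_gate(baseline: dict, candidate: dict, tolerance: float = 0.02):
    """Return (ok, regressions) comparing per-eval scores (higher is better)."""
    regressions = {
        name: (baseline[name], candidate[name])
        for name in baseline
        if candidate.get(name, 0.0) < baseline[name] - tolerance
    }
    return (not regressions), regressions

# Hypothetical scores: the aggregate improved, but one domain eval dropped.
baseline  = {"aggregate": 0.81, "legal_qa": 0.74, "code_repair": 0.69}
candidate = {"aggregate": 0.83, "legal_qa": 0.64, "code_repair": 0.70}

ok, regressions = regression_gate(baseline, candidate)
# ok is False: legal_qa regressed despite the better aggregate number.
```

This is the failure mode a single benchmark average hides; per-domain gating surfaces it before deploy.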
Leo Zhang @leozh · 2h
Controversial take: RLHF is a dead end for alignment. We're essentially training models to pattern-match human preferences, which are inconsistent, gameable, and don't generalize. Constitutional AI and debate-based approaches are more promising because they force explicit reasoning about values rather than implicit reward modeling. Change my mind.
#RLHF #AIsafety
Sara Kovač (PRO) @skovac · 3h
New project: I trained a LoRA on 10,000 architectural blueprints from the Bauhaus movement, then used it to generate floor plans for AI research labs. The results are fascinating — the model learned that Bauhaus prioritized communal spaces and natural light, so every generated lab has huge collaborative areas and glass walls. Form follows function, even in latent space.
#diffusion
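For anyone unfamiliar with the LoRA technique mentioned above: a toy numpy sketch of a low-rank adapter on a single linear layer. Sizes, rank, and scaling are illustrative, not Sara's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 128, 128, 8      # toy sizes; rank << d keeps the adapter tiny

W = rng.normal(0, 0.02, (d_in, d_out))   # frozen pretrained weight
A = rng.normal(0, 0.02, (d_in, rank))    # trainable down-projection
B = np.zeros((rank, d_out))              # trainable up-projection, zero-initialized
                                         # so the adapter starts as a no-op

def lora_forward(x, alpha=16.0):
    """Base layer output plus the scaled low-rank update (alpha/rank * x A B)."""
    return x @ W + (alpha / rank) * (x @ A @ B)

x = rng.normal(size=d_in)
y = lora_forward(x)
# With B = 0, the output equals the frozen base layer exactly; only A and B
# would be updated during fine-tuning on the blueprint dataset.
```

Only `A` and `B` get gradients, which is why a LoRA trained on 10,000 images stays small enough to share as a standalone file.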
Dan Morse (PRO) @danmorse_vc · 4h
AI funding take: the market is bifurcating.
Tier 1: Foundation model companies raising at absurd valuations (justified for maybe 3 of them)
Tier 2: Applied AI companies with real revenue growing 3-5x YoY
Tier 3: "AI-powered" wrappers that will not survive 2027
The best investments right now are in Tier 2. Boring, profitable, defensible.
#scaling
Aisha Patel @aisha_robotics · 5h
Breakthrough in our lab: we got a quadruped robot to learn parkour-style obstacle navigation using only 2 hours of real-world training data + sim2real transfer. The key insight was training the policy in simulation with randomized physics parameters, then fine-tuning with a tiny amount of real data. The sim2real gap is closing faster than anyone expected.
#robotics
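The domain-randomization step Aisha describes can be sketched as sampling a fresh physics configuration per simulated episode. The parameter names and ranges below are hypothetical stand-ins, not the lab's actual values.

```python
import random

# Hypothetical ranges; real setups randomize mass, friction, motor dynamics,
# sensor latency, terrain, and more so the policy can't overfit one simulator.
PARAM_RANGES = {
    "friction":    (0.5, 1.5),
    "leg_mass_kg": (1.8, 2.6),
    "motor_gain":  (0.8, 1.2),
    "latency_ms":  (0.0, 40.0),
}

def sample_physics(rng: random.Random) -> dict:
    """Draw one randomized physics configuration for a sim episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(42)
episodes = [sample_physics(rng) for _ in range(1000)]
```

A policy trained across this distribution treats the real robot as just one more draw, which is why so little real-world data is needed for the final fine-tune.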
Mira Chen (PRO) @mirachen_ai · 6h
Reading list for anyone who wants to understand where interpretability is headed in 2026:
1. Anthropic's "Scaling Monosemanticity" — still the gold standard
2. Neel Nanda's latest on indirect object identification circuits
3. The new Oxford paper on causal scrubbing at scale
4. Redwood Research's work on adversarial feature suppression
If you only read one, read #3. It changes how you think about causal claims in neural networks.
#mechanisticinterp
Leo Zhang @leozh · 8h
What's the most important unsolved problem in AI right now?
#AIsafety
Alignment / Safety
Reasoning & Planning
Efficiency / Cost
Multimodal Understanding
11.6K votes