Physics-Informed Dynamic Mode Decomposition (PI-DMD) - YouTube
Minimizing the Expected Posterior Entropy Yields Optimal Summary Statistics - YouTube
[2302.03314] Federated Variational Inference Methods for Structured Latent Variable Models
Likelihood-free inference with deep Gaussian processes - ScienceDirect
Alexander Aushev: Sample-efficient inference for simulators: complex noise models and time-series - YouTube
GPT Everywhere: The Ultimate AI Assistant App for Seamless Integration with Your Files, Keyboard Shortcuts, Whisper AI, and Custom LLaMAs
josStorer/chatGPTBox: Integrating ChatGPT into your browser deeply, everything you need is here
Gaussian Splatting is pretty cool! · Aras' website
Ensemble Kalman filter based sequential Monte Carlo sampler for sequential Bayesian inference | Statistics and Computing
Publications - Oxford AI4Science Lab
[2207.03084] Pre-training helps Bayesian optimization too
[1811.09558] Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior
Self-Repellent Random Walks on General Graphs - Achieving Minimal Sampling Variance via Nonlinear Markov Chains | OpenReview
[2302.02947] GPS++: Reviving the Art of Message Passing for Molecular Property Prediction
[2206.10591] Can Foundation Models Talk Causality?
[1909.02736] A review of Approximate Bayesian Computation methods via density estimation: inference for simulator-models
Dignity of risk
Large Language Models as General Pattern Machines
We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences—from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to richer spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art.
Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary.
These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning.
In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics—from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole).
While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
Why We Can’t Have Nice Things – Ben Landau-Taylor
FP2: Fully In-Place Functional Programming provides memory reuse for pure functional programs. “Welcome to Koka – a strongly typed functional-style language with effect types and handlers.”
The Koka Programming Language
Taylor Lorenz, Julia Allison Was the First Online Influencer and Was Vilified for It
Large language models, explained with a minimum of math and jargon
GLP-1 agonists: Diabetes drugs and weight loss - Mayo Clinic
More Jean Ignace Isidore Gérard Grandville! I am obsessed!
LLM now provides tools for working with embeddings
CodaLab Competitions: An Open Source Platform to Organize Scientific Challenges
The Comfort Trap: Why the Pursuit of an Easier Life Creates a Harder One (And What to Do Instead). – Mayo Oshin
Sub-pixel Distance Transform — Acko.net
15-850: CMU Advanced Algorithms, Fall 2020
Transportation Executive Summary — RethinkX (I have questions about how they imagine they can handle rush hour)
Science Links and Thoughts (August '23)
Home · electro-smith/DaisyWiki Wiki
Waterworks of Money
Pushover: Simple Notifications for Android, iPhone, iPad, and Desktop
Arne Hallam’s Home Page includes some excellent lectures and tutorials on statistics
[2306.15924] The curse of dimensionality in operator learning
Half-Truths (at Best) about Calculus of Variations and Optimal Control
ML Blog - Improve ChatGPT with Knowledge Graphs
Notes on it’s so over/we’re so back - by Max Read
Outlook Integration - Sunsama
Cat and Girl, You monetized your social contacts? Monster
Louis Tiao, Spherical Inducing Features for Orthogonally-Decoupled Gaussian Processes
“VC qanon” and the radicalization of the tech tycoons - Anil Dash
Artificial intelligence and the end of the human era - New Statesman
Scope of Work, Notes, 2023-07-10 , on innovation
In his 1975 memoir The Periodic Table, Primo Levi recounts a brief anecdote about a mysterious slice of onion in a recipe for oil varnish. Levi, an Italian-Jewish writer, chemist, Holocaust survivor, and anti-Fascist, was working in a paint factory after the war. He came across a varnish formula, published in 1942, that included two slices of onion added to the boiling linseed oil near the end of the process. Why onion? After talking to a mentor, he learned that in the days before thermometers were common, slices of raw onion were used to gauge the temperature of the oil. The onion remained in the recipe long after its usefulness had ended, and “what had been a crude measuring operation had lost its significance and was transformed into a mysterious and magical practice”.
“The onion in the varnish” has since become a popular metaphor among computer scientists, entrepreneurs, and rationalist types – a shorthand for the importance of eliminating inessential elements that creep into a process. However, that’s not exactly what Levi describes, and it isn’t the moral of his story. His onion anecdote is told in the context of a longer conversation about the ways an ancient process like varnish manufacturing “retains in its crannies … rudiments of customs and procedures abandoned for a long time now”. This accretion isn’t a liability or a problem per se, but rather an inevitability as processes and cultures evolve over a long duration.
A relatively small amount of force applied at just the right place: Y-Combinator back stories
Mobility — Lydia Kiesling
Office for the Preservation of Normalcy - I feel confident enough to post these now. A...
The feeling of something waiting there for you
Bolo ties — Betty Musgrove
Why haven’t internet creators become superstars?
Smart Countdown Timer is a good time.
The internet is for 12-year-olds - by Max Read
Calculating Sunflower Oil Production (ChatGPT psychosis)
The Dissemination Game: Incentives of In-Person vs Virtual Participation
Der Klang der Familie
The Remarkable Decline of Homophobia – Probably Overthinking It
Bellingcat’s Online Investigation Toolkit [bit.ly/bcattools] - Google Sheets
‘The last good website’ - Columbia Journalism Review
Émile P Torres, Longtermism poses a real threat to humanity has a utilitarianism-is-weird-plus-looks-culty critique of longtermism:
When I was a longtermist, I didn’t think much about the potential dangers of this ideology. However, the more I studied utopian movements that became violent, the more I was struck by two ingredients at the heart of such movements. The first was – of course – a utopian vision of the future, which believers see as containing infinite, or at least astronomical, amounts of value.
The second was a broadly “utilitarian” mode of moral reasoning, which is to say the kind of means-ends reasoning above. The ends can sometimes justify the means, especially when the ends are a magical world full of immortal beings awash in “surpassing bliss and delight”, to quote Bostrom’s 2020 “Letter from Utopia”.
LaTeXML, a LaTeX to XML/HTML/MathML Converter
BookML: automated LaTeX to bookdown-style HTML and SCORM, powered by LaTeXML
Matt Bruenig, Equality and Equity
“Equity” is not used to promote any particular unit of equality — whether outcomes, opportunities, boxes, sightlines, luck-adjusted outcomes, primary goods, income, wealth, or capabilities — but is instead a word that you invoke any time you object to the unit of equality someone else is using, regardless of what, if any, your preferred alternative unit of equality is.
Tang ping/ Lying flat
Waterworks of Money
PRIV-Creation/Awesome-Diffusion-Personalization: A collection of resources on personalization with diffusion models.
2023-We See The Sacred From Afar, To See It The Same
Agenda - Notes meets Calendar
Agenda’s unique approach of organizing notes into a timeline helps to drive your projects forward.
While other apps focus specifically on the past, present, or future, Agenda is the only note taking app that tracks them all at once, giving you the complete picture.
ELI5: FlashAttention. Step by step explanation of how one of…
firewire on modern macs
neverdonebefore.org – The Facilitators' R&D Department
[2306.12672] From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
What’s causing Australia’s nightmare rental market? - Podcast
Revolut trades crypto, stocks, and foreign currencies, and offers a debit card and disposable credit cards for privacy preservation.
[1906.04358] Weight Agnostic Neural Networks
Artificial Communication: How Algorithms Produce Social Intelligence
The first AI model based on Yann LeCun’s vision for more human-like AI / [2301.08243] Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
Love, Actually: The science behind lust, attraction, and companionship
Stanford’s War on Social Life
Why Civilization Is Older Than We Thought
Cadence Culture – Magazine for international underground electronic music culture
Contra Marc Andreessen on AI - by Dwarkesh Patel
Bruce Schneier, On the Need for an AI Public Option
BIMLOGIQ is where Amir Dezfouli went to work.
Physics & Equality Constrained Artificial Neural Networks | A physics-informed, noise-aware and equality constrained artificial neural network framework for the solution of forward/inverse problems with multi-fidelity data
Wee-Sun Lee on GNNs
How GNNs and Symmetries can help to solve PDEs - Max Welling
Thomas Minka, From automatic differentiation to message passing/ Slides
SPIGM @ ICML
MLRS 2023 - Schedule
DLAI - Learning Platform Beta
XPRIZE Wildfire | XPRIZE Foundation
Gretton, lecture4_introToRKHS.pdf
State of GPT | BRK216HFS
AI Alignment Curriculum — AGI Safety Fundamentals
An Insider’s Guide to “Anti-Disinformation”
Ecosystem Graphs for Foundation Models
Dignity of risk - Wikipedia
Flyte: An Open Source Orchestrator for ML/AI Workflows - The New Stack
Build production-grade data and ML workflows, hassle-free with Flyte
Cloud-Native Geospatial Foundation
The Cloud-Native Geospatial Foundation is a forthcoming initiative from Radiant Earth created to increase adoption of highly efficient approaches to working with geospatial data in public cloud environments.
Radar Trends to Watch: May 2023 – O’Reilly
Stochastic Differential Equations
Mesa-Optimization - AI Alignment Forum
fast.ai - Mojo may be the biggest programming language advance in decades
Nostr, a simple protocol for decentralizing social media that has a chance of working
Bluesky Social/ Bluesky
Lilian Weng’s updated The Transformer Family Version 2.0
How does in-context learning work? A framework for understanding the differences from traditional supervised learning | SAIL Blog
Sam Kriss, in All the nerds are dead, conflates geeks and nerds, but is funny anyway
The General Theory of Employment, Interest and Money by John Maynard Keynes
The Practical Guides for Large Language Models
The reasonable(?) effectiveness of data analysis
Why is it that we can be thrown into the work of other people, in a field we have zero experience in, and have any expectation of making any useful impact at all? When stated objectively, it sounds utterly ridiculous. But in my experience, a data team can find something to make an improvement on, even if the impact can sometimes be small.
Tackling Collaboration Challenges in the Development of ML-Enabled Systems
“I highlight the findings of a study on which I teamed up with colleagues Nadia Nahar (who led this work as part of her PhD studies at Carnegie Mellon University), Christian Kästner (also from Carnegie Mellon University), and Shurui Zhou (of the University of Toronto). The study sought to identify collaboration challenges common to the development of ML-enabled systems.
Through interviews conducted with numerous individuals engaged in the development of ML-enabled systems, we sought to answer our primary research question: What are the collaboration points and corresponding challenges between data scientists and engineers? We also examined the effect of various development environments on these projects.
Based on this analysis, we developed preliminary recommendations for addressing the collaboration challenges reported by our interviewees.”
Software²: A new generation of AIs that become increasingly general by producing their own training data
Probability Is Not A Substitute For Reasoning – Ben Landau-Taylor
Self-Healing Concrete: What Ancient Roman Concrete Can Teach Us
Differentiating the discrete: Automatic Differentiation meets Integer Optimization | μβ
Information Transfer Economics: Organization of information equilibrium concepts
Underrated ideas in psychology - The Seeds of Science
How to train your own ChatGPT Alpaca style, part one
Serge Zaitsev, World’s smallest office suite
Annie Lowrey, We Haven’t Been Measuring How the Economy Really Works
Why Your Polyamorous Friend’s Relationship Sucks—by Aella
Alternative to the tedious openhub workflow: analyzemyrepo.com | about
TIL Apophenia vs Pareidolia
How to stylize images using Stable Diffusion AI
Matthew Feeney, Markets in fact-checking
Étienne Fortier-Dubois, The elements of scientific style
Jason Collins, We don’t have a hundred biases, we have the wrong model
If there are already smarter people around, how can I find good ideas?
Fear, Rage, and Anguish on America’s Happiest Campus
The Giving Tree Alternate Ending
The ‘Enshittification’ of TikTok
CSCI 601.771: Self-supervised Models
Colossal-AI is designed to be a unified system that provides an integrated set of training skills and utilities to the user. You can find common training utilities such as mixed precision training and gradient accumulation. Besides these, we provide an array of parallelism strategies including data, tensor and pipeline parallelism. We optimize tensor parallelism with different multi-dimensional distributed matrix-matrix multiplication algorithms. We also provide different pipeline parallelism methods to allow the user to scale their model across nodes efficiently. More advanced features such as offloading are covered in detail in the tutorial documentation.
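Gradient accumulation, one of the utilities mentioned, is easy to sketch in plain Python (a generic illustration on a toy one-parameter least-squares problem, not Colossal-AI's actual API): sum gradients over several micro-batches, then apply one optimizer step, emulating a larger effective batch size in less memory.

```python
# Gradient accumulation, sketched generically (not Colossal-AI's API):
# accumulate gradients over several micro-batches, then apply a single
# optimizer step, emulating a larger effective batch size.

def grad(w, batch):
    """Gradient of mean squared error 0.5*(w*x - y)^2 over a micro-batch."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def train(w, micro_batches, lr=0.01, accum_steps=4):
    acc, steps = 0.0, 0
    for batch in micro_batches:
        acc += grad(w, batch)        # accumulate; no parameter update yet
        steps += 1
        if steps == accum_steps:     # one update per accum_steps micro-batches
            w -= lr * (acc / accum_steps)
            acc, steps = 0.0, 0
    return w

# Fit y = 2x from 8 micro-batches of two points each, repeated 50 times.
data = [[(x, 2.0 * x), (x + 1, 2.0 * (x + 1))] for x in range(8)]
w = train(0.0, data * 50)
print(round(w, 2))  # converges to 2.0
```

The same accumulate-then-step pattern underlies the real implementations; frameworks mainly add bookkeeping for distributed synchronization and mixed precision.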
Neural Transducer Training: Reduced Memory Consumption with Sample-wise Computation
Geometry in Text-to-Image Diffusion Models
Stable Diffusion with Core ML on Apple Silicon
Building Resilient Organizations: Toward Joy and Durable Power in a Time of Crisis
Talking to a Person
Is Anything Worth Maximizing? How metrics shape markets, how we’re… | by Joe Edelman
Values-Based Social Design
Something something kernels, something regression something interaction effects.
Journey Mapping 101
Facebook, Google Give Police Data to Prosecute Abortion Seekers
Rest of World - Reporting Global Tech Stories
Cleanlab: “We publish research, develop open source tools, and design interfaces to help you improve the quality of your datasets and diagnose various issues in them.”
See their blog e.g. ActiveLab: Active Learning with Data Re-Labeling
How does in-context learning work? A framework for understanding the differences from traditional supervised learning
TL; DR—In-context learning is a mysterious emergent behavior in large language models (LMs) where the LM performs a task just by conditioning on input-output examples, without optimizing any parameters.
In this post, we provide a Bayesian inference framework for understanding in-context learning as “locating” latent concepts the LM has acquired from pretraining data.
This suggests that all components of the prompt (inputs, outputs, formatting, and the input-output mapping) can provide information for inferring the latent concept.
We connect this framework to empirical evidence where in-context learning still works when provided training examples with random outputs.
While output randomization cripples traditional supervised learning algorithms, it only removes one source of information for Bayesian inference (the input-output mapping).
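A one-line sketch of that framework (my notation, not necessarily the post's): the LM's predictions marginalize over a latent concept $\theta$ inferred from the prompt, so randomizing the outputs degrades the posterior over $\theta$ without destroying the other sources of evidence (inputs, formatting):

```latex
p(\text{output} \mid \text{prompt})
  = \int p(\text{output} \mid \theta, \text{prompt})\,
         p(\theta \mid \text{prompt})\, d\theta
```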
Bayesian Neural Networks by Duvenaud’s team
Rohit, People always put their money in futures they predict
What have we seen so far? People didn’t use to have much disposable income to invest a century ago. Those who did invested their savings mostly in land, in businesses (if they were rich enough), or in commodities.
Where should I invest my money is a relatively old question, but until recently it wasn’t a very interesting one. This is because until recently the answers were understood, but not that actionable. The future would get better, things would get built, and you could ride optimism as a thesis if you could find a way to do so. The avenues available were extremely limited, and the optionality you had was minimal.
What Are the Different Approaches for Detecting Content Generated by LLMs Such As ChatGPT? And How Do They Work and Differ?
Don’t guess what’s true: choose what’s optimal. A probability transducer for machine-learning classifiers
How to wrap up research projects gracefully
What’s the difference between a tutorial and how-to guide? - Diátaxis
Career Sponsorship Is a Two-Way Street
Making Friends with Machine Learning
Bing: “I will not harm you unless you harm me first”
From Bing to Sydney
Instagram, TikTok, and the Three Trends
the company correctly intuited a significant gap between its users’ stated preference — no News Feed — and their revealed preference, which was that they liked News Feed quite a bit. The next fifteen years would prove the company right.
Kedro | A Python framework for creating data science code / Kedro Frequently Asked Questions / Kedro rationale by Joel Schwarzmann: The importance of layered thinking in data engineering
Prof Steve Keen | Creating realistic economics for the post-crash world
Welcome to LangChain — 🦜🔗
Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you are able to combine them with other sources of computation or knowledge.
This library is aimed at assisting in the development of those types of applications.
How did places like Bell Labs know how to ask the right questions?
Color Oracle simulates color blindness for accessibility of visualisations and plots etc
darrenjw/fp-ssc-course: An introduction to functional programming for scalable statistical computing
White Collar Crime Risk Zones
Do organizations have to get slower as they grow? (with Alex Komoroske)
Clara in Blunderland. With forty illus. by S.R : Lewis, Caroline, pseud : Free Download, Borrow, and Streaming : Internet Archive
Kolibri User Guide
Kolibri is an open-source educational platform specially designed to provide offline access to a wide range of quality, openly licensed educational resources in low-resource contexts like rural schools, refugee camps, orphanages, and also in non-formal school programs.
Team Silverblue — About packaged apps for Fedora
The Adaptable Linux Platform Guide. Packaged apps for SUSE.
Trick Facial Recognition Software into Thinking You’re a Zebra or Giraffe with These Psychedelic Garments
Taylor expansion with integral remainder
The Carr–Madan formula is really just a special case of a Taylor expansion. For completeness, let’s rederive the Taylor expansion with an integral remainder.
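For reference, the expansion being rederived (a standard result, stated here assuming $f$ is $(n+1)$-times continuously differentiable on $[a,b]$):

```latex
f(b) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(b-a)^k
     + \int_a^b \frac{f^{(n+1)}(t)}{n!}\,(b-t)^n \, dt
```

The $n=0$ case is just the fundamental theorem of calculus, $f(b) = f(a) + \int_a^b f'(t)\,dt$; integrating the remainder by parts repeatedly generates the higher-order terms.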
When explaining becomes a sin—by Tom Stafford file under taboos and Tetlock and compassion/comprehension
Jordan Peterson’s zombie climate ideas
An Uncontroversial Guide to Being Controversial
Markets Are Kinda Fake—by Judah—Be Wrong
How TikTok changed the world in 2020 - BBC Culture
Cult Classic ’Fight Club’ Gets a Very Different Ending in China
A Turkish Farmer Tests Out VR Goggles on Cows To Get More Milk
How to buy a social network, with Tumblr CEO Matt Mullenweg
Pluralistic: Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow
Stop Talking to Each Other and Start Buying Things: Three Decades of Survival in the Desert of Social Media
Adjacent Possible—Steven Johnson
Pluralistic: EU to Facebook, ’Drop Dead’ (07 Dec 2022) – Pluralistic: Daily links from Cory Doctorow
In Which Long-Time Netizen & Programmer-at-Arms Dave Winer Records a Podcast for Me, Personally
Þe Forlorn Hope Þt Was Vox.com, & BRIEFLY NOTED
Against Cop Shit—Jeffrey Moro
Supervised Training of Conditional Monge Maps—Apple Machine Learning Research
How To Be an Academic Hyper-Producer—Economics from the Top Down
Password Generator—Strong & Random Password Generator
A global analysis of matches and mismatches between human genetic and linguistic histories—PNAS
Building A Virtual Machine inside ChatGPT
Desmos—Let’s learn together. graphing calculator online
Leaving Your Mark on The World—a free interactive tool to help you do more good! | ClearerThinking.org
LIFTOFF: Couch to Barbell
The Cause of Depression Is Probably Not What You Think
What Monks Can Teach Us About Paying Attention
Archive—The Common Reader
Publish your site in 5 minutes, no code required. Host on your own domain. Write once, share everywhere.
Actually, Japan has changed a lot—by Noah Smith — Japanese real estate is surprising
Make Work Better—Bruce Daisley
One Useful Thing (And Also Some Other Things) | Ethan Mollick
How to… use ChatGPT to boost your writing
How America Lost the Atomic Age
Democracy: forking the project—Nicholas Gruen
Casey Johnston on Turning Weight-lifting Into a Business
The radical idea that people aren’t stupid paired with How to achieve self-control without “self-control”
Colonialism did not cause the Indian famines
These works suggest a better theory of why the famines happened. The capacity of the states and the markets to provide food and water to the needy was small against the scale of the natural disasters. All large natural disasters reveal such a syndrome. They show that the capacity of the people in charge of relief can be constrained by poor information, distorted information, limited money, limited knowledge of causation, and conflict among stakeholders.
I don’t see why we cannot have both as causes, though.
Orwell Was Right
Erik van Zwet, Shrinkage Trilogy Explainer on modelling the publication process
Mathematics of the impossible: Computational Complexity—Thoughts
Inclusive Scientific Meetings — 500 Women Scientists
Download the Atkinson Hyperlegible Font—Braille Institute
What makes it different from traditional typography design is that it focuses on letterform distinction to increase character recognition, ultimately improving readability. We are making it free for anyone to use!
Low-Rank Approximation Toolbox: Nyström Approximation—Ethan Epperly
-ise or -ize? Is -ize American? (1/3) – Jeremy Butterfield Editorial
Introducing Massively Open Online Papers (MOOPs) | KULA: Knowledge Creation, Dissemination, and Preservation Studies
The Australian academic STEMM workplace post-COVID: a picture of disarray
Why Everything at Walgreens Is Suddenly Behind Plastic
Merve Emre, Has Academia Ruined Literary Criticism?
Matt Clancy, Age and the Nature of Innovation “Are there some kinds of discoveries that are easier to make when young, and some that are easier to make when older”?
Tom Stafford, Microarguments and macrodecisions
Kevin Munger, Why I am (Still) a Conservative (For Now)
Kevin Munger, Facebook is Other People
Randy Au, in Data science has a tool obsession talks about Gear Acquisition Syndrome for data scientists.
Clive Thompson, The Power of Indulging Your Weird, Offbeat Obsessions
Niche Museums: Find tiny museums near you
You Don’t Know How Bad the Pizza Box Is
Would a bust be all bad for San Francisco?
omg.lol - A lovable web page and email address, just for you
A Writer Used AI To Plagiarize Me. Now What?
karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs.
What is the “forward-forward” algorithm, Geoffrey Hinton’s new AI technique?
Fission: Build the future of web apps at the edge incubates several decentralized protocols
A debugging manifesto
RIP Passwords? Passkey support rolls out to Chrome stable
danah boyd, What if failure is the plan?. I’ve been thinking a lot about failure…
Lina Yao’s Homepage
The Architecture of Open Source Applications
Why Not Mars (Idle Words)
Michael Nielsen on science online
Great bloggers are rare, weird, and not team players – Kevin Drum
Swayable: RCTs for marketing campaigns via ingenious audience recruiting network
How Dobbs Triggered a ‘Vasectomy Revolution’
Zoomers Co-Working Community (co-working for accountability)
Lattice Boltzmann methods
Normconf Lightning Talks/Normconf: The Normcore Tech Conference — a conference on the stuff that we actually need to do in ML, as opp. the stuff we would like to pretend is what we do.
Jean Gallier and Jocelyn Quaintance, Algebra, Topology, Differential Calculus, and Optimization Theory for Computer Science and Machine Learning, 2188 pages as of 2022/10/30, and growing.
MINDMELT - Music Visualizer
Terence Eden, You can have user accounts without needing to manage user accounts
Adam Mastroianni and Ethan Ludwin-Peery, Things could be better
Adam Mastroianni, The great myths of political hatred
AI & Music Deep Dive Pt 2: Is It Game Over For Musicians?
Big correlations and big interactions ([2105.13445] The piranha problem: Large effects swimming in a small pond)
Blower Door Test—What’s the Cost & Who Does Them?
causalscience.org aims to bring academia and industry together to advance causal inference in practice.
Coasean Floor and Ceiling
How to keep cakes moist and cause the greatest tragedies of the 20th century
Supervised Training of Conditional Monge Maps
Focus Is Saying No To Good Ideas
LD_LIBRARY_PATH considered harmful | Georg’s Log
Forecasting commodity returns by exploiting climate model forecasts of the El Niño Southern Oscillation
George Ho, How to Improve Your Static Site’s Typography (for code formatting)
Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model
Lagrange dual, weak duality and strong duality
Quasi-Newton methods: L-BFGS
Microsoft CSR’s Law Enforcement Request Report is disconcerting transparency
Marc ten Bosch, Let’s remove Quaternions from every 3D Engine (An Interactive Introduction to Rotors from Geometric Algebra)
Michele Coscia, Meritocracy vs Topocracy
Public-facing Censorship Is Safety Theater, Causing Reputational Damage
Ti John’s Publications
Reverse engineering the NTK: towards first-principles architecture design—The Berkeley Artificial Intelligence Research Blog
GPflow/GeometricKernels: Geometric kernels on manifolds, meshes and graphs
Riemannian Score-Based Generative Modelling/ oxcsml/riemannian-score-sde: Score-based generative models for compact manifolds
SNStatComp/awesome-official-statistics-software: An awesome list of statistical software for creating and accessing official statistics
Symmetry and Physics—Not Even Wrong
Tom Pendergast’s workplace surveillance novel
Treehugger Introduces a Modern Pyramid of Energy Conservation
Unbiased MCMC with couplings
A Visit to the Idea Machine Fair
Interintellect’s Upcoming Salons
An Interintellect salon is an evening-length conversation (typically one to three hours) around a specific topic, carrying the atmosphere of a cozy, living room gathering.
Vast.ai “Rent Cloud GPU Servers for Deep Learning and AI”
Vibecamp Community Values
Adam Mastroianni, Things could be better
You Aren’t Learning If You Don’t Close the Loops
Christian Lawson-Perfect’s Interesting Esoterica is a collection of weird papers in maths.
Erik Hoel, Why do most popular science books suck?
Étienne Fortier-Dubois, The Vibes Are Off
Peter Woit, Symmetry and Physics
24: Monasteries - Monomythical
Oshan Jarow, Markets Underinvest In Vitality
Judah, A Speech I would Like To Give Undergrads
Spark the Mood You Want
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous (not convinced tbh)
The Good Research Code Handbook
Schelling fences on slippery slopes
The Developer Certificate of Origin is a great alternative to a CLA
Scaling Factors for Hidden Markov Models
roboticcam/machine-learning-notes: My continuously updated Machine Learning, Probabilistic Models and Deep Learning notes and demos (2000+ slides), with video links
The Social Capital Atlas
Computational Research Platform on Code Ocean
wesselb/varz: Painless optimisation of constrained variables in AutoGrad, TensorFlow, PyTorch, and JAX
I. Risk Management Foundations - Machine Learning for Financial Risk Management with Python [Book]
jkbren/einet: Uncertainty and causal emergence in complex networks
[2206.13637] Utility Theory for Sequential Decision Making
[2207.06544] Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes
Darren Wilkinson’s Bayesian inference for a logistic regression model 1, 2, 3, 4, 5
Book Review: Public Choice Theory And The Illusion Of Grand Strategy
Douglas Murray’s War on the West—A Review
Stephen Malina — Deriving the front-door criterion with the do-calculus
Cat and Girl on Distraction
Your Daily AI Research tl;dr
Census is a tool which links all the weird different data storage systems and CRM stuff
The Political Is Personal
Nemanja Rakicevic, NeurIPS Conference: Historical Data Analysis
Yanir Seroussi, The mission matters: Moving to climate tech as a data scientist
Keir Bradwell, #1: In-group Cheems
AAAI2022: Presidential Address: The State of AI
Samuel Moore, Why open science is primarily a labour issue
Adam Mastroianni, Against All Applications
Have The Effective Altruists And Rationalists Brainwashed Me?
Predicting Generalization using GANs
Anthony Lee Zhang, The War for Eyeballs
There is thus an interesting analogy between control rights for Twitter and other social media platforms, and the recent “Curve wars” in web3. Eyeball space in social media is like liquidity in web3: everyone values it and everyone wants to control it. Curve is thus similar to Twitter, in the sense that it controls a resource — incentivized liquidity provision — which is more valuable than the profits CRV extracts from providing the resource. As a result, many parties find it in their interest to buy control rights over CRV/CVX, and run it in a purposefully non-profit-maximizing way. In web3, large protocols amass piles of CRV/CVX governance tokens, to redirect liquidity towards their own tokens. Again, the fundamental principle behind the Curve wars is that the liquidity that Curve controls is much more valuable to some market participants, than the potential profits Curve generates using that liquidity.
My thesis is thus that Twitter and similar platforms are, in some sense, doomed to exist in perpetual governance conflicts similar to the Curve wars. Market forces will not allow Twitter and similar companies to exist as independent, reasonably objective, profit-maximizing companies. Since the eyeball-time rents that Twitter controls are vastly larger than the profits it generates from those rents, Twitter is essentially doomed to be locked in an endless governance war: parties that value eyeball time will struggle endlessly for control over Twitter, not for its ad profits, but to run it in a purposefully non-profit-maximizing way and funnel eyeball time towards the causes they value.
Field Guide to the Curve Wars: DeFi’s Fight for Liquidity
The text-to-image revolution, explained
Digital artists’ post-bubble hopes for NFTs don’t need a blockchain
Generative Flow Networks - Yoshua Bengio
Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation
I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale
ClearerThinking.org’s courses, e.g.
Bayesian Neural Nets and how to effectively train them with Stochastic Gradient Markov Chain Monte Carlo | Simen Eide
Cautionary Tales from Cryptoland
PJ Vogt, Selling Drugs to Buy Crypto
Crypto, NFTs, Web3: People Hate the Future of the Internet
TODOs from Paper Systems
To make you feel pride rather than stress or shame, the ideal features of a todo system are something like:
- Accomplishments accumulate
- Long-term scope to see the arc of your success
- Multiple levels of scope to get sense of reward at multiple scales
- Recognize that tasks and events all compete for one resource --- time
- Limit your daily tasks and get "Bonus Time"
- Clear visual families & dependencies, probably through spatial organization
Michele Coscia, Pearson Correlations for Networks
External Music Services - ListenBrainz
The DAIR Institute “The Distributed AI Research Institute is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.”
Dongkong track breakdown
Machine Learning Trick of the Day (1): Replica Trick — Shakir Mohamed
Machine Learning Trick of the Day (7): Density Ratio Trick — Shakir Mohamed
The Paradox of Choice in Computing-Research Conferences
ApplyingML - Papers, Guides, and Interviews with ML practitioners
Learning About the Long Run
Milan Cvitkovic — Things you are allowed to do
Information loss and entropy
Useful inequalities cheat sheet
[2111.00110] FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics
The local minima of suckiness · Vicki Boykis
A from-scratch tour of Bitcoin in Python
The Art of Documentation – Chelsea Troy
Ryan Broderick, We were the unpaid janitors of a bloated tech monopoly
Mark Zuckerberg Is TNR’s 2021 Scoundrel of the Year
Facebook Is an Authoritarian State
Why Facebook won’t let you turn off its news feed algorithm
How to incorporate observation weights into an estimator
Convex hull of zeros
How should we compare neural network representations?
Marden’s theorem on complex roots
SimSWE 4: Wants, needs, and chasm-crossing - apenwarr
Willingness to look stupid
The two sides of envy at work
microsoft/hummingbird: Hummingbird compiles trained ML models into tensor computation for faster inference.
Maxwell’s Relations (Part 1)
The Analytical Edge
Weekly Research — Verdad
Sinkhorn and circular law
ARiMA is not Sufficient
fastdownload: the magic behind one of the famous 4 lines of code
Steven Buss, Politics for Software Engineers, Part 1, Part 3
Schneier, When AIs Start Hacking
Conceptual Foundations of Statistical Learning, Spring 2021
Big Tech’s guide to talking about AI ethics
Blackwell Approachability - Que-sais je?
What Is a Companion Matrix? – Nick Higham
Multimodal Neurons in Artificial Neural Networks/ Distill version of Multimodal Neurons in Artificial Neural Networks
Francis Bach, Going beyond least-squares – II: Self-concordant analysis for logistic regression
The Cauchy residue trick: spectral analysis made “easy”
On the Generalization Ability of Online Strongly Convex Programming Algorithms
Thomas Lumley visualises data pooling simply and well.