I want to take a moment to explain the spirit in which this cover letter is written. While drafting it, I wondered: how can I stand out, and how can I convince you that I am the perfect candidate? I soon realised that, in terms of credentials, I pale in comparison with many other candidates.
Disheartened, I pulled myself together by remembering how I like to think of myself: a truth-seeking, top-down-thinking meta-optimiser. It matters more to me that the truly optimal candidate is selected, given the impact that choice will likely have on AI safety. I have therefore decided to introduce myself as objectively as I can.
Over the past year, I have come to define my life's purpose as:
> Contribute to my utmost ability towards an AI-enabled future where all humanity can flourish.
In the months since I pivoted to AI safety, I have come to observe that the field as a whole lacks two critical elements: 1) a principled, holistic technical framework, and 2) a meta-level epistemic strategy bridging methodologies from epistemology, project management and research strategy. I believe both are not only necessary but also feasible; they would, however, require coordination across almost all parts of the AI ecosystem and society more broadly.
From *Shallow review of live agendas in alignment & safety*:
> The de facto agenda of the uncoordinated and only-partially paradigmatic field is process-based supervision / defence in depth / hodgepodge / endgame safety / Shlegeris v1. We will throw together a dozen things which work in sub-AGIs and hope: RLHF/DPO + mass activation patching + scoping models down + boxing + dubiously scalable oversight + myopic training + data curation + passable automated alignment research (proof assistants) + … We will also slow things down by creating a (hackable, itself slow OODA) safety culture. Who knows.
So far I’ve come to realise that my superpowers are:
My aptitudes and interests strongly align with developing and integrating the missing pieces mentioned above. This creates a profound sense of urgency within me to contribute.
With the above context in mind, the highest-impact career option for me (all things considered) would be that of a research lead, or member of an advisory board at a leading AI lab or government agency, focused on technical AI safety or AI governance/policy. I believe such a role would best position me to bridge boundaries between approaches, domains and organisations.
However, the corresponding career path traditionally involves a PhD or significant experience as a research engineer or AI policy consultant. With neither of these, my default options are: