Essentially, these AIs solve all kinds of "how" problems for us. We now know how to play a good game of chess or Go, how to reply to any text prompt, and how to create an image from just a text description. This leaves us with the "what" problem: what do we want? What do we value? This is a hard question. Can an AI answer it for us? No, I don't think so. At least, not any of the AIs we have today. Remember, the fundamental function of all the AIs we've discussed is to take in data and give us what we said we wanted. So there seems to be some kind of bootstrapping step missing: we have to give the AI "what we want" as an input, but expect "what we REALLY want" as an output.

I've mentioned before that I work as a data scientist. I've been at it for a long while now. Long enough, in fact, that they interviewed me at my job about what it was like to be me, and published the interview internally as part of their "technical career storyline" feature. The following are the contents of that interview, reproduced with permission and some minor editing.

I do a lot of work in Jupyter notebooks, and I often find it useful to post one in its entirety in a blog post. The following method is what I use. It requires no additional plugins or programs, and lets you embed the notebook as a non-interactive HTML element in a Gutenberg block in your post.
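The excerpt above cuts off before the method itself, so as a rough illustration only: a notebook file is just JSON, and a minimal stdlib-only sketch of turning its cells into a static HTML fragment (the kind of thing you could paste into a Custom HTML Gutenberg block) might look like this. The function name and details here are hypothetical, not the post's actual method; in practice one would more likely run `jupyter nbconvert --to html`.

```python
import html
import json

def notebook_to_html(path):
    """Render a .ipynb file's cells as a bare, non-interactive HTML fragment.

    Hypothetical helper for illustration; a real conversion would normally
    use `jupyter nbconvert --to html` instead.
    """
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    parts = []
    for cell in nb.get("cells", []):
        source = "".join(cell.get("source", []))
        if cell.get("cell_type") == "code":
            # Escape the code and wrap it in <pre> so it renders statically.
            parts.append("<pre><code>%s</code></pre>" % html.escape(source))
        elif cell.get("cell_type") == "markdown":
            # Markdown is left unrendered here; nbconvert would render it.
            parts.append("<p>%s</p>" % html.escape(source))
    return "\n".join(parts)
```

The resulting string can then be dropped into a Custom HTML block in the Gutenberg editor.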

Stanford's antibody study in Santa Clara County [...] reported a population prevalence of 2.5% to 4.2% for COVID-19 antibodies, and a corresponding infection fatality rate of 0.12% to 0.2%. This result, if true, would have huge implications, as the lower fatality rate would dramatically change the calculus on important policy decisions [...]. However, the study has also drawn numerous criticisms, most notably that its results are inconsistent with the false positive rate of the antibody test. Here, I attempt to derive what the results ought to have been under a better methodology.
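The core of that criticism can be made concrete with the standard correction for an imperfect test (the Rogan-Gladen estimator): true prevalence relates to the raw positive rate through the test's sensitivity and specificity, p = (p_obs + specificity - 1) / (sensitivity + specificity - 1). A small sketch with hypothetical numbers (not the study's actual figures) shows how strongly the estimate depends on the false positive rate:

```python
def corrected_prevalence(p_obs, sensitivity, specificity):
    """Rogan-Gladen correction: back out true prevalence from the raw
    positive rate of an imperfect test. The result can be zero or negative
    when p_obs falls at or below the false positive rate (1 - specificity),
    which is exactly the inconsistency critics pointed to."""
    return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical numbers for illustration only, not the study's figures:
p_obs = 0.015        # 1.5% of samples tested positive
sensitivity = 0.80   # test catches 80% of true positives
print(corrected_prevalence(p_obs, sensitivity, specificity=0.995))
print(corrected_prevalence(p_obs, sensitivity, specificity=0.985))
```

With these numbers, a one-percentage-point change in specificity moves the prevalence estimate from about 1.3% all the way to zero, which is why the false positive rate dominates the conclusion.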

This was the state of the "Bayesian evaluation for the likelihood of Christ's resurrection" post, as of Easter 2020. This post will remain unchanged, while the linked post above will have further edits. I also put up a Facebook post on that date, which is essentially the same as the 2019 Facebook post. The offer […]

This was the state of the "Bayesian evaluation for the likelihood of Christ's resurrection" post, as of Easter 2019, in the "third draft" form. Some of the formatting has been lost in the blog migration, particularly in the Jupyter notebooks, but the content has been retained. This post will remain unchanged, while the other post […]

This was the state of the "Bayesian evaluation for the likelihood of Christ's resurrection" post, as of Easter 2018, in the "second draft" form. Some of the formatting has been lost in the blog migration, particularly in the Jupyter notebooks, but the content has been retained. This post will remain unchanged, while the other post […]

This is still a work in progress. It will change as I continue to add and edit the content. I consider this to be in its "third draft" form. It will take some more time to complete, and it may be messy in the meantime. A version of this post as it appeared on Easter […]

This is another Jupyter notebook. It contains Python code that generates the probabilities of a "skeptic's distribution" generating a Jesus-level resurrection report. First, we import some modules:

```python
import numpy as np
import pandas as pd
from scipy.stats import lognorm, genpareto
```

We then write a function to simulate getting the maximum value out of n […]
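The excerpt cuts off mid-sentence, but as a hedged sketch of the kind of function it describes — simulating the maximum of n draws from a heavy-tailed distribution — something like the following would do; the function name, signature, and parameter values here are assumptions for illustration, not the notebook's actual code:

```python
import numpy as np
from scipy.stats import genpareto

def max_of_n(dist, n, rng):
    """Draw n samples from a frozen scipy distribution and return the max.

    Sketch only: the original notebook's function is not shown in the
    excerpt, so this is an illustrative stand-in.
    """
    return dist.rvs(size=n, random_state=rng).max()

rng = np.random.default_rng(0)
dist = genpareto(c=0.5)   # heavy-tailed stand-in for a "skeptic's distribution"
sample_max = max_of_n(dist, 1000, rng)
```

Repeating such draws many times gives an empirical distribution for the largest value one would expect to see naturalistically among n reports.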

This is a Jupyter notebook. It contains the Python code which generates the relationship between the number of "outliers" (as previously defined) and the probability of naturalistically generating a Jesus-level resurrection report. First, we import some modules:

```python
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import genpareto
```

Next, we […]