An argument from analogy draws a conclusion by finding a similar situation in which the same conclusion would be drawn. While arguments of this type are not always valid, they are the primary means by which humans learn about the world. They are also a basic building block of intelligence in general - both natural and artificial.
Typically, the argument from analogy applies to something in the physical world. For example:
1) Last time I ate at this restaurant, they were slow. Therefore, they are likely to be slow this time as well.
2) I ran a vaccine experiment on 1000 subjects, and they showed a statistically significant drop in infection rates. Therefore, this vaccine will be effective in the general population, just as it was in the test population.
3) I know how this movie is going to end, because I’ve seen many similar movies with a similar plot that end in the same way.
These arguments apply to the physical world - they are not mathematical or exact logical statements.
If we adopt the regularity assumption - that patterns exist in the universe - then analogical thinking will work more often than random chance would suggest. Unfortunately, there is no surefire way to tell which analogies will hold and which will not. There are several strategies for figuring out which analogies work:
1) Trial and Error
2) Run a controlled experiment, so that the analogy between the test population and the general population is as close as possible
3) Find analogies of analogies to try to anticipate whether or not a given analogy is likely to work
4) Increase the number of “similar situations” (gathering training data in machine learning) in order to understand the robustness of the analogy.
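As an illustration of strategy 4 (not from the original text), here is a minimal sketch of why more "similar situations" make an analogy more robust. It simulates estimating an unknown rate from repeated observations; the true rate of 0.3 and the sample sizes are arbitrary choices for the demonstration.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def estimate(true_rate, n):
    """Estimate an unknown rate from n observed 'similar situations'."""
    hits = sum(random.random() < true_rate for _ in range(n))
    return hits / n

# More observations -> estimates cluster more tightly around the
# true rate, i.e. the analogy from sample to population gets stronger.
for n in [10, 100, 100000]:
    print(n, estimate(0.3, n))
```

With only 10 observations the estimate can be far off; with 100,000 it lands very close to the true rate, which is the intuition behind gathering more training data.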
Machine learning algorithms and Bayesian inference both rely on arguments from analogy in order to work. One algorithm built explicitly on the argument from analogy is the nearest neighbor algorithm - discussed in episode 57 of The Local Maximum. However, all machine learning algorithms use arguments from analogy on some level, because they assume that datasets contain patterns and that these patterns can be generalized to produce intelligent behavior in other (analogous) situations.
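To make the connection concrete, here is a minimal sketch of a 1-nearest-neighbor classifier (the function name, toy data, and labels are illustrative, not from the episode). It is the argument from analogy made literal: find the most similar past situation and draw the same conclusion.

```python
import math

def nearest_neighbor_classify(train, query):
    """Classify `query` by the label of the closest training point.

    `train` is a list of (point, label) pairs, where each point is a
    tuple of numbers. The most similar past situation - the nearest
    neighbor in Euclidean distance - supplies the prediction.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    _, label = min(train, key=lambda pair: distance(pair[0], query))
    return label

# Toy data echoing the restaurant example: past visits, labeled by outcome.
train = [((0.0, 0.0), "slow"), ((0.5, 0.2), "slow"),
         ((5.0, 5.0), "fast"), ((5.2, 4.8), "fast")]

print(nearest_neighbor_classify(train, (0.3, 0.1)))  # "slow"
print(nearest_neighbor_classify(train, (4.9, 5.1)))  # "fast"
```

A new situation near the "slow" experiences is predicted to be slow - the same inference as in example 1 above, just written as code.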