Experimental Uncertainty Principle
Most of us are familiar with the mantra of how science progresses:
A hypothesis can never be completely proved by any finite set of experiments but it can be falsified by a single result.
In mathematics, clear-cut logical rules can usually be applied to unequivocally demonstrate the falsehood of a conjecture (notwithstanding Gödel's incompleteness theorems :)
But in real research in the physical sciences, that is not exactly how scientists process reports of experimental results. And an important reason is the way results are reported.
Let's pick an example from the Open Access Beilstein Journal of Organic Chemistry.
Here is the full description of the experiment from the supplementary materials page:
To a solution of 5a (196 mg, 0.433 mmol) in CH2Cl2 (1.8 mL) was added p-toluenesulfonic acid (19 mg, 0.11 mmol). After stirring for 0.5 hours at 0 °C, the mixture was concentrated under reduced pressure and purified by flash chromatography on silica gel (eluent: ethyl acetate: P. E. = 1: 3) to provide 7 (152 mg, 100%) as a colorless oil.
I have omitted the characterization information. Let's assume for the moment that it is completely correct.
The question is: if 10 chemists follow this procedure as described, will they get a 100% yield of pure product?
I think that it is quite possible that the results will vary wildly, including many complete failures. Here is why:
1) The reaction is carried out at 0 °C for 30 minutes, but the conditions of the work-up are completely unspecified. We don't know the pressure, the bath temperature, or the duration of the solvent evaporation. The temperature of the rotovap bath will vary wildly from lab to lab, depending on the vacuum achieved and personal preference. This is key because the conditions of the work-up (warmer and more concentrated) are much harsher than the reported reaction conditions. My guess is that when this gets indexed in a database the reaction conditions will be further stripped of detail and likely end up as "0 °C, 30 min".
2) The chromatography step does not specify how much silica to use, the dimensions of the column, the number of fractions, the TLC images of the fractions, the amount of solvent used to load the reaction mixture, etc. It may even be the case that the ratio of solvents was changed over the course of the chromatography - in a situation like this some would use a good solvent like methylene chloride to load the mixture then chase it with a solvent mixture containing a lower ethyl acetate/petroleum ether ratio.
3) A 100% isolated yield after chromatography means that not a single milligram was lost during transfer to the column and that all fractions containing the product were very pure. Ethyl acetate is notorious for increasing apparent product yields because it is sometimes difficult to remove on the vacuum pump. I would like to see the NMRs of the fractions.
This last point also raises the issue of what a researcher does when confronted with an apparent 101% yield - since this is not chemically plausible, it cannot be reported as such. Does the researcher assume a bit of residual solvent and slice off a milligram in the report? We can't tell from the information given in journals.
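The reported numbers alone are enough to see how tight the 100% claim is. A quick back-of-the-envelope sketch (the molar mass of 7 is not given in the excerpt above, so the first calculation simply back-computes what molar mass the yield claim implies):

```python
# Figures taken directly from the published procedure
mmol_5a = 0.433        # limiting reagent 5a, in mmol
mass_7_mg = 152.0      # isolated mass of product 7, in mg
reported_yield = 1.00  # "100%"

# A 100% yield from 0.433 mmol means 0.433 mmol of product was isolated,
# so the claim implicitly fixes the molar mass of 7 (mg / mmol = g/mol):
implied_mw = mass_7_mg / (mmol_5a * reported_yield)
print(f"implied MW of 7: {implied_mw:.0f} g/mol")  # ~351 g/mol

# Conversely, a single milligram of residual ethyl acetate hiding in
# those 152 mg would shift the apparent yield by:
shift_pct = 1.0 / mass_7_mg * 100
print(f"1 mg of trapped solvent shifts the apparent yield by {shift_pct:.2f}%")
```

In other words, the difference between a reported 100% and an implausible 101% is on the order of one to two milligrams of trapped solvent - well within what ethyl acetate can leave behind.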
I want to make it clear that I am not picking on the authors for reporting in this way. Within the current norms of the organic chemistry community, this is an acceptable way to report laboratory procedures in peer reviewed journals.
Of course all (or most) of these details should have been recorded in the laboratory notebook. I understand that protocols in papers were originally abbreviated to save space. But now, with unlimited online supplementary materials associated with papers, researchers could scan their notebooks and all associated documents. That is not required by any chemistry journal I know of, and I have not seen it done.
Keep in mind that this is not new work - researchers already have (or should have) all of this as a routine part of doing research. This is one big advantage of Open Notebook Science - very little extra effort required. (Cameron Neylon also has a very nice recent summary of his thoughts on this.)
Any chemist will tell you (if they are honest) that there is almost always a mistake, however small, in every experiment. Because everyone agrees to report experiments in a highly abbreviated form, it is convenient to finish more quickly and get that all-important paper out the door. Do you completely restart an experiment because you measure a 101% apparent yield? Or do you realize that you just don't have time and take a "shortcut" of some type to get that paper out?
All of this would go away if we came clean about our experiments - the good, the bad, and the ugly. Let's stop pretending that we did the reaction EXACTLY as stated in the published abbreviated protocol, and we might start to get out of this quagmire.
We don't have to change the way we abbreviate experiments - just link to the relevant pages in the laboratory notebook in the supplementary sections of papers.
As chemists try to make sense of the physical world and process results from other researchers, they have to evaluate the meaning of experiments published like this one. Instead of processing the information algorithmically, they apply a kind of fuzzy logic: more weight is given to results backed by more evidence.
With the limited information provided in this particular experimental description, I would expect that it is possible to get this reaction to work in good yield but I would not question the fundamental laws of nature if some chemists report that it fails completely. If I had access to the laboratory notebook and all raw data, including how the reaction was monitored, I would weigh the evidence of each report quite differently.
The more information one has about an experiment, the more confidence one can place in the results. But it is never possible to have complete confidence in any result, no matter how much information is provided. And because providing more information costs more in time and money, a balance has to be struck.
We might call this the experimental uncertainty principle:
All experimental results are uncertain to some degree. Uncertainty can be reduced with more information but then fewer experiments can be performed with the same resources.
For example, an experiment like EXP064 provides extensive links to monitoring runs after each step in the reaction and provides evidence of the purity of the starting materials. By contrast EXP134 records 4 parallel reactions with only photographs as results. The purpose of the first experiment was to understand the Ugi reaction, while the second aims to quickly identify Ugi reagents that lead to easily purified products. When these reactions get compared, the second carries far less weight than the first - but we only know that by looking at the details in the notebook.
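One crude way to make that weighting explicit is to score each report by the kinds of evidence it provides. The categories and weights below are entirely illustrative - EXP064 and EXP134 are real notebook entries, but these scores and evidence labels are my own invention for the sake of the sketch:

```python
# Toy evidence-weighting scheme (weights are illustrative, not any standard)
EVIDENCE_WEIGHTS = {
    "monitoring_runs": 3,   # reaction monitored after each step
    "raw_spectra": 3,       # raw NMR/MS data linked for products and fractions
    "sm_purity": 2,         # purity of starting materials documented
    "photographs": 1,       # photographs of the reactions only
}

def confidence_score(evidence):
    """Sum the weights of the evidence types a report provides."""
    return sum(EVIDENCE_WEIGHTS[e] for e in evidence)

# Hypothetical readings of the two notebook entries discussed above
exp064 = ["monitoring_runs", "raw_spectra", "sm_purity"]  # detailed mechanism study
exp134 = ["photographs"]                                  # quick parallel screen

print(confidence_score(exp064))  # 8
print(confidence_score(exp134))  # 1
```

The point is not the particular numbers but that the scores are only computable because the notebook exposes what evidence exists - a journal-style abbreviated write-up would give both experiments the same opaque weight.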
If we expect autonomous agents to contribute to the process of doing science (for example formulating and testing hypotheses), information has to be tagged in such a way that it incorporates a measure of uncertainty.
I suspect that it will be easier in many cases (like organic chemistry) to simply redo the experiment under known conditions rather than attempt to get hold of the original notebook.