Original article by Bernard Marr for Forbes.com
When we visit our doctor or go into hospital, we have faith in the knowledge that the healthcare professionals involved are treating us according to proven scientific methods, otherwise known as evidence-based medicine (EBM). This means they’re prescribing drugs or selecting treatment methods that have proven successful in clinical research.
Although the term ‘evidence-based medicine’ only dates back to the early 1990s, the concept itself is much older. Controlled trials were routinely being conducted as early as the 1940s, and clinical knowledge and expertise were already being disseminated in medical journals and textbooks long before that. (In fact, the oldest medical journal still running today, The New England Journal of Medicine, was founded in 1812. Even older, the first official clinical trial was conducted in 1747, investigating the treatment of scurvy in sailors.)
Clinical trials and studies are all about conducting research into diseases and conditions, and the various treatment methods that may ease symptoms or eradicate the illness altogether—they explore which treatments work best for which illnesses and in which groups of patients. All around the world, EBM is the established standard for the provision of healthcare. But, in the age of big data, that might be about to change.
Clinical trials work by testing new treatments in small groups at first, looking at how well the treatment works and identifying any side effects. If a trial proves promising, it is expanded to include larger groups of people. Often the trial will compare the new treatment to other treatments by separating patients into different groups, each trialing a different treatment. This is usually done by a process called randomization, where patients are assigned to the various groups at random.
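To make the randomization step concrete, here is a minimal sketch of how patients might be assigned to trial arms at random. The function and arm names are illustrative assumptions, not details from the article; real trials use far more rigorous randomization schemes (stratification, blinding, and so on).

```python
import random

def randomize(patients, arms):
    """Randomly assign each patient to one of the trial arms.

    `patients` is a list of patient identifiers and `arms` a list of
    group names -- both hypothetical names for illustration only.
    """
    shuffled = patients[:]
    random.shuffle(shuffled)
    # Deal patients out round-robin so the arms end up roughly equal in size.
    assignment = {arm: [] for arm in arms}
    for i, patient in enumerate(shuffled):
        assignment[arms[i % len(arms)]].append(patient)
    return assignment

# Example: ten patients split randomly between two treatment groups.
groups = randomize([f"patient-{n}" for n in range(10)],
                   ["new treatment", "standard treatment"])
```

Because the list is shuffled before assignment, which group any individual patient lands in is down to chance, which is the point of randomization: it prevents systematic differences between the groups from biasing the comparison.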