What scientific research can (and can't) tell us about how outdoor lighting and crime interact

8/1/2025

Image credit: Wikimedia Commons

1863 words / 7-minute read

Our world is one of generally complicated problems that lack simple solutions. Reality is often messy and confusing. Yet politics gravitates toward easy fixes and even promises them. Ill-conceived attempts to solve such problems can make things worse by generating new ones. Such is the case where the interaction of outdoor lighting and crime is concerned.

We have written here in the past about that interaction. We pointed out how inconsistent the data are, and we explained why "feelings of safety" may have more to do with the lighting-crime connection than any real underlying relationship.

New research by Paul Marchant and Paul Norman, published in Applied Spatial Analysis and Policy, challenges the narrative again. It questions the underpinnings of much of the scholarly literature on the subject and shows, in a well-researched case study, how 'common sense' assumptions about lighting and crime may be flat wrong. This post is as much about how scientists do research as it is about the results of the specific study used as an example.

Marchant and Norman examined data from the city of Leeds, UK, during a massive municipal street lighting retrofit. Between 2005 and 2013, Leeds replaced some 80,000 street lights across its administrative territory, exchanging its old sodium-vapor lighting for ceramic metal halide luminaires.

First, in 2022 the pair looked into whether changing to white-light street lamps improved road safety. They found "no convincing evidence ... for an improvement (or detriment) in road safety by relighting with white lamps, despite the extensive, city-wide installation efforts and associated costs." In other words, there was no evident effect one way or the other.
The study found no evidence that the new lighting had affected traffic safety, despite the data set containing over 19,000 road traffic collisions.

To secure UK Private Finance Initiative funding for the retrofit, the city touted expected benefits from crime reduction. A key claim involved a 20% reduction in night-time crime. This, in turn, would lower the social and economic opportunity costs associated with crime and yield a Benefit to Cost Ratio (BCR) of 3.75: an impressive return on investment. The retrofit's cost was justified at least in part by this forecast.

The Leeds case is one of the largest-scale municipal lighting retrofits studied rigorously to date, but the results show that its influence on crime was close to zero. "The upshot of all the fitted models," the authors wrote, "is that the effect of the new replacement white lamps on crime is small."

We recently interviewed lead author Paul Marchant for more on his paper and its significance. His illuminating (!) answers below are lightly edited for length and clarity.

DSC: Can you briefly summarize the method used in your paper for a lay audience?

Marchant: The study analysed the weekly counts of crimes recorded by the police in each of the 107 geographical areas comprising the whole of the UK city of Leeds, as street lighting was changed from orange to white light over a period of nearly nine years. The method uses the fact that the new white lighting was introduced into each of the 107 areas in different amounts at different times.

[Figure: Cumulative numbers of new lamps operating each week, by Middle Layer Super Output Area in Leeds, UK, during the study period. Figure 2 in Marchant and Norman (2022).]

Therefore, each area is at a different stage of completion at any given time, and so it is possible to assess what these differences in lighting between areas lead to in terms of differences in the occurrence of crime.
Different areas suffer different amounts of crime, with some experiencing a lot whilst others only a little. The multilevel analysis treats the 107 series as a 'family', so that the overall levels of crime can be modelled as coming from a statistical distribution with a certain average weekly crime rate and a certain spread in rates. Similar distributions are formed for aspects of the underlying area-specific trends that are either increasing or decreasing the amount of crime, due to causes other than the nature of the street lighting. Crucially, the modelling involved a term for the amount of new lighting each area had received at a given time point. This allows the effect of the new lighting to be separated from the underlying trends in crime in the different areas. Therefore, the effect on crime of the full complement of new lamps can be determined.

DSC: At the same time, it was important to you to show why earlier reports in the literature are often unreliable. You point to several factors, from 'untrustworthy controls' to unavailability of the source data. Do you think something nefarious is going on with respect to lighting and crime studies? Does it have something to do with who funds the research?

Marchant: Frankly, I don't know, but it certainly needs to be guarded against. Transparency and openness are paramount for good science. There are concerns about issues such as conflict-of-interest bias even in the regulated clinical trials area, as outlined in the Cochrane Collaboration Handbook. Anyone interested in issues around lighting and public safety, and the associated concerns about poor-quality research, might like to read my article 'Investigating whether a crime reduction measure works'. It explains how and why I started pursuing the issue.

DSC: Given the many effects that complicate light/crime studies, one might assume that getting at underlying relationships is nearly impossible.
Do you think anything like a dose-response relationship between light and crime will ever be demonstrated? Or is human behavior just too complex for that?

Marchant: In principle it could be possible, as even very small effects can be found and estimated with enough of the right sort of data, sensible modelling, and enough computing power to analyse it. Personally, I wonder whether some crime researchers talk of "dose-response" so as to (wrongly) give the impression that their work is just like that of a clinical trial. In a comprehensive study of lighting, one ought also to take account of the characteristics of the places in which the different lighting is installed, to make its results more generalisable. One can conceive of a very elaborate study, even involving randomisation, but such would take a lot of resources!

DSC: The shortcomings of research study design can lead to poor-quality work flooding the literature. How much research of a doubtful nature do you think is out there that never gets scrutinized (yet eventually becomes part of the folklore)?

Marchant: In this field, my answer is most of it! Indeed, researchers seem never to have heard of regression towards the mean. They wrongly assume the counts are Poisson [distributed], and this assumption can't be checked as they only use 'before and after' studies. I suspect that they may have only done basic statistics courses in which only statistically independent events are analysed. I believe 'folklore' is an appropriate term, as bad practice is just passed on with no adequate questioning of whether, for example, the method is appropriate or the assumptions underpinning it are met. The problem is that it is very easy to put some data into a computer, press the button for an unwittingly inappropriate analysis, get some other numbers out, and call that the result. Reviewers of journal articles are likely to have been inculcated into the same folklore of the field.
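Regression toward the mean, which Marchant mentions above, is easy to demonstrate for yourself. The short sketch below is our own illustration, not code from the study, and every number in it is invented: it simulates areas whose crime counts merely fluctuate around fixed underlying rates, then picks out the areas that looked worst in year one. Their year-two counts fall on average even though no intervention of any kind took place.

```python
import random

# Illustrative only: fixed true crime rates per area, with purely
# random year-to-year fluctuation around them (all values invented).
random.seed(42)

n_areas = 1000
true_rates = [random.uniform(20, 80) for _ in range(n_areas)]

def observe(rate):
    # A noisy annual observation: the true rate plus random fluctuation.
    return rate + random.gauss(0, 15)

year1 = [observe(r) for r in true_rates]
year2 = [observe(r) for r in true_rates]  # no intervention anywhere

# A naive scheme targets the 100 areas that looked worst in year 1.
worst = sorted(range(n_areas), key=lambda i: year1[i], reverse=True)[:100]

before = sum(year1[i] for i in worst) / len(worst)
after = sum(year2[i] for i in worst) / len(worst)

print(f"'worst' areas, year 1 mean: {before:.1f}")
print(f"same areas,   year 2 mean: {after:.1f}")  # lower, with nothing done
```

A before-and-after study that installed new lighting only in the highest-crime areas would credit this purely statistical drop to the lamps, which is exactly the trap Marchant describes.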
DSC: The conventional wisdom about research is that one way to avoid at least some of these problems is to 'pre-register' study protocols. In this framework, scientists state their intended data collection, reduction, and analysis procedures up front. Not only that, but they are further encouraged to publish these plans. This helps keep researchers honest, because any attempt to fit data to models after the fact will be more conspicuous. Yet comparatively few studies do this. Why do you think researchers avoid pre-registering their methodologies?

Marchant: Registration wasn't always the case for clinical trials, and it has taken a while for the practice to diffuse into other areas of inquiry. The 'replication crisis' has been a source of great concern about poor research. I think there has been progress in experimental areas in which a planned intervention is made, but registration is less common in observational studies, which just collect observations when there is no planned or organised intervention. However, things are gradually changing, and some organisations, like the Center for Open Science, are encouraging and facilitating the practice. Some journals, like PLOS ONE, also publish plans for research, which allows comments to be made. The protocol for our study was not publicly registered, but it was sent to three competent [and] suitably qualified academics.

I am somewhat surprised, in these cost-conscious times, that when a policy has been introduced on the basis of alleged evidence, more checks are not made that the policy has achieved what it was intended to. I think a big part of the problem is that the political class tends to be rather poor at statistical thinking.

DSC: What do you think would make for a good way forward in this field of research?

Marchant: To apply this multilevel method to datasets from other places involving lighting changes, where there is appropriate data for 'outcome measures' such as crime or road traffic collisions.
[And] get more statisticians involved. Many researchers use statistical analysis, but this does not make them statisticians. Try to engage those who ideally have a higher degree in the discipline of statistics, are recognised by a professional body such as the American Statistical Association, and who undergo continuous professional development in the subject. Try to engage statistical epidemiologists, as there are parallels between disease and crime in society.

DSC: Is the Leeds example a sufficiently cautionary tale that officials in other cities might think twice before acting on instinct rather than evidence?

Marchant: Yes! Also, a good rule of thumb is not to believe that a salesperson is giving entirely unbiased information. It would be a step forward if policies were implemented in ways that make evaluation straightforward, and if post-implementation performance were routinely checked by those who are statistically well-qualified and independent.

DSC: If your work offered one important takeaway message for readers, what would it be?

Marchant: Just because it's a commonly held belief that lighting beats crime doesn't mean the claim shouldn't be checked out. The recent Marchant and Norman study, which counters some of the shortcomings of other studies on the subject, suggests that any effect of changing to brighter white lighting, whether detrimental or beneficial, was at most rather small, and not at all what was expected. There seems to be no reason to expect that the effect of such relighting would be any different elsewhere, and so the anticipation of crime reduction is not a sound reason for increasing lighting.

As we wait to find out where lighting and crime research goes next, this new work offers some appetizing food for thought. We thank Paul Marchant for sharing his insight with our readers.