
Announcing lime - Explaining the predictions of black-box models

I'm very pleased to announce that lime has been released on CRAN. lime is an implementation of the model prediction explanation framework described in the paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier. The authors of the paper have developed a Python implementation, and while the two implementations share many overlaps, the R package is not a direct port of the Python code; rather, it is an implementation idiomatic to R, playing on the strengths of the language. The most notable difference from a user point of view is the lack of support for explaining image predictions in the R version. This is a transient situation though, as it will be added at a later stage. The R lime implementation has been a joint effort between myself and Michaël Benesty, with Michaël handling the text prediction parts and me taking care of tabular data predictions.

Before going into the how of using lime, let's talk a bit about the why. For many years models were often created just as much for understanding a system as for predicting new observations. Inspecting and understanding the models themselves was a way to make sense of the system and thus a goal in itself. The recent surge in the use of complex machine learning algorithms has shifted the expected role of a model towards the prediction side, as the models are often too complex to understand. Simply put, in order to explain more and more complex phenomena, the models must become increasingly complex themselves. While the new crop of algorithms performs impressively in the prediction department, their black-box nature presents a real problem, one that has long been swept under the carpet as more and more accurate algorithms have been pursued.

You might convince yourself that you don't really care about the mechanics of the system, that you're simply interested in high-quality predictions, but this sentiment doesn't cut it for several reasons:

- The end users of your predictions might have much less confidence in the infallibility of your algorithm, and if you cannot explain how your model reaches a prediction, they are likely to ignore it. A prime example is a system predicting medical conditions that require drastic surgery: in order for doctors to follow the predictions, they must know which factors weighed in.
- Laws and regulations are cropping up to restrict the blind use of complex models. Next year all EU countries must adhere to a new regulation requiring algorithmic decisions to be explainable to the citizens directly affected.
- Your setup might be flawed, resulting in a model that performs well on your training and test data but is in reality a poor reflection of your problem. There are many ways this can happen, and there are many stories of how it can have real consequences for affected people, such as racist bias in algorithms performing risk assessments on defendants, or models claiming to predict sexual orientation from portraits.
- You might not have built the model you use and may not have access to how it was trained, so you need to poke at it in order to understand how it behaves.

Are you really sure you have no interest in understanding the system? You probably should begin to be interested! One of the best ways to improve models is to infuse them with domain knowledge, and domain knowledge is what comes out of understanding the system.

Behind the workings of lime lies the (big) assumption that every complex model is linear on a local scale. lime takes this assumption to its natural conclusion by asserting that it is possible to fit a simple model around a single observation that mimics how the global model behaves at that locality. It is not difficult to convince yourself that this is generally sound - you usually expect two very similar observations to behave predictably, even in a complex model.
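To make this concrete, here is a minimal sketch of what such a local explanation looks like with the R package. It is an illustrative assumption rather than code from this post: the caret random forest, the iris data, and the parameter values simply stand in for whatever black-box model and dataset you are working with.

```r
library(caret)  # provides train(); any model lime knows how to query could stand in here
library(lime)

# Train a black-box model; a random forest on iris is just a placeholder
model <- train(Species ~ ., data = iris, method = "rf")

# Build an explainer from the training features (response column dropped)
explainer <- lime(iris[, -5], model)

# Fit a simple local model around two observations, keeping the four most
# influential features for the most probable class of each observation
explanation <- explain(iris[c(1, 100), -5], explainer,
                       n_labels = 1, n_features = 4)

# The feature weights of the local models can be inspected or plotted
explanation
plot_features(explanation)
```

The weights returned for each observation come from the simple model fitted at that locality, so they can be read as how much each feature pulled that particular prediction up or down.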
