As AI models continue to advance into many real-life applications, their ability to maintain reliable quality over time becomes increasingly important. The principal challenge in this task stems from the very nature of current machine learning models, which depend on the data as it was at the time of training. In this study, we present the first analysis of AI "aging": the complex, multifaceted phenomenon of AI model quality degradation as more time passes since the last model training cycle. Using datasets from four different industries (healthcare operations, transportation, finance, and weather) and four standard machine learning models, we identify and describe the main temporal degradation patterns. We also demonstrate the principal differences between temporal model degradation and related concepts that have been explored previously, such as data concept drift and continuous learning. Finally, we indicate potential causes of temporal degradation and suggest approaches to detecting aging and reducing its impact.

Artificial Intelligence (AI), and machine learning (ML) models in particular, are becoming increasingly present in many real-life applications, from finance [1, 2] and manufacturing [3] to agriculture [4] and healthcare [5, 6]. As these practical uses multiply, the applied aspects of sustainable AI model quality become increasingly important, calling for more efforts to make AI implementations dependable and robust.

The principal challenge in maintaining AI model quality stems from the very nature of current ML models. Trained from and driven by data, these models become inherently dependent on the data as it was at the time of training. We can loosely divide this model-data dependency into two principal types: location-specific and time-specific.

Model dependency on the training data location has been studied to a great extent, and several approaches have already been proposed to minimize its impact on model quality. For example, federated learning allows models to be trained, validated, and shared between different sites with varying data patterns [7]. Transfer learning gives the model additional training when it is moved to a new environment [8]. AI bias tests help identify and stop the propagation of data bias and "shortcuts" into ML models [9, 10]. As a result, it is becoming more and more common to apply these tools to newly trained models, to make sure they can scale correctly to their diverse environments [11, 12, 13].

Model dependency on time, on the contrary, has been virtually ignored in practical AI implementations: it is commonly assumed that once a model has been trained to achieve the required quality, it is ready to be deployed and used without further updates or retraining. However, data-producing environments often change with time, and their statistical properties change alongside them [14, 15, 16]. Known as "concept drift", this evolution of data inevitably affects the quality of the models, to the point where the model may no longer correspond to its new reality [17, 18, 19]. While much research has been done on various types and markers of temporal data drifts, there is no comprehensive study of how the models themselves respond to these drifts. Therefore, from the model quality perspective, temporal model degradation introduces a completely new challenge, which we would like to refer to as "AI aging".
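The aging effect described here can be illustrated with a minimal synthetic sketch (not an experiment from the study itself): a simple threshold classifier is "trained" on an initial data window and then frozen, while the underlying feature distribution drifts over subsequent time windows. The drift model, window sizes, and all names below are illustrative assumptions, not the paper's methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_window(shift, n=2000):
    """Synthetic two-class data; `shift` moves both class means over
    time (a toy covariate-drift model, for illustration only)."""
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=shift + 2.0 * y, scale=0.5, size=n)
    return x, y

# "Train" at time 0: fix the decision threshold at the midpoint
# between the two empirical class means.
x0, y0 = make_window(shift=0.0)
threshold = (x0[y0 == 0].mean() + x0[y0 == 1].mean()) / 2

# Evaluate the frozen model on later windows as the data drifts away
# from the training distribution.
accuracies = []
for t, shift in enumerate([0.0, 0.5, 1.0, 1.5, 2.0]):
    x, y = make_window(shift)
    acc = float(np.mean((x > threshold).astype(int) == y))
    accuracies.append(acc)
    print(f"window t={t}  drift={shift:.1f}  accuracy={acc:.3f}")
```

On this toy data the frozen model's accuracy decays from near-perfect on the training-time window toward chance level on the most drifted window, even though nothing about the model itself has changed; only the data-producing environment has moved.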