Quick notes after reading *Time Series Anomaly Detection Using Convolutional Neural Networks and Transfer Learning*:
- When I read about their idea of using U-Nets for anomaly detection, my first reaction was: cool, but where will you find all that labeled data? It was a pleasant surprise when they later mentioned that they generate the data synthetically, covering all the types of time series and anomalies they want to train the model on.
- Instead of plain change-point detection (i.e., whether there was a short spike or dip) or trend-change detection, their U-Net-based approach goes deeper: it supports multiple anomaly classes and both single- and multi-label prediction over sub-sequences of a time series. Impressive!
- The paper lacked details on input normalization (important since time series vary in scale) and data augmentation. They also mentioned up-sampling and down-sampling the data to fit the 1024-length sequences their model expects, but I didn't understand how they do that.
- I couldn't find their code online. That's a pity, because it could have filled in the gaps left by the text.
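The paper doesn't spell out its synthetic data generator, but the general idea of injecting labeled anomalies into clean series can be sketched as follows. Everything here (the base signal, the anomaly types, the label scheme) is my own guess at what such a generator might look like, not the authors' procedure:

```python
import numpy as np

def make_series(n=1024, seed=0):
    """Toy generator: a noisy sine wave with two injected anomalies and
    per-timestep labels (0 = normal, 1 = spike, 2 = level shift).
    This is a hypothetical sketch, not the paper's actual generator."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    x = np.sin(2 * np.pi * t / 128) + 0.1 * rng.standard_normal(n)
    y = np.zeros(n, dtype=int)

    # Inject a short spike in the first half of the series.
    i = int(rng.integers(0, n // 2 - 3))
    x[i:i + 3] += 3.0
    y[i:i + 3] = 1

    # Inject a longer level-shift anomaly in the second half.
    j = int(rng.integers(n // 2, n - 128))
    x[j:j + 128] += 1.5
    y[j:j + 128] = 2

    return x, y

x, y = make_series()
```

Because the labels are produced alongside the signal, a segmentation-style model (like a 1-D U-Net) can be trained directly on the per-timestep targets, with no manual annotation.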
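On the up/down-sampling question: the paper doesn't explain its method, but one common, simple way to force an arbitrary-length series into a fixed 1024-sample window is linear interpolation onto a new time grid (labels would need nearest-neighbor resampling instead, to stay integer-valued). A minimal sketch, under that assumption:

```python
import numpy as np

def resample_to(x, length=1024):
    """Resample a 1-D series to a fixed length via linear interpolation.
    One plausible approach; the paper does not say what the authors use."""
    old_grid = np.linspace(0.0, 1.0, num=len(x))
    new_grid = np.linspace(0.0, 1.0, num=length)
    return np.interp(new_grid, old_grid, x)

short = np.sin(np.linspace(0.0, 6.28, 300))    # up-sample 300 -> 1024
long_ = np.sin(np.linspace(0.0, 6.28, 5000))   # down-sample 5000 -> 1024
a = resample_to(short)
b = resample_to(long_)
```

Down-sampling this way can alias away very short spikes, which may be why the distinction matters for anomaly detection; a windowed or anti-aliased resampler (e.g. `scipy.signal.resample`) would be an alternative.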