Three modes of convolution
When doing convolution through an external API, you usually have to choose a mode.
These three modes are really just different restrictions on the range over which the convolution kernel is allowed to move.
Suppose the image is 7x7 and the filter is 3x3.
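A minimal sketch of the difference, assuming the three modes are the full / same / valid modes used by NumPy and SciPy (the 7x7 image and 3x3 filter here are just random arrays):

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(7, 7)   # 7x7 image
kernel = np.random.rand(3, 3)  # 3x3 filter

# full : kernel may slide anywhere it overlaps the image at all -> (7+3-1) x (7+3-1) = 9x9
# same : output is cropped to the input size                    -> 7x7
# valid: kernel must stay entirely inside the image             -> (7-3+1) x (7-3+1) = 5x5
for mode in ("full", "same", "valid"):
    out = convolve2d(image, kernel, mode=mode)
    print(mode, out.shape)     # (9, 9), (7, 7), (5, 5)
```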
1. Reference counting
How reference counting works: every object maintains an ob_ref field that records how many references currently point to it. Whenever a new reference points to the object, ob_ref is incremented; whenever a reference to the object goes away, ob_ref is decremented. As soon as the reference count reaches 0, the object is reclaimed immediately and its memory is released.
Drawback: it cannot handle cyclic references.
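A minimal sketch of watching the count from Python (note that sys.getrefcount reports one extra reference, because passing the object as an argument temporarily creates another one):

```python
import sys

a = []                      # one reference: a
print(sys.getrefcount(a))   # 2 = a + the temporary argument reference

b = a                       # a new reference -> the count goes up
print(sys.getrefcount(a))   # 3

del b                       # a reference goes away -> the count goes down
print(sys.getrefcount(a))   # 2
# once the count drops to 0, CPython frees the object immediately
```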
2. Mark and sweep
Python uses the mark-and-sweep algorithm to solve the cyclic-reference problem that container objects can create.
Mark phase: traverse all objects; if an object is reachable, i.e. some object still references it, mark it as reachable.
Sweep phase: traverse the objects again; any object not marked as reachable is reclaimed.
Advantage: solves the cyclic-reference problem.
Drawback: after the algorithm has run many times, small memory fragments accumulate in the program's heap.
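A small sketch of a reference cycle that reference counting alone can never reclaim, but the cycle collector can:

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # a and b reference each other -> a cycle

del a, b                      # the external references are gone, but the counts stay above 0
print(gc.collect())           # the collector finds the unreachable cycle; prints how many objects it freed
```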
3. Generational collection (assume the thresholds of the young, middle, and old generations are 700, 10, and 10; see the gc snippet after this list)
· Every 701 newly allocated objects that need GC trigger one young-generation GC.
· Every 11 young-generation GCs trigger one middle-generation GC.
· Every 11 middle-generation GCs trigger one old-generation GC (old-generation GC is also subject to other policies, so it runs even less often).
· Before a given generation is collected, the object lists of the younger generations are merged into it and collected together.
· After an object is created, it gradually moves toward the old generation over time, so it is examined for collection less and less often.
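These thresholds match CPython's defaults, which can be inspected and tuned through the gc module:

```python
import gc

print(gc.get_threshold())     # (700, 10, 10): thresholds for the young, middle, and old generations
print(gc.get_count())         # current counters for the three generations

gc.collect(0)                 # collect a specific generation (0 = young, 1 = middle, 2 = old)

# e.g. make young-generation collections less frequent
gc.set_threshold(1000, 10, 10)
```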
1. Predicting a person
2. Predicting an activity
3. Predicting a person as well as the activity
This paper shows that histograms of brain signals can be a very useful representation for data mining
activities. One of the primary advantages of the histograms is that they reduce the variable length
of signals to fixed length representations.
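A minimal sketch of that idea (numpy.histogram with a hypothetical bin count of 32 and value range; the paper's actual parameters may differ):

```python
import numpy as np

def signal_to_histogram(signal, bins=32, value_range=(-100.0, 100.0)):
    """Turn a signal of any length into a fixed-length, normalized histogram."""
    hist, _ = np.histogram(signal, bins=bins, range=value_range)
    return hist / max(len(signal), 1)           # normalize so recordings of different lengths are comparable

short_signal = np.random.randn(500) * 20        # e.g. a short EEG recording
long_signal = np.random.randn(5000) * 20        # a much longer one

print(signal_to_histogram(short_signal).shape)  # (32,)
print(signal_to_histogram(long_signal).shape)   # (32,) -- same length regardless of the input length
```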
Possible extension: combining activity prediction and emotion recognition into one system.
Wiechert, G., Triff, M., Liu, Z., Yin, Z., Zhao, S., Zhong, Z., Zhaou, R., & Lingras, P. (2016). Identifying users and activities with cognitive signal processing from a wearable headband. pp. 129-136. doi: 10.1109/ICCI-CC.2016.7862025.
Use a 1D-CNN + LSTM network architecture for machining-tool RUL (remaining useful life) prediction.
Traditional machine learning algorithms sometimes find it difficult to extract the hidden information that characterizes the tool's degradation process.
Deep learning methods tend to work better: they have strong adaptive-learning and noise-resistance abilities and can automatically extract deep features, which makes them more versatile than traditional machine learning methods.
A CNN has the capacity to automatically extract features, and an LSTM can effectively mine the hidden information in a time series.
In fact, we can combine the CNN's high-dimensional feature-extraction capacity with the LSTM's advantage on time-series problems: after the CNN extracts features, we feed them into the LSTM for training, and some improvement in accuracy and speed can be achieved.
For time-series problems, a one-dimensional convolutional neural network (1D-CNN) is more suitable than an ordinary convolutional neural network. One characteristic of the 1D-CNN is that, for time-series data, the receptive field moves only along the time direction, so local correlations between variables can be extracted.
CNN layers are better at detecting the spatial component of the data, selecting the best features for us, while RNNs are better at detecting the temporal component.
(The CNN layer is used to extract the most relevant features from the brain waves, and the LSTM is used to classify the time series.)
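A minimal sketch of this CNN-then-LSTM idea in PyTorch (the channel counts, kernel sizes, and class count are illustrative assumptions, not an architecture from either paper): the Conv1d layers slide only along the time axis to extract local features, and the LSTM then models the resulting feature sequence.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=4, n_classes=5, hidden=64):
        super().__init__()
        # 1D convolutions: the kernel moves only along the time dimension
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # the LSTM models the temporal structure of the extracted feature sequence
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, channels, time)
        feats = self.cnn(x)             # (batch, 64, time / 4)
        feats = feats.permute(0, 2, 1)  # (batch, time / 4, 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])   # classify from the last time step

x = torch.randn(8, 4, 128)              # a batch of 8 signals, 4 channels, 128 time steps
print(CNNLSTM()(x).shape)               # torch.Size([8, 5])
```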