Data Science at Home

Francesco Gadaleta
Jun 23, 2019 • 12min

Episode 65: AI knows biology. Or does it?

The successes of deep learning for text analytics, also introduced in a recent post about sentiment analysis published here, are undeniable. Many other tasks in NLP have also benefitted from the superiority of deep learning methods over more traditional approaches. Such extraordinary results have also been possible thanks to the neural network approach to learning meaningful character and word embeddings, that is, the representation space in which semantically similar objects are mapped to nearby vectors. All this is strictly related to a field one might initially find disconnected or off-topic: biology.

Don't forget to subscribe to our Newsletter at amethix.com and get the latest updates in AI and machine learning. We do not spam. Promise!

References

[1] Rives A., et al., "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences", bioRxiv, doi: https://doi.org/10.1101/622803
[2] Vaswani A., et al., "Attention is all you need", Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
[3] Bahdanau D., et al., "Neural machine translation by jointly learning to align and translate", arXiv, http://arxiv.org/abs/1409.0473.
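The episode itself contains no code, but the idea of "semantically similar objects mapped to nearby vectors" can be illustrated with a tiny, hand-made example. The words, vectors, and dimensions below are purely illustrative, not the output of a trained model:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-crafted 3-d "embeddings": the first two words were given similar
# coordinates on purpose, mimicking what a trained embedding would learn.
embeddings = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.75, 0.20]),
    "apple": np.array([0.10, 0.20, 0.95]),
}

# In a real embedding space, related words score higher than unrelated ones.
king_queen = cosine(embeddings["king"], embeddings["queen"])
king_apple = cosine(embeddings["king"], embeddings["apple"])
```

The same geometric intuition carries over to the protein sequences of [1], where nearby vectors correspond to biologically related sequences.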
Jun 14, 2019 • 13min

Episode 64: Get the best shot at NLP sentiment analysis

The rapid diffusion of social media like Facebook and Twitter, and the massive use of different types of forums like Reddit, Quora, etc., are producing an impressive amount of text data every day.

There is one specific activity that many business owners have been contemplating over the last five years: identifying the social sentiment of their brand by analysing the conversations of their users. In this episode I explain how one can get the best shot at classifying sentences with deep learning and word embeddings.

Additional material

Schematic representation of how to learn a word embedding matrix E by training a neural network that, given the previous M words, predicts the next word in a sentence.

Word2Vec example source code: https://gist.github.com/rlangone/ded90673f65e932fd14ae53a26e89eee#file-word2vec_example-py

References

[1] Mikolov, T. et al., "Distributed Representations of Words and Phrases and their Compositionality", Advances in Neural Information Processing Systems 26, pages 3111-3119, 2013.
[2] The Best Embedding Method for Sentiment Classification, https://medium.com/@bramblexu/blog-md-34c5d082a8c5
[3] The state of sentiment analysis: word, sub-word and character embedding, https://amethix.com/state-of-sentiment-analysis-embedding/
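The schematic described above has no accompanying code in the show notes, so here is a minimal numpy sketch of the same idea: learning an embedding matrix E by training a tiny network that predicts the next word from the previous M words. The corpus, sizes, and hyperparameters are all illustrative:

```python
import numpy as np

# Toy corpus; in practice this would be a large text collection.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word2idx = {w: i for i, w in enumerate(vocab)}
V, D, M = len(vocab), 8, 2   # vocabulary size, embedding dim, context length

rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (V, D))   # embedding matrix: one row per word
W = rng.normal(0, 0.1, (D, V))   # projection from embedding to vocabulary logits

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Training pairs: (previous M words, next word)
pairs = [(corpus[i:i + M], corpus[i + M]) for i in range(len(corpus) - M)]

lr = 0.1
for epoch in range(200):
    for ctx, target in pairs:
        idx = [word2idx[w] for w in ctx]
        h = E[idx].mean(axis=0)            # average the context embeddings
        p = softmax(h @ W)                 # predicted next-word distribution
        t = word2idx[target]
        dlogits = p.copy()                 # gradient of cross-entropy w.r.t. logits
        dlogits[t] -= 1.0
        dW = np.outer(h, dlogits)
        dh = W @ dlogits
        W -= lr * dW
        for j in idx:                      # context words share the gradient of h
            E[j] -= lr * dh / M
```

After training, rows of E for words that appear in similar contexts end up close to each other, which is the property sentiment classifiers exploit.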
Jun 4, 2019 • 21min

Episode 63: Financial time series and machine learning

In this episode I speak to Alexandr Honchar, data scientist and owner of the blog https://medium.com/@alexrachnog. Alexandr has written very interesting posts about time series analysis for financial data, and his blog is in my personal list of best tutorial blogs.

We discuss financial time series and machine learning, what makes predicting the price of stocks such a challenging task, and why machine learning might not be enough. As usual, I ask Alexandr how he sees machine learning in the next 10 years. His answer, in my opinion quite futuristic, makes perfect sense.

You can contact Alexandr on:
Twitter: https://twitter.com/AlexRachnog
Facebook: https://www.facebook.com/rachnog
Medium: https://medium.com/@alexrachnog

Enjoy the show!
May 28, 2019 • 42min

Episode 62: AI and the future of banking with Chris Skinner

In this episode I have a wonderful conversation with Chris Skinner. Chris and I got in touch at The Banking Scene 2019, a fintech conference recently held in Brussels. During that conference he talked as a real troublemaker (that's how he defines himself), saying that "People are not educated with loans, credit, money" and that "Banks are failing at digital". After I got my hands on his latest book, Digital Human, I invited him to the show to ask him a few questions about innovation, regulation and technology in finance.
May 21, 2019 • 22min

Episode 61: The 4 best use cases of entropy in machine learning

It all starts from physics. The entropy of an isolated system never decreases... Everyone, at some point in their physics class at school, learned this. What does it have to do with machine learning? To find out, listen to the show.

References

Entropy in machine learning: https://amethix.com/entropy-in-machine-learning/
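The blog post linked above covers the actual use cases; as a minimal illustration of the connection, here is Shannon entropy computed over a set of class labels, the quantity behind information-gain splits in decision trees (the labels are made up for the example):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# A pure set carries no uncertainty (entropy 0); a 50/50 mix is maximally
# uncertain (1 bit). The information gain of a decision-tree split is the
# parent entropy minus the weighted entropy of the children.
pure = entropy(["spam", "spam", "spam", "spam"])   # 0 bits
mixed = entropy(["spam", "ham", "spam", "ham"])    # 1 bit
```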
May 16, 2019 • 40min

Episode 60: Predicting your mouse click (and a crash course in deep learning)

Deep learning is the future. Get a crash course on deep learning. Now! In this episode I speak to Oliver Zeigermann, author of Deep Learning Crash Course, published by Manning Publications at https://www.manning.com/livevideo/deep-learning-crash-course

Oliver (Twitter: @DJCordhose) is a veteran of neural networks and machine learning. In addition to the course, which teaches you concepts from prototype to production, he's working on a really cool project that predicts something people do every day: clicking their mouse.

If you use promo code poddatascienceathome19 you get a 40% discount on all products on the Manning platform.

Enjoy the show!

References

Deep Learning Crash Course (Manning Publications): https://www.manning.com/livevideo/deep-learning-crash-course?a_aid=djcordhose&a_bid=e8e77cbf
Companion notebooks for the code samples of the video course "Deep Learning Crash Course": https://github.com/DJCordhose/deep-learning-crash-course-notebooks/blob/master/README.md
Next-button-to-click predictor source code: https://github.com/DJCordhose/ux-by-tfjs
May 7, 2019 • 24min

Episode 59: How to fool a smart camera with deep learning

In this episode I met three crazy researchers from KU Leuven (Belgium) who found a method to fool surveillance cameras and stay hidden just by holding a special t-shirt. We discussed the technique they used and some consequences of their findings. They published their paper on arXiv and made their source code available at https://gitlab.com/EAVISE/adversarial-yolo

Enjoy the show!

References

"Fooling automated surveillance cameras: adversarial patches to attack person detection", Simen Thys, Wiebe Van Ranst, Toon Goedemé
EAVISE Research Group, KU Leuven (Belgium): https://iiw.kuleuven.be/onderzoek/eavise
Apr 30, 2019 • 20min

Episode 58: There is physics in deep learning!

There is a connection between gradient-descent-based optimizers and the dynamics of damped harmonic oscillators. What does that mean? We now have a better theory for optimization algorithms. In this episode I explain how all this works. All the formulas I mention in the episode can be found in the post "The physics of optimization algorithms". Enjoy the show.
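The formulas live in the post mentioned above; as a rough sketch of the analogy (constants and names are illustrative), here is heavy-ball gradient descent on a quadratic, whose iterates behave like an underdamped harmonic oscillator settling into the minimum:

```python
# Heavy-ball (momentum) gradient descent on f(x) = 0.5 * k * x**2.
# The update mirrors a time-discretised damped harmonic oscillator:
# the momentum term plays the role of inertia, (1 - momentum) acts as
# friction, and the curvature k acts as the spring constant.
def heavy_ball(grad, x0, lr=0.1, momentum=0.9, steps=200):
    x, v = x0, 0.0
    trajectory = [x]
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # velocity update with friction
        x = x + v                        # position update
        trajectory.append(x)
    return trajectory

k = 1.0
traj = heavy_ball(lambda x: k * x, x0=5.0)
# With these constants the iterates overshoot the minimum and oscillate
# with shrinking amplitude: the signature of an underdamped oscillator.
```

Increasing the momentum coefficient toward 1 reduces the effective friction, which is why over-tuned momentum can oscillate for a long time before converging.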
Apr 23, 2019 • 16min

Episode 57: Neural networks with infinite layers

How are differential equations related to neural networks? What are the benefits of re-thinking a neural network as a differential equation engine? In this episode we explain all this and we provide some material that is worth learning. Enjoy the show!

[Figure: Residual Block]

References

[1] K. He, et al., "Deep Residual Learning for Image Recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[2] S. Hochreiter, et al., "Long short-term memory", Neural Computation 9(8), pages 1735-1780, 1997.
[3] Q. Liao, et al., "Bridging the gaps between residual learning, recurrent neural networks and visual cortex", arXiv preprint, arXiv:1604.03640, 2016.
[4] Y. Lu, et al., "Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations", Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.
[5] T. Q. Chen, et al., "Neural Ordinary Differential Equations", Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.
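One way to see the connection discussed in the episode (and formalised in [4] and [5]): a residual block computes x + f(x), which is exactly one forward-Euler step of the ODE dx/dt = f(x). A toy sketch with a fixed, hand-picked f standing in for a trained layer:

```python
import math

def f(x):
    # Toy dynamics dx/dt = -x, whose exact solution is x0 * exp(-t).
    return -x

def residual_net(x0, n_layers, T=1.0):
    """Stack of residual blocks x <- x + h*f(x), i.e. forward Euler with step h."""
    h = T / n_layers
    x = x0
    for _ in range(n_layers):
        x = x + h * f(x)   # one residual block = one Euler integration step
    return x

# As the network gets deeper (smaller step h), its output converges to the
# exact solution of the ODE: the infinite-layer limit studied by Neural ODEs.
shallow = residual_net(2.0, 4)
deep = residual_net(2.0, 1000)
exact = 2.0 * math.exp(-1.0)
```

Neural ODEs replace the stack of discrete blocks with a black-box ODE solver, so depth becomes a continuous integration time rather than a layer count.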
Apr 16, 2019 • 17min

Episode 56: The graph network

From the beginning of AI in the 1950s until the 1980s, symbolic AI approaches dominated the field. These approaches, also known as expert systems, used mathematical symbols to represent objects and the relationships between them, in order to depict the extensive knowledge bases built by humans. The opposite of the symbolic AI paradigm is named connectionism, which is behind the machine learning approaches of today.
