---
layout: post
title: Plans for ML experiments
date: '2015-04-30T01:53:00.002-07:00'
author: Alex
tags:
- Machine Learning
- Experiments
modified_time: '2015-04-30T01:53:29.311-07:00'
blogger_id: tag:blogger.com,1999:blog-307916792578626510.post-6501527875811046149
blogger_orig_url: http://brilliantlywrong.blogspot.com/2015/04/plans-for-ml-experiments.html
---

<div>
    <p>
        Now that I have defended my PhD thesis, I have more spare time to
    write for this blog.
    </p>
    <p>
        First, I'd like to write down the things I want to experiment with in the near future.
    </p>

    <ul>
        <li>probabilistic gradient boosting: first of all, modifications of AdaBoost that model not point
            predictions but distributions over particular events</li>
        <li>separate experiments with probabilistic pruning of oblivious decision trees.</li>
        <li>pruning + boosting over bagging + probabilistic GB + optimal selection of leaf values for RankBoost
            with a proper loss function. I expect this mixture of algorithms and techniques to provide a very
            efficient and reliable ranking method. At the moment, pruning works well for classification.</li>
        <li>rotation-invariant Conformal FT-like neural networks. It seems I have resolved the main issues with
            the final formula, but there are still some problems with pretraining, since I don't use binary hidden
            units. PCA is the strongest candidate for pretraining at the moment.</li>
        <li>finally, after the Avazu CTR contest I came to the strong opinion that good results in
            this area can be achieved only by finding good vector representations for categorical features
            (analogous to word2vec for words). This may be a bit tricky, since so far I only have ideas about
            heuristics that might prove useful, rather than a principled optimization approach.
        </li>
    </ul>
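    <p>
        To illustrate the last point, here is a minimal sketch of my own (a toy, not a worked-out method) of
        learning dense vectors for categorical features directly from click data. The two fields, their sizes,
        and the dot-product click model are all hypothetical choices:
    </p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two categorical fields (say, site id and ad id),
# each category mapped to a small dense vector; click probability is modeled
# as the sigmoid of the dot product of the two embeddings plus a bias.
n_sites, n_ads, dim = 50, 30, 4
site_emb = rng.normal(0, 0.1, (n_sites, dim))
ad_emb = rng.normal(0, 0.1, (n_ads, dim))
bias = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic clicks generated from hidden "true" embeddings.
true_site = rng.normal(0, 1, (n_sites, dim))
true_ad = rng.normal(0, 1, (n_ads, dim))
sites = rng.integers(0, n_sites, 20000)
ads = rng.integers(0, n_ads, 20000)
p_true = sigmoid((true_site[sites] * true_ad[ads]).sum(axis=1) - 1.0)
clicks = rng.random(20000) < p_true

# Plain SGD on log-loss; each example's gradient only touches the embeddings
# of the categories actually present in it.
lr = 0.05
for s, a, y in zip(sites, ads, clicks):
    p = sigmoid(site_emb[s] @ ad_emb[a] + bias)
    g = p - y  # d(logloss)/d(logit)
    gs = g * ad_emb[a]
    ga = g * site_emb[s]
    site_emb[s] -= lr * gs
    ad_emb[a] -= lr * ga
    bias -= lr * g

# Average log-loss on the training sample after one pass.
p_hat = sigmoid((site_emb[sites] * ad_emb[ads]).sum(axis=1) + bias)
logloss = -np.mean(clicks * np.log(p_hat) + (~clicks) * np.log(1 - p_hat))
```

    <p>
        A single SGD pass over this toy data already beats predicting the base click rate; the open question
        from the list above is what objective would make such vectors transferable across models.
    </p>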
</div>