#ifndef __CONGESTION_MONITOR_H__
#define __CONGESTION_MONITOR_H__

#include "ntdtime.h"
#include "port.h"

/*
The congestion monitor takes three metrics into consideration: RTT, OWD, and LOSS.

1. Using RTT has no clock-skew problem, but the reverse path may add noise.
2. Using OWD has a clock-skew problem, so a clock-skew correction mechanism is needed.
3. Because delay-based detection has the problems above, and some intermediate nodes do not buffer
   packets that exceed the available bandwidth (they drop them instead), loss-based congestion
   detection is also needed.

The congestion monitor keeps a limited history of the metrics, replacing the oldest records with
new ones; the base value is kept and updated.

Let's focus on RTT first. The static methods can no longer be used directly; we have to choose the
right starting point to check whether the trend is increasing or decreasing. There may also be other
cross traffic that has already congested the network. Another problem is that packets may be lost
under congestion while the RTT merely fluctuates at a high level.

In other words, if we simply use an increasing latency trend as a hint of congestion, we cannot
handle the aforementioned case where the RTT fluctuates at a high level. Actually, I don't think
trend detection is that important for dynamic congestion detection, even though it certainly behaves
well for detecting available bandwidth. It should be used in combination with other methods to yield
better results.
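As a rough sketch of what such a trend test could look like (not part of this header; the
PCT-style pairwise comparison and the 0.55 threshold are assumptions for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// PCT-style pairwise trend test: the fraction of consecutive sample pairs
// that increase. Values near 1.0 indicate a strong increasing trend; values
// near 0.5 indicate noise.
double IncreasingFraction(const uint32_t* delays, size_t n)
{
    if (n < 2) return 0.0;
    size_t increases = 0;
    for (size_t i = 1; i < n; ++i)
        if (delays[i] > delays[i - 1]) ++increases;
    return static_cast<double>(increases) / static_cast<double>(n - 1);
}

// The 0.55 threshold is an assumed tuning constant, not part of this design.
bool HasIncreasingTrend(const uint32_t* delays, size_t n)
{
    return IncreasingFraction(delays, n) > 0.55;
}
```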

Clock-adjustment detection is not handled here, because it is difficult when the network is
congested; besides, the clock adjustment caused by ntpd amounts to only about 6 ms of drift by
default. If later tests show that this is truly a problem, we will have to cope with it then.

Congestion detection first takes delay into consideration. Here the delay means queuing time in the
intermediate nodes. The base_delay in a period is stored so the queuing delay can be calculated.
Note that there are some challenges:
1) Route change. This causes no problem if the OWD becomes smaller, since we can simply update
base_delay with the smaller value. But if the OWD becomes larger, we cannot tell whether the
increase comes from congestion; if we don't update the base delay with the bigger value, we may
draw the false conclusion that congestion is happening and slow our sending rate.

   The solution is simple for now: we keep only a window of base_delay values and choose the
   smallest one. Once a sample ages out of the window, base_delay is updated with the newer value,
   so there is only a small period during which we see a degraded estimate. This is acceptable
   because route changes do not occur frequently.
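   A minimal sketch of this windowed-minimum idea; the names, the window length of 6 slots, and the
   rotation hooks are illustrative assumptions, not this header's final API:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Keep one minimum delay per period; the effective base delay is the smallest
// minimum in the window. When a period expires, the oldest slot is overwritten,
// so a stale pre-route-change minimum eventually ages out.
struct BaseDelayWindow {
    static const size_t kSlots = 6;      // assumed window length
    uint32_t slot_min[kSlots];
    size_t   cur;

    BaseDelayWindow() : cur(0) {
        std::fill(slot_min, slot_min + kSlots, UINT32_MAX);
    }
    void OnSample(uint32_t delay) {      // called for every delay sample
        slot_min[cur] = std::min(slot_min[cur], delay);
    }
    void OnPeriodEnd() {                 // called when the period timer fires
        cur = (cur + 1) % kSlots;
        slot_min[cur] = UINT32_MAX;      // start collecting a fresh minimum
    }
    uint32_t BaseDelay() const {
        return *std::min_element(slot_min, slot_min + kSlots);
    }
};
```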

   We can also use a heuristic to identify route changes: when the smoothed delay is detected to be
   unbearable, we first find a pivot by comparing each adjacent packet pair (i, i+1) and choosing
   the pair with the largest difference, thus splitting the sample set into two parts. If that
   difference is large enough, we then check whether each part shows an increasing trend of its own.
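   The pivot search described above could look like the following sketch (the output convention,
   a signed difference plus a pivot index, is an assumption):

```cpp
#include <cstddef>
#include <cstdint>

// Find the adjacent pair (i, i+1) with the largest delay difference.
// The index of the pair's first element is written to *pivot, and the
// signed difference delays[i+1] - delays[i] is returned, so a positive
// value means the delay jumped upward at the pivot.
int64_t FindPivot(const uint32_t* delays, size_t n, size_t* pivot)
{
    *pivot = 0;
    int64_t best = 0, best_mag = 0;
    for (size_t i = 0; i + 1 < n; ++i) {
        int64_t diff = (int64_t)delays[i + 1] - (int64_t)delays[i];
        int64_t mag = diff < 0 ? -diff : diff;
        if (mag > best_mag) {
            best = diff;
            best_mag = mag;
            *pivot = i;
        }
    }
    return best;
}
```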



Now the congestion detection only gives three hints: not congested, congested, and severely
congested.

TCP streams and UDP streams are treated differently.

********************
Pseudo code

    record_metrics();

    remove_clock_skew();

    thresh_delay = min(a * loss_delay, acceptable_delay);

    if (pkt_loss) {
        check_loss_pattern();
    }

    // tcp and udp have different thresholds
    if (udp) {

        congest_idx = trip_time_based_congestion_detection();

        if (congest_idx > thresh) {
            // may be congestion, but we need to re-check

            // if we don't track the true base_delay under congestion, we can only fall
            // back on loss-based detection, so here we just treat the stored base_delay
            // as the true base_delay

            // check whether the increase is caused by a route change: find the pivot
            largest_diff = find_pivot();
            if (largest_diff > delay_diff_threshold && positive_change) {
                // go on to check the second subset
                if (route_changed()) {
                    update_base_delay();
                    return IsCongested(second_subset);
                }
            }

            // normal path

        } else {
        }
    } else { // tcp
        ...
    }


*/

// Keep 50 samples with 100ms between samples, i.e. 5s.
// Our congestion detection will be based on the samples from the last 5 seconds;
// the clock-skew problem is negligible under this condition.
#define DETECT_INTERVAL 5000 // 5s
#define DETECT_DELAY_SIZE (DETECT_INTERVAL/PROBING_DELAY) //50

// We need to keep track of the history list and choose the minimum as the true base delay.
// We do this to make the delay-based algorithm more robust against route changes.

// If the history is too short, our congestion detection may not be sensitive enough, because the
// base delay will be overestimated during congestion; if the history is too long, our congestion
// detection will adapt too slowly to route changes.

// Let's suppose 2 min is enough for our congestion control to make adjustments.
// During this period, clock skew should be taken care of: experiments suggest that a clock skew
// of 1 ms per 10 seconds is possible, so we should apply clock compensation.

// Reset delay_base every 2 minutes. The clock
// skew is dealt with by observing the delay base in the other
// direction, and adjusting our own upwards if the opposite direction
// delay base keeps going down

// We will keep track of base delay every 20 seconds in 2min period
#define CLOCK_DRIFT_INTERVAL (20 * 1000) // clock drift of 2ms per 20 seconds, 2ms precision for delay
#define CC_THRESHOLD (2 * 60 * 1000) // 2 min
#define DELAY_BASE_HISTORY (CC_THRESHOLD/CLOCK_DRIFT_INTERVAL)
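/*
The skew handling described in the comments above, watching the delay base of the opposite
direction and shifting our own upward when it keeps going down, might look roughly like this
sketch; SkewCompensator and its unsmoothed one-step drift estimate are illustrative assumptions,
and a real implementation would smooth over several observations:

```cpp
#include <cstdint>

// If the peer's delay base (measured on the reverse path) keeps drifting
// downward, our clock is likely running fast relative to the peer's, so we
// shift our own delay base upward by the observed drift.
struct SkewCompensator {
    uint32_t peer_delay_base;
    bool     initialized;

    SkewCompensator() : peer_delay_base(0), initialized(false) {}

    // Returns the number of ms to shift our delay_base upward (0 if none).
    uint32_t OnPeerDelayBase(uint32_t new_peer_base) {
        if (!initialized) {
            peer_delay_base = new_peer_base;
            initialized = true;
            return 0;
        }
        uint32_t shift = 0;
        if (new_peer_base < peer_delay_base)
            shift = peer_delay_base - new_peer_base;   // downward drift observed
        peer_delay_base = new_peer_base;
        return shift;
    }
};
```
*/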

#define CUR_SAMPLE_SIZE (CLOCK_DRIFT_INTERVAL/PROBING_DELAY) //200
#define TREND_PERIOD (5*1000) //5s
#define TREND_SAMPLE_SIZE (TREND_PERIOD/PROBING_DELAY) //50

class MetricsHist
{
public:
	MetricsHist();
	void AddMetrics(NtdMetrics* pMetrics);

	bool GetDelayIncreasingTrend();

	uint32 GetLastDelay();
	uint32 GetLastLoss();

	uint32 GetLastLossRatioByCnt(int32 perCnt);
	uint32 GetLastLossRatioByPeriod(int32 period);

	std::vector<uint32> GetLossRatioByCnt(int32 perCnt);
	std::vector<uint32> GetLossRatioByPeriod(int32 period);//ms

	inline size_t GetLastIndex(size_t idx){
		return idx == 0?(CUR_SAMPLE_SIZE-1):(idx-1);
	}

	NtdMetrics hist[CUR_SAMPLE_SIZE];
	size_t cur_idx;
};

class DelayHist 
{
public: 
	DelayHist();
	void clear();
	void shift(const uint32 offset);
	void add_sample(uint32 sample);
    //uint32 get_value();
	uint32 get_last();
	//double get_congestion_idx();
	//bool get_increasing_trend();

	uint32 delay_base;

	//struct DELAY_SAMPLE{
	//	DELAY_SAMPLE()
	//	{
	//		tick = 0;
	//		delay = 0;
	//	}
	//	uint32 tick;
	//	uint32 delay;
	//};
	// this is the history of delay samples,
	// normalized by using the delay_base. These
	// values are always greater than 0 and measures
	// the queuing delay in ms
	uint32 cur_delay_hist[CUR_SAMPLE_SIZE];
	size_t cur_delay_idx;

	double delay_avg;
	double delay_var;
	double delay_avg_min;
	double delay_avg_max;
	double delay_var_min;
	double delay_var_max;

	// this is the history of delay_base. It's
	// a number that doesn't have an absolute meaning
	// only relative. It doesn't make sense to initialize
	// it to anything other than values relative to
	// what's been seen in the real world.
	uint32 delay_base_hist[DELAY_BASE_HISTORY];
	size_t delay_base_idx;
	// the time when we last stepped the delay_base_idx
	uint32 delay_base_time;

	bool delay_base_initialized;
};

class CongestMonitor;

//All CongestMonitors will be added into SharedBottleneckMonitor, but only connections that have
//been detected as congested are grouped with related connections. Each CongestMonitor notifies
//SharedBottleneckMonitor about the current status of its connection, and it is up to
//SharedBottleneckMonitor to decide when and how to split those connections into groups.

class ICongestListener
{
public:
	enum CONGEST_LEVEL{
		NO_CONGEST,
		LIGHT_CONGEST,
		SEVERE_CONGEST
	};
	virtual void OnStatusChanged(const CongestMonitor* pCMonitor,
		                         uint32 delay, 
								 uint32 loss, 
								 ICongestListener::CONGEST_LEVEL level) = 0;
};

//TODO: do we need to buffer the APP's data at the QoS level? Let's focus more on congestion control in the later design

class BandwidthAlloc
{
public:

};
//CongestControl will implement ICongestListener to react to congestion.
//When congestion is detected, it will not react to it immediately; it should wait for a while to
//gather information about other connections and then make a decision.
//It relies on SharedBottleneckMonitor to get the connections within a group.
class CongestControl : public ICongestListener
{
public:


};

//SharedBottleneckMonitor is a Singleton, all CongestMonitors which want to identify their groups should
//be added into SharedBottleneckMonitor.
//SharedBottleneckMonitor will also implement ICongestListener, which gives notification on the current 
//status changes of specific connection.
class SharedBottleneckMonitor : public ICongestListener, public SingletonModel<SharedBottleneckMonitor>
{
public:
	void AddCongestMonitor(CongestMonitor* pCMonitor);
	void RemoveCongestMonitor(CongestMonitor* pCMonitor);

	virtual void OnStatusChanged(const CongestMonitor* pCMonitor,
		uint32 delay, 
		uint32 loss, 
		ICongestListener::CONGEST_LEVEL level);

private:
	std::set<CongestMonitor *> pCMonitor_set_;
};

class DelayFilter
{
public:
	uint32 AddSample(const NtdMetrics* pMetrics);
private:
	DelayHist delay_hist_;
	DelayHist reverse_delay_hist_;
};

class CongestMonitor: public IConnMonitor
{
public:
	//We use a conv_id rather than a pointer to the object because we need to decouple
	//congestion detection from the other modules; this makes the design more flexible,
	//and we can even implement this logic in the QoS test tool.
	CongestMonitor(uint16 conv_id);
	virtual ~CongestMonitor();
	virtual void AddSample(const NtdMetrics* pMetrics);

	void AddCongestListener(ICongestListener* pListener);
	void RemoveCongestListener(ICongestListener* pListener);

	void NotifyStatusChanged(uint32 delay, 
							 uint32 loss, 
							 ICongestListener::CONGEST_LEVEL level);

protected:
	DelayFilter delay_filter_;
	MetricsHist metrics_hist_;
	uint16 conv_id_;

	std::set<ICongestListener*> pCongListener_set_;
};

//Loss pattern detection is responsible for detecting whether the root cause of packet loss is
//congestion; if so, we should take action to control the congestion.
//There are two cases we need to distinguish:
//1. Loss caused by congestion
//2. Loss caused by random physical errors, such as on a wireless link
//   (we must not slow our sending rate in this case, because that would make things worse)
//When there is packet loss and we have not yet decided that the link is congested,
//we first check the delay (making sure we have enough samples); if it shows an increasing
//trend -> congested.
//If not, we are not sure, but it can still be congestion (delay fluctuates when congested),
//so we enter the LossPatternDetection process:
//1. Assume the link is congested
//2. Slow down the sending rate (step by step, by half or by a percentage)
//3. Verify our assumption: if the loss rate correlates with the sending rate, the loss should be
//   caused by congestion.
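/*
Step 3's verification could be sketched as a plain Pearson correlation between sending rate and
loss rate over a series of rate adjustments; the function names and the 0.7 threshold here are
assumed for illustration and are not part of this design:

```cpp
#include <cmath>
#include <cstddef>

// Pearson correlation between sending rate and loss rate over n probes.
// A strong positive correlation suggests the loss is congestion-induced:
// sending faster loses more. Returns 0 when either series is constant.
double RateLossCorrelation(const double* rate, const double* loss, size_t n)
{
    double sr = 0, sl = 0;
    for (size_t i = 0; i < n; ++i) { sr += rate[i]; sl += loss[i]; }
    const double mr = sr / n, ml = sl / n;
    double cov = 0, vr = 0, vl = 0;
    for (size_t i = 0; i < n; ++i) {
        cov += (rate[i] - mr) * (loss[i] - ml);
        vr  += (rate[i] - mr) * (rate[i] - mr);
        vl  += (loss[i] - ml) * (loss[i] - ml);
    }
    if (vr == 0 || vl == 0) return 0;
    return cov / std::sqrt(vr * vl);
}

bool LossLooksCongestive(const double* rate, const double* loss, size_t n)
{
    return RateLossCorrelation(rate, loss, n) > 0.7;  // assumed threshold
}
```
*/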

//TODO: This part is tied to the control logic, so it is left for later.
class LossPatternDetection
{
public:
	LossPatternDetection(MetricsHist& hist);
	virtual ~LossPatternDetection();

	//If loss is detected, we first check whether there is an increasing trend in delay.
	//If yes, we can declare that the link is congested.
	//Otherwise, we have to watch for a while to filter out noise:
	//1. If the loss or delay is unbearable, the link may already be congested; we can add the
	//connections to the bottleneck monitor to look for correlated links. If any are found, we can
	//adjust bandwidth between them and verify the adjustment later. If no correlation is found,
	//we can try to compete for the bandwidth and watch the correlation between loss and
	//sending rate.
	//2. If the loss or delay is bearable, we do nothing.
	void CheckCongestion();

	void SwitchToCompeteMode();

private:
	MetricsHist& hist_;
	enum STATUS{
		INIT,
		BOTTLENECK_DETECT,
		BANDWIDTH_COMPETE,
	};
	STATUS status_;
};

class UDPCongestMonitor: public CongestMonitor
{
public:
	UDPCongestMonitor(uint16 conv_id);
	//Record loss history
	virtual void AddSample(const NtdMetrics* pMetrics);

private:
	LossPatternDetection loss_detector_;
};

class TCPCongestMonitor: public CongestMonitor
{
public:
	//TCPCongestMonitor differs from UDPCongestMonitor in how it processes latency.
	TCPCongestMonitor(uint16 conv_id);
};

#endif//__CONGESTION_MONITOR_H__