#ifndef WEBSOCKET_H // identifiers with a leading double underscore are reserved for the implementation
#define WEBSOCKET_H

#include <libwebsockets.h>
#include <string.h>
#include <stdio.h>
#include <pthread.h>
#include "message.h"
#include "chatclient.h"
#include "biglist.h"
#include "datablock.h"
#include "user.h"
#include "chatroom.h"
#include "task.h"
#include "tasks.h"
#include "concurrent_queue.h"
#include "simplechatgame.h"

#define EXAMPLE_RX_BUFFER_BYTES (30)
#define EXAMPLE_RX_CHATROOM_BYTES (1024)

struct payload
{
	unsigned char data[LWS_SEND_BUFFER_PRE_PADDING + EXAMPLE_RX_BUFFER_BYTES + LWS_SEND_BUFFER_POST_PADDING];
	size_t len;
};

class websocket
{
	public:
	websocket();
	~websocket();
	biglist<user *> users;
	biglist<chatroom *> chatrooms;
	biglist<chatclient *> chatclients;
	biglist<simplechatgame *> simplechatgames;
	int64_t next_chatclientid;
	int64_t next_chatroomid;
	volatile bool shutdown;
	datablock *server_password;
	tasks *chatroom_tasks;
	pthread_t task_thread;
	bool run_async;
	biglist_item<simplechatgame *> *find_simple_game(datastring gameid);
	biglist_item<simplechatgame *> *add_simple_game(simplechatgame *game);
	void remove_client_from_simple_games(chatclient *client, bool senduserlistmessage);
	static void *task_thread_routine(void *arg); // Routine that is called from a background thread if run_async is true.
	// The following three functions are libwebsockets callback functions.
	static int callback_http( struct lws *wsi, enum lws_callback_reasons reason, void *user, void *in, size_t len );
	static int callback_chatroom( struct lws *wsi, enum lws_callback_reasons reason, void *user, void *in, size_t len );
	static int callback_example( struct lws *wsi, enum lws_callback_reasons reason, void *user, void *in, size_t len );
};

extern websocket *the_websocket; // One global variable. Shhhh. Don't tell anyone.

enum protocols
{
	PROTOCOL_HTTP = 0,
	PROTOCOL_EXAMPLE,
	PROTOCOL_CHATROOM,
	PROTOCOL_COUNT
};

// Note: because this array is static and defined in the header, every translation unit that includes it gets its own copy.
static struct lws_protocols protocols[] =
{
	/* The first protocol must always be the HTTP handler */
	{
		"http-only",   /* name */
		websocket::callback_http, /* callback */
		0,             /* No per session data. */
		0,             /* max frame size / rx buffer */
	},
	{
		"example-protocol",
		websocket::callback_example,
		0, // size of client block.
		EXAMPLE_RX_BUFFER_BYTES,
	},
	{
		"chatroom-protocol",
		websocket::callback_chatroom,
		sizeof(chatclient),
		EXAMPLE_RX_CHATROOM_BYTES,
	},
	{ NULL, NULL, 0, 0 } /* terminator */
};

#endif

/*
On current master, a situation where more than one thread tries to set a writable callback on the
same wsi at the same time will blow up, because there is no locking inside lws to protect
against that.

Master includes a lot of docs about this now. From READMEs/README.coding.md:

Libwebsockets works in a serialized event loop, in a single thread. It supports not only the
default poll() backend, but also the libuv, libev, and libevent event loop libraries, which take the same
locking-free, nonblocking event loop approach that is not threadsafe. There are several advantages
to this technique, but one disadvantage: it doesn't integrate easily if there are multiple threads
that want to use libwebsockets.

However integration to multithreaded apps is possible if you follow some guidelines.

Aside from two APIs, directly calling lws APIs from other threads is not allowed.

If you want to keep a list of live wsi, you need to use lifecycle callbacks on the protocol in the 
service thread to manage the list, with your own locking. Typically you use an ESTABLISHED callback 
to add ws wsi to your list and a CLOSED callback to remove them.
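
The lifecycle bookkeeping described above can be sketched as a mutex-protected list that the
ESTABLISHED and CLOSED callbacks update. The names (wsi_list, add, remove) and the use of an
opaque void pointer for the wsi are illustrative assumptions, not lws API; only the locking
pattern is the point.

```cpp
#include <pthread.h>
#include <stddef.h>
#include <vector>
#include <algorithm>

// Mutex-protected list of live wsi, managed from the service thread.
struct wsi_list
{
	pthread_mutex_t lock;
	std::vector<void *> items; // live wsi pointers, stored opaquely here

	wsi_list() { pthread_mutex_init(&lock, NULL); }
	~wsi_list() { pthread_mutex_destroy(&lock); }

	void add(void *wsi) // call from LWS_CALLBACK_ESTABLISHED
	{
		pthread_mutex_lock(&lock);
		items.push_back(wsi);
		pthread_mutex_unlock(&lock);
	}
	void remove(void *wsi) // call from LWS_CALLBACK_CLOSED
	{
		pthread_mutex_lock(&lock);
		items.erase(std::remove(items.begin(), items.end(), wsi), items.end());
		pthread_mutex_unlock(&lock);
	}
	size_t count() // safe to call from other threads, since it takes the lock
	{
		pthread_mutex_lock(&lock);
		size_t n = items.size();
		pthread_mutex_unlock(&lock);
		return n;
	}
};
```

The lock matters because other threads may read the list (e.g. to decide which connections need
a writable callback) while the service thread mutates it.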

LWS regulates your write activity by letting you know when you may write more on a connection.
That reflects the reality that you cannot successfully send data to a peer that has no room for it, so
you should not generate or buffer write data until you know the peer connection can take more.

Other libraries pretend that the writer is the boss who decides what happens, and absorb
as much as you want to write into local buffering. That does not scale to a lot of connections, because
it will exhaust your memory and waste time copying data around in memory needlessly.

The truth is that the receiver, along with the network between you, is the boss who decides what will happen.
If it stops accepting data, no data will move. LWS is designed to reflect that.

If you have something to send, you call lws_callback_on_writable() on the connection, and when it is 
writeable, you will get a LWS_CALLBACK_SERVER_WRITEABLE callback, where you should generate the data 
to send and send it with lws_write().

You cannot send data using lws_write() outside of the WRITEABLE callback.
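
lws_write() additionally requires LWS_PRE bytes of headroom before the payload for protocol
framing. A minimal sketch of that buffer layout, with the lws calls themselves reduced to
comments; the helper name stage_payload and the fallback LWS_PRE value of 16 are assumptions
for illustration only (include libwebsockets.h to get the real constant):

```cpp
#include <string.h>
#include <stdlib.h>

#ifndef LWS_PRE
#define LWS_PRE 16 // placeholder so this sketch compiles without libwebsockets.h
#endif

// Stage len payload bytes behind LWS_PRE bytes of padding, returning a
// pointer to the payload region that lws_write() expects. The caller must
// free (p - LWS_PRE), since that is the start of the allocation.
static unsigned char *stage_payload(const void *src, size_t len)
{
	unsigned char *buf = (unsigned char *)malloc(LWS_PRE + len);
	if (!buf)
		return NULL;
	memcpy(buf + LWS_PRE, src, len);
	return buf + LWS_PRE;
}

// Inside the LWS_CALLBACK_SERVER_WRITEABLE case of the protocol callback,
// the usage would look like:
//
//	unsigned char *p = stage_payload(msg, msglen);
//	lws_write(wsi, p, msglen, LWS_WRITE_TEXT);
//	free(p - LWS_PRE);
```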

For multithreaded apps, this corresponds to a need to be able to provoke the lws_callback_on_writable() 
action and to wake the service thread from its event loop wait (sleeping in poll() or epoll() or whatever). 
The rules above mean directly sending data on the connection from another thread is out of the question.
Therefore the two APIs mentioned above that may be used from another thread are:

 - for LWS using the default poll() event loop, lws_callback_on_writable()

 - for LWS using the libuv/libev/libevent event loop, lws_cancel_service()

If you are using the default poll() event loop, one "foreign thread" at a time may call lws_callback_on_writable()
directly for a wsi. You need to use your own locking around that to serialize multiple thread access to it.

If you implement LWS_CALLBACK_GET_THREAD_ID in protocols[0], then LWS will detect when it has been called 
from a foreign thread and automatically use lws_cancel_service() to additionally wake the service loop from its wait.

The libuv/libev/libevent event loops cannot handle being called from other threads. So there is a slightly
different scheme: you may call lws_cancel_service() to force the event loop to end its wait immediately. This then
broadcasts a callback (in the service thread context), LWS_CALLBACK_EVENT_WAIT_CANCELLED, to all protocols
on all vhosts, where you can perform your own locking and walk a list of wsi that need lws_callback_on_writable()
called on them.
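
That handshake can be sketched as a shared queue: a foreign thread records which wsi it wants
serviced and wakes the loop, and the service thread drains the queue inside the
LWS_CALLBACK_EVENT_WAIT_CANCELLED callback. The struct name pending_writables is an assumption;
the actual lws calls appear only as comments.

```cpp
#include <pthread.h>
#include <stddef.h>
#include <vector>

struct pending_writables
{
	pthread_mutex_t lock;
	std::vector<void *> wsis; // wsi pointers awaiting a writable callback

	pending_writables() { pthread_mutex_init(&lock, NULL); }
	~pending_writables() { pthread_mutex_destroy(&lock); }

	void request(void *wsi) // called from a foreign thread
	{
		pthread_mutex_lock(&lock);
		wsis.push_back(wsi);
		pthread_mutex_unlock(&lock);
		// lws_cancel_service(context); // wake the service loop from its wait
	}

	// called from the service thread, in LWS_CALLBACK_EVENT_WAIT_CANCELLED
	std::vector<void *> drain()
	{
		pthread_mutex_lock(&lock);
		std::vector<void *> out;
		out.swap(wsis); // take the whole batch under the lock
		pthread_mutex_unlock(&lock);
		// for each wsi in out: lws_callback_on_writable(wsi);
		return out;
	}
};
```

Swapping the vector under the lock keeps the critical section short: the actual
lws_callback_on_writable() calls happen after the lock is released, in the service thread where
they are legal.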

lws_cancel_service() is very cheap to call.

The obverse of this truism about the receiver being the boss is the case where we are receiving. If we get into a
situation where we actually can't usefully receive any more, perhaps because we are passing the data on and the
destination we want to send to can't receive any more, then we should "turn off RX" by using the RX flow control
API, lws_rx_flow_control(wsi, 0). When something happens that lets us accept more RX (eg, we learn our onward
connection is writeable), we can call it again to re-enable RX on the incoming wsi.

LWS stops delivering RX callbacks as soon as you use flow control to disable RX, buffering the data internally
if necessary, so you will only see RX when you can handle it. While RX is disabled, LWS stops taking
new data in... this makes the situation known to the sender by TCP "backpressure": the tx window fills and the
sender finds it cannot write any more to the connection.
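
A common way to drive lws_rx_flow_control() is high/low watermark accounting on the onward
buffer. The sketch below shows only that logic; the struct name rx_gate, the threshold values,
and the on_rx/on_drain names are illustrative assumptions, not anything from lws or this codebase.

```cpp
#include <stddef.h>

// Watermark logic for RX flow control: disable RX when the buffered byte
// count crosses the high watermark, re-enable once it falls to the low one.
struct rx_gate
{
	size_t buffered;
	size_t high, low;
	bool open; // true while RX is enabled

	rx_gate(size_t hi, size_t lo) : buffered(0), high(hi), low(lo), open(true) {}

	// Account for len newly received bytes. Returns false when the caller
	// should disable RX with lws_rx_flow_control(wsi, 0).
	bool on_rx(size_t len)
	{
		buffered += len;
		if (open && buffered >= high)
			open = false;
		return open;
	}

	// Account for len bytes forwarded onward. Returns true when the caller
	// should re-enable RX with lws_rx_flow_control(wsi, 1).
	bool on_drain(size_t len)
	{
		buffered = (len > buffered) ? 0 : buffered - len;
		if (!open && buffered <= low) {
			open = true;
			return true;
		}
		return false;
	}
};
```

Using two watermarks rather than one avoids toggling RX on and off on every byte when the
buffer hovers near a single threshold.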

See the mirror protocol implementations for example code.

If you need to service other socket or file descriptors as well as the websocket ones, you can combine them 
together with the websocket ones in one poll loop, see "External Polling Loop support" below, and still do 
it all in one thread / process context. If the need is less architectural, you can also create RAW mode client 
and serving sockets; this is how the lws plugin for the ssh server works.
*/