
	These are considered optimizations since they require specific Executor features.

	Batching-based optimizations:
		L1 (DONE)
			- Pick out batchable (singular) tasks and submit them to a batching-capable Executor.
		L2 (DONE)
			Same as L1 and:
			- Batch more tasks, up to a fixed limit based on the number of CPU threads.
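			The L1/L2 batching pass could look roughly like this. Task, collect_batches, and the explicit limit parameter are illustrative names, not the engine's real API; in L2 the limit would be derived from the CPU thread count (e.g. std::thread::hardware_concurrency()):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical task record; only the "batchable" flag matters here.
struct Task {
    int  id;
    bool batchable;  // singular task that a batching Executor can accept
};

// Pick out batchable tasks and group them into batches of at most
// `limit` entries each; non-batchable tasks are skipped and would be
// submitted to the Executor individually.
std::vector<std::vector<Task>> collect_batches(const std::vector<Task>& tasks,
                                               std::size_t limit) {
    std::vector<std::vector<Task>> batches;
    std::vector<Task> current;
    for (const Task& t : tasks) {
        if (!t.batchable)
            continue;  // submitted one-by-one, outside this sketch
        current.push_back(t);
        if (current.size() == limit) {  // batch full: start a new one
            batches.push_back(current);
            current.clear();
        }
    }
    if (!current.empty())
        batches.push_back(current);  // flush the partial final batch
    return batches;
}
```

			With a limit of 2, six tasks (one of them non-batchable) yield three batches, the last one partial.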
	
	Scheduling/General Optimizations:
		L1 (WIP)
			- Submit HIGH_PRIORITY and LOW_LATENCY tasks directly to the Executor. (DONE)
			- Split very large async submissions into smaller sets for better batching. (DONE)
				This also acts as a bug fix :)
			- Improvements regarding async submissions and shuffling [WIP]
				Proposal 1: Use a very fast random number generator. (DONE)
				Proposal 2: Don't shuffle at all.
				Proposal 3: Only shuffle if a prioritization flag is NOT present.
			- STANDARD ONLY: Directly submit to the executor if EEX_executor_sched_accepts_always is present.
			- Begin framework for tile-based batching
				Current batching design:
				|   [ 1 2 3 ]
				|   [ 1 ]
				|   [ 1 2 3 4 ]
				|     |  \  \  \___________________________
				|     |   \  \___________                  \
				V     |    \             \                  \
				    [ THREAD 1 ][ THREAD 2 ][ THREAD 3 ][ THREAD 4 ]
				Proposed redesign:
				|   [ 1 ]
				|   [ 1 2 ]
				|   - tile 1  tile 2  tile 3 -
				|   [ [ 1 2 ] [ 3 4 ] [ 5 ] ]
				|        |        \      \___________
				|        |         \                 \
				|        V          V                 V
				|   [ THREAD 1 ][ THREAD 2 ][ THREAD 3 ][ THREAD 4 ]
				V
				Required extensions:
					- EEX_tile_batching
					- EEX_executor_task_flags_v1
					- EEX_tile_batching_flags_v1
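			A minimal sketch of the tile grouping the redesign describes: a flat submission is split into fixed-size tiles, and each whole tile is routed to one worker thread. Tile, tile_batches, and the round-robin worker assignment are assumptions for illustration, not what EEX_tile_batching will actually specify:

```cpp
#include <cstddef>
#include <vector>

// A tile: a small group of tasks that executes together on one thread.
struct Tile {
    std::vector<int> task_ids;  // tasks belonging to this tile
    std::size_t      worker;    // thread index the tile is routed to
};

// Split `task_ids` into tiles of at most `tile_size` tasks and assign
// each tile a worker round-robin over `worker_count` threads.
std::vector<Tile> tile_batches(const std::vector<int>& task_ids,
                               std::size_t tile_size,
                               std::size_t worker_count) {
    std::vector<Tile> tiles;
    for (std::size_t i = 0; i < task_ids.size(); i += tile_size) {
        Tile t;
        for (std::size_t j = i; j < task_ids.size() && j < i + tile_size; ++j)
            t.task_ids.push_back(task_ids[j]);
        t.worker = tiles.size() % worker_count;  // round-robin assignment
        tiles.push_back(t);
    }
    return tiles;
}
```

			For the submission [ 1 2 3 4 5 ] with a tile size of 2 and four threads, this produces the tiles in the diagram: [ 1 2 ], [ 3 4 ], and [ 5 ], routed to threads 1-3.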
		L2 (Planned)
			- Finish tile-based batching and implement it.