﻿<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Too many open files</title>
<meta name="GENERATOR" content="WinCHM">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

</head>

<body>
<P><FONT face=Courier>There are multiple places where Linux can have limits on 
the number of file descriptors you are allowed to open.</FONT></P>
<P><FONT face=Courier>You can check the following:</FONT></P>
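<P><FONT face=Courier>Given the reference to 
<STRONG><EM>/proc/sys/fs/file-max</EM></STRONG> below, the check was 
presumably:</FONT></P>

```shell
# System-wide ceiling on open file descriptors (all processes combined)
cat /proc/sys/fs/file-max
```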
<P><FONT face=Courier>That will give you the system-wide limit on file 
descriptors.</FONT></P>
<P><FONT face=Courier>On the shell level, this will tell you your personal 
limit:</FONT></P>
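<P><FONT face=Courier>Presumably this refers to <STRONG>ulimit</STRONG>, the 
same command the answer below uses:</FONT></P>

```shell
ulimit -n    # soft limit on open files for this shell
ulimit -Hn   # hard limit (the most the soft limit can be raised to)
```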
<P><FONT face=Courier>However, if you're closing your sockets correctly, you 
shouldn't hit this error unless you're opening a lot of simultaneous 
connections. It sounds like something is preventing your sockets from being 
closed appropriately; I would verify that they are being handled 
properly.</FONT></P>
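<P><FONT face=Courier>One way to verify this (not part of the original answer, 
but a common technique) is to count the descriptors the process actually holds 
via /proc and watch whether the number keeps growing:</FONT></P>

```shell
# Count open descriptors of the current shell ($$); substitute your
# server's PID. A steadily climbing count suggests a descriptor leak.
ls /proc/$$/fd | wc -l
```

<P><FONT face=Courier><STRONG>lsof -p &lt;PID&gt;</STRONG> gives a more 
descriptive listing (sockets, files, pipes) if you need to see what the 
descriptors are.</FONT></P>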
<P><FONT face=Courier>I had a similar problem. A quick solution is:</FONT></P>
<P><FONT face=Courier><STRONG>ulimit -n 4096</STRONG></FONT></P>
<P><FONT face=Courier>The explanation is as follows: each server connection is 
a file descriptor. On CentOS, Red Hat, and Fedora (and probably other 
distributions), the per-user limit on open files defaults to 1024 (no idea 
why). You can see it easily by typing: <STRONG>ulimit -n</STRONG></FONT></P>
<P><FONT face=Courier>Note that this has little relation to the system-wide 
maximum (<STRONG><EM>/proc/sys/fs/file-max</EM></STRONG>).</FONT></P>
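<P><FONT face=Courier>To see the two limits side by side (a small sketch, 
assuming a Linux box):</FONT></P>

```shell
echo "per-shell soft limit: $(ulimit -n)"
echo "system-wide maximum:  $(cat /proc/sys/fs/file-max)"
```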
<P><FONT face=Courier>In my case it was a problem with Redis, so I did:</FONT></P>
<P><FONT face=Courier><STRONG>ulimit -n 4096; redis-server -c xxxx</STRONG></FONT></P>
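<P><FONT face=Courier>The same pattern works for any server: raise the limit 
in a shell, then exec the real binary from it so the server inherits the new 
limit. A minimal wrapper sketch (the script name is a placeholder):</FONT></P>

```shell
#!/bin/sh
# run-with-fds: raise the open-file limit for this shell, then replace
# the shell with the given command, which inherits the raised limit.
ulimit -n 4096
exec "$@"
```

<P><FONT face=Courier>Usage: <STRONG>./run-with-fds redis-server -c 
xxxx</STRONG>, or substitute your own server command.</FONT></P>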
<P><FONT face=Courier>In your case, instead of redis-server, you need to start 
your own server.</FONT></P></body>
</html>
