text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
java.lang.Object org.netlib.lapack.SGTTRSorg.netlib.lapack.SGTTRS public class SGTTRS SGTTRS is a simplified interface to the JLAPACK routine sgttGTTRS solves one of the systems of equations * A*X = B or A'*X = B, * with a tridiagonal matrix A using the LU factorization computed * by SGTTRF. * * Arguments * ========= * * TRANS (input) CHARACTER * Specifies the form of the system of equations. * = 'N': A * X = B (No transpose) * = 'T': A'* X = B (Transpose) * = 'C': A'* X = B (Conjugate transpose = Transpose) * * N (input) INTEGER * The order of the matrix A. * * NRHS (input) INTEGER * The number of right hand sides, i.e., the number of columns * of the matrix B. NRHS >= 0. * * DL (input) REAL array, dimension (N-1) * The (n-1) multipliers that define the matrix L from the * LU factorization of A. * * D (input) REAL array, dimension (N) * The n diagonal elements of the upper triangular matrix U from * the LU factorization of A. * * DU (input) REAL array, dimension (N-1) * The (n-1) elements of the first super-diagonal of U. * * DU2 (input) REAL array, dimension (N-2) * The (n-2) elements of the second super-diagonal. * * B (input/output) REAL array, dimension (LDB,NRHS) * On entry, the matrix of right hand side vectors B. * On exit, B is overwritten by the solution vectors X. * * LDB (input) INTEGER * The leading dimension of the array B. LDB >= max(1,N). * * INFO (output) INTEGER * = 0: successful exit * < 0: if INFO = -i, the i-th argument had an illegal value * * ===================================================================== * * .. Local Scalars .. public SGTTRS() public static void SGTTRS(java.lang.String trans, int n, int nrhs, float[] dl, float[] d, float[] du, float[] du2, int[] ipiv, float[][] b, intW info)
http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SGTTRS.html
CC-MAIN-2017-51
refinedweb
277
54.52
rpc_ns_mgmt_entry_delete- deletes an entry from the name service database #include <dce/rpc.h> void rpc_ns_mgmt_entry_delete( delete._not_rpc_entry Not an RPC entry. - rpc_s_unsupported_name_syntax Unsupported name syntax. The rpc_ns_mgmt_entry_delete() routine removes an RPC entry from the name service database. - Note: - Management applications use this routine only when an entry is no longer needed, such as when a server is permanently removed from service. If the entry is a member of a group or profile, it must also be deleted from the group or profile. Permissions Required The application needs read permission for the target name service entry. The application also needs delete permission for the entry or for the parent directory. None. rpc_ns_mgmt_entry_create(). Please note that the html version of this specification may contain formatting aberrations. The definitive version is available as an electronic publication on CD-ROM from The Open Group.
http://pubs.opengroup.org/onlinepubs/9629399/rpc_ns_mgmt_entry_delete.htm
CC-MAIN-2014-15
refinedweb
140
50.43
I had read the RC reciever part of your blog, but that isn't what I am trying to do right? #include <Servo.h>Servo myservo;int val = 50;void arm(){ // arm the speed controller Serial.println("Arming"); setSpeed(50); delay(2000); //delay 2 second}void setSpeed(int speed){ // speed is from 0 to 100 where 0 is off and 100 is maximum speed //the following maps speed values of 0-100 to angles from 0-180, int angle = map(speed, 0, 100, 0, 180); myservo.write(angle); }void setup(){ myservo.attach(8); arm(); //setup serial communication Serial.begin(9600); }void loop(){ while(!Serial.available()==0) { val = Serial.read() - '0'; val = val * 10; setSpeed(val); Serial.println(val); } } Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=86883.msg691449
CC-MAIN-2016-40
refinedweb
153
59.8
Here user given a string and user also given the word to count the number of occurrences. Our task is to count the number of occurrences and print it. Input: Python is an interpreted high-level programming language for general purpose programming. Enter the word to count occurrence ::>programming Output:: 2 wordoccurences(n,p) /* n is input string and p is the word to count occurrence */ Step 1: split the string by space Step 2: we use one counter variable c and it’s initialized by 0 and if word is match then c is incremented by 1. Step 3: then we searching using for loop. Step 4: if condition is true then counter c is increased and display the final result. def wordoccurences(n, p): x = n.split(" ") c = 0 for i in range(0, len(x)): # if match found increase count if (p == x[i]): c = c + 1 return c # Driver code n=input("Enter String ::>") p=input("Enter the word to count occurrence ::>") print("THE NUMBER OF OCCURRENCE OF A WORD ",p,"is",wordoccurences(n, p)) # To count the number of occurrence of a word in the given string Enter String ::>python is an interpreted high level programming language for general purpose programming Enter the word to count occurrence ::>programming THE NUMBER OF OCCURRENCE OF A WORD programming is 2
https://www.tutorialspoint.com/write-a-python-program-to-count-occurrences-of-a-word-in-string
CC-MAIN-2021-04
refinedweb
223
53.75
> In this post well be discussing lots of ways to tune web servers and proxies. Please do not cargo-cult them. For the sake of the scientific method, apply them one-by-one, measure their effect, and decide whether they are indeed useful in your environment. Far to often I see people apply ideas from posts they've read or talks they've seen without stopping to think whether or not it makes sense in the context they're applying it. Always think about context, and measure to make sure it actually works! And just for reference, AWS does provide enhanced networking capabilities on VPC:-... I observe, though, that if you are tuning a system to this level of detail you likely have a number of web servers behind a load balancer. To be complete, the discussion should include optimization of interactions with the load balancer, e.g. where to terminate https, etc. Let's say I want to prepare a server to respond quickly to HTTP requests, from all over the world. How do I optimize where I put it? Generally there are three ways I can tackle this: 1. I configure/install my own server somewhere 2. Or rent a preconfigured dedicated server I can only do so much with 3. I rent Xen/KVM on a hopefully not-overcrowded/oversold host Obviously the 1st is the most expensive (I must own my own hardware; failures mean a trip to the DC or smart hands), the 2nd will remove some flexibility, and the 3rd will impose the most restrictions but be the cheapest. For reference, knowing how to pick a good network (#1) would be interesting to learn about. I've always been curious about that, although I don't exactly have anything to rack right now. Are there any physical locations in the world that will offer the lowest latency to the highest number of users? Do some providers have connections to better backbones? Etc. #2 is not impossible - currently lists a HP SL170s with dual L5360s, 24GB, 2TB and 20TB bandwidth @ 1Gbit for $50/mo. It's cool to know this kind of thing exists. But I don't know how good Delimiter's network(s) is/are (this is in Atlanta FWIW). #3 is what I'm the most interested in at this point, although this option does present the biggest challenge. Overselling is a tricky proposition. Hosting seems to be typically sold on the basis of how fast `dd` finishes (which is an atrocious and utterly wrong benchmark - most tests dd /dev/zero to a disk file, which will go through the disk cache). Not many people seem to setup a tuned Web server and then run ab or httperf on it from a remote with known-excellent networking. That's incredibly sad! Handling gaming or voice traffic is probably a good idea for the target I'd like to be able to hit - I don't want to do precisely that, but if my server's latency is good enough to handle that I'd be very happy. If someone can point me to a thorough article like this on the lua module, I will thank her/him forever. When I've previously tuned a server I have used both of those to my advantage... Another comment on here talked about this ignoring an existing load balancer so maybe those sysctls are more appropriate on an LB? It would be nice if someone make a docker image with all the tuning set (except the hardware) It would have be nicer, if the author has shown what the end result of this optimization looks like, with numbers, comparing against a standard run-of-the-mill nginx setup. If you just render the geometry, its pretty, but it doesnt. Hes a magician. (..., section "Painting with Code") He contributed some basic lessons about it to Khan Academy's "Pixar in a Box" section on Environment modeling (..., direct link to first video:). Also worth checking out is "Elevated", one of the coolest 4066-byte programs you'll ever see:. This page has a great cache of formulae for various primitives, patterns and blend functions: The boolean and domain stuff there blew my mind. It's incredible how concise the descriptions are - goes a long way to explaining some of the more magical stuff that came out of the Shaderlab/Demoscene world. Question - are many people using SDF/Ray marching in games outside of very niche uses? (clouds etc)? The performance benefits only seem to kick in for very specific applications but I'd like to see whole levels/environments built like this. Maybe as the current gen of GPUs drops in price and become ubiquitous it will become a more feasible approach all round. PS If anyone has access to a Rift (or a Vive via ReVive with a bit of fiddling) try out It's a lot of Shaderlab stuff in VR and some of it is truly astonishing in a headset. (I know you can run the website itself in WebVR but it's clunky as hell and performance isn't great) Oh, no, sir. This wasn't built by a magician. This was built by a wizard. A man who can actually do what magicians pretend to do. He does things that I wouldn't have dreamed possible with fragment shaders and implicit geometry. IQ is a wizard and really inspired my interest in programming and math. [1] *Cheaper, safer, more efficient, requiring less maintenance, etc... Maybe the economic incentive of balls just "rolling away" (thus remaining recoverable) in the event of a pipeline/tanker/carriage leak could balance this? - How solid are those pebbles? I assume they're not like soft blobs that can easily split and merge together? But then how much abuse they can take? E.g. if they crack easily, you can't really stack them together very high. - The obvious one - are the pebbles flammable? So as a TypeScript guy with AssemblyScript at my fingertips, what doors does that open for me? I occasionally have to make HTML5 games in Canvas. Is this the kind of path where WebAssembly could be beneficial? One day will there simply be an end build step to turn everything into web assembly or is it never intended for use with the DOM? If I am not using this, what are the other languages I can use today that compile down to WASM? - ASM.js : basically assembly with a subset of javascript syntax - WebAssembly : assembly with a different format which doesn't require parsing javascript. - languages built on top of WebAssembly : C/C++ like languages with explicit memory management/ no garbage collection. - assemblyscript : C with a typescript syntax ? Some of the resistance to action on climate change comes from the notion that the cost is unknown, which in turn stems from the fact, that to date there was no scalable way to undo the emissions. Now that a way has been demonstrated emitters can choose whether they want to reduce their emissions or undo them(assuming the right legislation forcing them to do so is in place). I believe most will go with the former option until it stops being economical. My running bet/prediction is that 1) the imbroglio in America on climate denial will continue, 2) most of the world will feel happy that they are doing less than it should, but more than the stat 3) Eventually someone will sell the idea of !!GEO ENGINEERING!! to the same people they feed all sorts of Denialism. This will sell well because it has "WE'LL MAKE THE WORLD GREAT AGAIN! WITH HARD WORK AND GUMPTION", and will of course be subsidized by the Government so it will have "JOB CREATION!" written all over it. I could write a satirical Ad for it today, and be assured that its twin will play for real in 20 years. While this comes across as deeply cynical (it isn't cynical enough), it's based on the debacle that is Environmental Protection, from before I started reading news papers in the 1980s. The news today may be dominated by what America is doing, but lets not forget that it was and still is a MASSIVE uphill struggle to get people to care for decades. And I am ignoring the effects of funding into Climate Denialism , and FUD campaigns. In the end people are going to always choose themselves over the environment. In a world where clean coal can be marketed, "Climate Engineering" sounds like an entire industry waiting to be born. sorry, I wish I had something more optimistic to say. For a comparison the per capita CO emissions per capita in the US are 20 tons. With an average household size of 2.58 you would need one of those plants + CO storage facilities per household. ATM, the debate is (1)"is the carbon-climate problem real?" and (2) "If so, how do we reduce carbon emissions?". "Climate Engineering" would complicate 2 a lot, probably splitting political support. Politically, they open the door to continued or increased carbon emmissions today. They let us avoid addressing the "root cause." This is all regardless of timelines or progress. Just the existence of the idea in public debate might be enough to swing the opinion balance. I imagine there are some who would ideally like to keep geoengineering as a quite Plan B so as not to disturb current efforts, but that's not really possible. Overall, I think these ideas (if actually viable) will inevitably enter the discourse in the long term. We are already engineering in some sense. We have models and targets for both carbon and temperature. IE, we want to take control (to a small extent) of the climate. That's engineering, and engineers always look for more tools eventually. Compiles toCompiles to #include <stdint.h> uint64_t rol(uint64_t n, uint64_t val) { n &= 63; return (val << n) | (val >> 64-n); } Clang and ICC do get it right.Clang and ICC do get it right. rol(unsigned long, unsigned long): mov rcx, rdi mov rax, rsi and ecx, 63 rol rax, cl ret A lot of these suboptimal examples come down to the complexity of the optimization problem. Compilers tend to use heuristics to come up with "generally good enough" solutions in the optimizers instead of using a longer and computationally more expensive foray into the solution space. Register allocation is a prime example. This is an NP-Hard problem. Plenty of heuristics exist for finding "generally good enough" solutions, but without exhausting the search space, it typically isn't possible to select the most optimal solution, or even to determine whether a given solution is optimal. Couple this with the tight execution times demanded for compilers, and issues like these become pretty common. Even missed strength reduction opportunities, such as eliminating unneeded spills or initialization, can come down to poor heuristics. It's possible to write better optimizer code, but this can come at the cost of execution time for the compiler. Hence, faster is often chosen over better. In first-tier platforms like ix86 and x86_64, enough examples and eyes have tweaked many of the heuristics so that "generally good enough" covers a pretty wide area. As someone who writes plenty of firmware, I can tell you that it's still pretty common to have to hand-optimize machine code in tight areas in order to get the best trade-off between size, speed, and specific timing requirements. A good firmware engineer knows when to trust the compiler and when not to. Some of this comes down to profiling, and some comes down to constraints and experience. Then, there are areas in which compilers typically rarely produce better code than humans. Crypto is one example. Crypto code written in languages like C can break in subtle ways, from opening timing oracles and other side-channel attacks to sometimes getting the wrong result when assumptions made by the developer and the optimizer are at odds. In these cases, hand-written assembler -- even in first tier platforms -- tends to be both faster and safer, if the developer knows what he/she is doing. "Missed simplification of multiplication by integer-valued floating-point constant Variant of the above code with the constant changed slightly: int N;int fn5(int p1, int p2) { int a = p2; if (N) a = 10.0; return a;}GCC converts a to double and back as above, but the result must be the same as simply multiplying by the integer 10. Clang realizes this and generates an integer multiply, removing all floating-point operations." A double or float literal multiply followed by an integer conversion is nowhere near the same as an integer literal multiply. If the coder wanted = 10 (or even = 10.0f), that was available. If = 10.0 was written, it should generally be compiled that way unless --superfast-wreck-floating-point was turned on... So far I can can find online, this piece is the only thing he has ever published about the Voynich manuscript: Who is Nicholas Gibbs? Does anyone besides Nicholas Gibbs trust his opinion on these matters? And how did he convince the TLS to publish this drivel? (to avoid being entirely negative, here's a link to a blog that shows what some better Voynich research looks like:) tl;dr: it's probably real writing, likely related to Roma/Syriac That image is titled p16_Gibbs1.jpg. To me that hints that the author is serious and is planning to release a detailed paper. His final statement at the end of the article is really bold. "Not only is the manuscript incomplete, but its folios are in the wrong order and all for the want of an index." Perhaps the author is going to provide the index, and the correct order for the folios while providing what he believes to be the missing pieces from other texts from that time period? This article looks like a teaser to me for something significant. Let's hope anyway. In all I am surprised more progress has not been made since the advent of the internet and its crowd-sourcing potential. There is definitely no shortage of interpretations all over the internet, and in headlines from time to time. The last one I recall from a couple of months ago suggested that there was a specific Jewish birthing practice being illustrated on one of the pages that suggested a certain origin of the text. [2] [1]... [2]... It would be good to see a thorough study of it to test the author's hypothesis, of course. As best as I can read, the purported Latin translation in the image at the top of the article says: Folia de oz et en de aqua et de radicts de aromaticus ana 3 de seminis ana 2 et de radicis semenis ana 1 etium abonenticus confundo. Folia et cum folia et confundo etiam de eius decocole adigo aromaticus decocque de decoctio adigo aromaticus et confundo et de radicis seminis ana 3. Feeding the above to Google Translate gives: The leaves of Oz and added to the water and the aromatic radicts semen Ana ana 3 2 seed and the roots ana 1 etium abonenticus the mix. The leaves, when the leaves are decocole adigo and the mix of the aromatic decocque of the cooking adigo an aromatic mix of roots and seeds Ana 3. Yes, I realize that the author's translation might be completely mistaken, but I'm curious to read what he thinks it says. If someone can make out the words better, please do so. [1] [2] Some really interesting analyses in there. By now, it was more or less clear what the Voynich manuscript is: a reference book of selected remedies lifted from the standard treatises of the medieval period, an instruction manual for the health and well being of the more well to do women in society, which was quite possibly tailored to a single individual. Interesting stuff! Also, just to be clear, I do not (and never did) hold any hard feelings towards the recruiter; in fact, it was very kind of them to point out why I was not qualified in the first place. This has been probably the most reflective of how I let my ego get the best of me at times, and I hope it might serve as a warning to those who might be tempted to do the same "devsplaining" in similar situations. Please let me know if you have any other criticisms beyond the ones already voiced in this thread. I'm reading through the comments here as I can, and it's been a lot of good advice. Thanks again. You: Yes, I'm a qualified plumber and can do the job. Recruiter: Sure, but can you fix our pipes? You: Off course, that is is what I was trained to do. Recruiter: You keep saying you're a plumber but we need someone to fix our pipes You: I can do it. Recruiter: we need someone who have worked with pipes. You: I have worked with those. Recruiter: Sure, but we need someone to fix our pipes. etc --? If you are lucky, maybe you have some friends at Facebook that can intervene on your behalf. If not, there's other companies. I had friends at Google to help me get an answer. But I decided I never wanted to work for Google based on how callously they treated me during the interview process. It became clear to me that I was not a high priority person to them ... just a fungible commodity. This is true, but nobody likes to be shown the truth of their value like that. I knew other people with other talents that were treated really well by Google. My skillset was not in that high demand ... or there were plenty of other candidates. But I felt like Google did not need to treat me like garbage. After that I went and joined a startup. Quite happy now. We stole a few engineers from Google even :) I always make sure we get back to our interview candidates as quickly as possible. I won't let us turn into a callous Google. The student is representing only himself. The recruiter is representing the company. When a recruiter mishandles a situation so massively as was the case here, it puts Facebook in a poor light. Obviously, Facebook's engineering teams are well aware of what all of this technology is, but it is distasteful to see this complete lack of understanding and know that I might have to deal with it if I were to work with one of their recruiters. I'm not saying the industry in general is better than this, but it would have taken the recruiter all of 30 seconds to draft an email to the hiring manager and ask for clarification - "This guy is really insistent on how he has POSIX and Linux experience and that this will be okay. What do?" Also, putting Unix on the required skills for an intern position? What did they think was going to happen? - Keep messages simple. Less information to process is better. - If the response seems strange, start by assuming miscommunication and misunderstanding. Do not respond with more complicated information. - When in doubt, say what they are expecting to hear and sort it out later when you talk voice or in person. A more fruitful exchange would be: Recruiter: We require having UNIX experience. If you have it could you update your resume and resend? Candidate: Resending with UNIX experience written in. Recruiter: Thanks! We'll be contacting you shortly. The proper response here is:"I have experience in Linux and Unix. Updated resume is attached." Done. You need to treat them like he's your old uncle. Use minimal technical words, repeat what they ask/say and most importantly respect them. * The name "UNIX" or "Unix" is trademarked. * AIX, Solaris and HP-UX are certified Unix systems. These are bonafide Unix systems and there indeed are software developers today who work with these Unix systems.[2] * The set of Unix-like systems is a superset of the set of Unix systems. The set of Unix-like systems include systems like Linux, FreeBSD, etc. which the set of certified Unix systems do not. [1]: [2]:... Any chance someone on here that was deeply technical transitioned into recruiting? Will you spill the beans like compensation ranges, per head bonus/commission, and satisfaction? As an engineering manager I've done all my own recruiting and have been recognized by my management chain for doing a stand up job at it. I know several socialites from High School that are not or barely technical that outwardly seem to be earning a lot of money doing recruiting or contract agency talent management. It seems like way less hours and stress than I've put in to become a systems expert. Maybe that wont last forever with economic waves, but then again neither do a lot of tech jobs. Knowing the person you're talking to helps greatly. The average recruiter has a high school certificate and that's it. They're hired to do largely manual work comparing skills on resumes to skills on job positions. Now that you know how they work you should ask yourself what's your goal? Is it to get that internship at Facebook? If so then how can I write my resume and cover letter to help me get the recruiters attention? Put the skills from the job listing on your damn resume. Personally I'd dislike working with this guy. I can tell he's a smart guy but he's misdirecting his intelligence. No idea if it was a typo for C# or the recruiter thought, we have C and C++, why not add C+ in there to increase our hits. I found it amusing, but didn't start an argument.. Every OS X from 10.5 on except 10.7 has been certified under Version 3 of the Single UNIX Specification, and thus is officially considered to be UNIX. General example: Youtube:... Still, it's an interesting library, and has its uses on certain niche sites (or for easter eggs). Just don't use it by default on a business page or anything you want people to take seriously. edit: the track appears to be "Kinetic (The Crystal Method vs Dada Life)" <> edit: didn't see the "start demo" button... It's still intended to run behind Nginx web server (or some other web server), much like you'd run something like PHP-FPM behind a web server. I guess it lost momentum when ZeroMQ did. Anyone know why? Sounds like a dream solution in the current microservice hype. why do people always use "not recommended" when they actually mean "do not ever do this or you'll end up the laughing stock in the tech press" Exposing this otherwise awesome API to the public will amount to a free RCE for everybody. So not ever expose this to the public, not even behind some authentication. It's very cool that by design it's only listening on a domain socket. Don't add a proxy in front of this. There are a couple of options I'd like to see added to the Python configuration though before I could try it: - Ability to point it at a virtualenv. - Ability to set environment variables for the application. I think a "how it works" or "design doc" would be really helpful. That said, the source files do make for pleasant reading. The nginx team has always set a strong example for what good C programming looks like. EDIT: Their blog post [0] makes this more clear... nginx unit is one of four parts in their new "nginx application platform" [1] [0]... [1] Does it do WSGI then? Did they write the equivalent of mod_wsgi? The rest of the headline cleared it up of course, but I was curious for a minute how that would look. EDIT: When discussing a new product, I would think the name is a fair point of discussion. Furthermore after this thread's title changed, it now requires a clickthrough to dispel similar misunderstandings.
http://hackerbra.in/news/1504778521
CC-MAIN-2018-43
refinedweb
4,068
71.75
Hi mates, I am trying to test OpenConfig/NetConf with YANG Development Kit on my local test bed. The IOS-XRv 6.1.2 nodes along with Ubuntu Servers are mounted on vSphere host and the proper connectivity. I followed the steps to install YDK-Gen along with the YDK-Py as listed on this site : So far I haven't been able to start the ydk services properly as shown below: nextcomm@ntc-ubuntu-server-3:~/automation/ydk-py-master/ydk-py-samples$ python hello-ydk.py Traceback (most recent call last): File "hello-ydk.py", line 23, in <module> from ydk.services import CRUDService ImportError: No module named services Any advice or tip will be appreciated. Thanks, JM Based on the error, it seems ydk-py is not installed correctly. Could your ydk-py be installed in a different python environment? Hi abhirame, Thanks for the quick response. I used the following steps to install YDK Packages on my Ubuntu Server 14.06. Am I missing something ? I will ping you (jngokgwe) Were you able to get your installation working? Yes it worked very well on Ubuntu. Please refer to this link for reference : Hi Jean-Marie The link is having syntax errors. Have you tried to install the latest YDK release 0.8.2? What is the platform that you are having installation issues? Currently supported Ubuntu releases are Xenial and Bionic. Any considerations for you to upgrade? Thanks, Yan
https://community.cisco.com/t5/yang-development-kit-ydk/couldn-t-start-crud-services/td-p/3441878
CC-MAIN-2021-04
refinedweb
243
60.01
GETTID(2) Linux Programmer's Manual GETTID(2) gettid - get thread identification #include <sys/types.h> pid_t gettid(void); Note: There is no glibc wrapper for this system call; see NOTES.). On success, returns the thread ID of the calling process. This call is always successful. The gettid() system call first appeared on Linux in kernel 2.4.11. gettid() is Linux-specific and should not be used in programs that are intended to be portable.()). capget(2), clone(2), fcntl(2), fork(2), get_robust_list(2), ioprio_set(2), perf_event_open(2), sched_setaffinity(2), sched_setparam(2), sched_setscheduler(2), tgkill(2), timer_create(2) This page is part of release 4.12 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2015-07-23 GETTID(2) Pages that refer to this page: kill(1), capget(2), clone(2), fcntl(2), futex(2), ioprio_set(2), ptrace(2), sched_setaffinity(2), sched_setscheduler(2), seccomp(2), set_tid_address(2), syscalls(2), timer_create(2), tkill(2), pthread_self(3), sd_event_new(3), proc(5), cgroups(7), cpuset(7), pthreads(7)
http://man7.org/linux/man-pages/man2/gettid.2.html
CC-MAIN-2017-30
refinedweb
186
57.27
Eclipse Community Forums - RDF feed Eclipse Community Forums EclipseLink project works in one Eclipse Workspace but not in another <![CDATA[Hi, I have a very small EclipseLink example which runs perfectly in a workspace. I imported this example in another workspace (same Eclipse installation therefore same version, available plug-ins, etc.) and it did not work. The following message was issued: Exception in thread "main" javax.persistence.PersistenceException: No Persistence provider for EntityManager named people at javax.persistence.Persistence.createEntityManagerFactory(Per sistence.java:89) at javax.persistence.Persistence.createEntityManagerFactory(Per sistence.java:60) at de.vogella.jpa.eclipselink.main.Main.main(Main.java:18) After lots of checking the only difference I could find is that the second workspace is in a deep folder structure, something like: C:\Documents and Settings\vogella\Desktop\Documents\08_MyDocuments\20_Documen tation\JPAPersistenceExamples\de.vogella.jpa.eclipselink2\sr c\META-INF I imported the project into a thrid workspace with a shorter path (again same Eclipse installation) and it did work. Are there any restrictions in relation to the path name or are there any other reasons why the project does not run in the second workspace? Best regards, Lars -- Lars Vogel - Tutorials about Eclipse ml - Eclipse RCP Tutorial]]> Lars Vogel Unavailable until 15 August 2016 2009-01-10T00:03:27-00:00 Re: EclipseLink project works in one Eclipse Workspace but not in another <![CDATA[The failing project has spaces in the path. We have fixed a similar issue in 1.1 and I believe 1.0.2. What version of EclipseLink are you using. --Gordon]]> Gordon Yorke 2009-01-12T01:00:08-00:00 Re: EclipseLink project works in one Eclipse Workspace but not in another <![CDATA[Hi Gordon, For this example I'm using eclipselink-1.0.2. I did try this example in a workspace which had also spaces (but a shorter path name) and it worked. The path in which the example is working: is C:\Documents and Settings\vogella\Desktop\Documents\08_MyDocuments\14_Eclipse _Test_Path\de.vogella.jpa.eclipselink The none working path is: C:\Documents and Settings\vogella\Desktop\Documents\08_MyDocuments\20_Documen tation\JPAPersistenceExamples\de.vogella.jpa.eclipselink Best regards, Lars Gordon Yorke wrote: > The failing project has spaces in the path. We have fixed a similar > issue in 1.1 and I believe 1.0.2. > > What version of EclipseLink are you using. > --Gordon > -- Lars Vogel - Tutorials about Eclipse ml - Eclipse RCP Tutorial]]> Lars Vogel Unavailable until 15 August 2016 2009-01-12T10:36:45-00:00 Re: EclipseLink project works in one Eclipse Workspace but not in another <![CDATA[Hi, this seesm to be a bug in the Java Classloader (Java 6 Update 11). I created the following example to demonstrate it: --------------- package de.vogella.jpa.bug; import java.io.IOException; import java.net.URL; import java.util.Enumeration; public class Test { public static void main(String[] args) { ClassLoader currentLoader = Thread.currentThread() .getContextClassLoader(); try { Enumeration<URL> resources = currentLoader .getResources("META-INF/persistence.xml"); while (resources.hasMoreElements()) { System.out.println("Elements found"); resources.nextElement(); } } catch (IOException e) { e.printStackTrace(); } } } ---------- Running this in a project with a medium sized path name delivers a hit. Running it in a project with a long path name return nothing. I'll open a bug at Sun. Best regards, Lars]]> Lars Vogel Unavailable until 15 August 2016 2009-01-12T20:44:55-00:00 Re: EclipseLink project works in one Eclipse Workspace but not in another <![CDATA[Lars, Hi, just a reference about 1.6 releases to keep in mind. The implementation of releases post 1.6.0_07 differ in their ClassLoader behavior. We had a classLoader issue in SDO recently where when we loaded >5000 classes the JVM slowed down by a factor of 300+. It turned out that this behavior only appeared from 1.6.0_01 to 1.6.0_07, when we used a 1.5 JRE or the 1.6.0_10 JRE there was no issue. It looked like the ClassLoader implementation in the 1.6.0_10+ JRE's may use a different improved HotSpot JVM. See our tracking bug... For the SUN bug... Duplicated to thank you /michael]]> Michael OBrien 2009-01-14T16:26:02-00:00
http://www.eclipse.org/forums/feed.php?mode=m&th=125632&basic=1
CC-MAIN-2017-17
refinedweb
698
52.76
Add Custom Settings to your Theme ARCHIVED This chapter has not been updated for the current version of Orchard, and has been ARCHIVED. When building themes in Orchard, it is often desirable to make certain aspects of the theme available to the administrator to customize. This article describes how to add several simple theme settings to the Site Settings menu in the Orchard dashboard. Creating the Theme First thing we will need is a theme, so let's go ahead and use the codegen command to generate a theme feature enable Orchard.CodeGeneration Now we can generate a theme that we will base on TheThemeMachine so that we have a basic theme to work on. We will also need to create a project file for this theme so that we can add our settings. This is the key to creating settings in our themes because by making our theme a C# project, our themes can do almost anything a module can do. codegen theme CleanBlog /BasedOn:TheThemeMachine /CreateProject:true Defining the Settings Visual Studio should now prompt you to reload the solution so you can get started with your new theme. If it doesn't right click on your themes folder in the Solution Explorer and Add -> Existing Project then select your new themes project file (it will be located in the Themes folder). These will be the two settings added to our theme: - Add a custom class to the main layout - Load a custom style sheet based on the users selection We are going to attach a new part to the Site content type to store these custom theme settings. So let's create a Models folder and add a file called CleanBlogSettingsPart.cs. using Orchard.ContentManagement; namespace CleanBlog.Models { public class CleanBlogSettingsPart : ContentPart { public string CustomClass { get { return this.Retrieve(x => x.CustomClass); } set { this.Store(x => x.CustomClass, value); } } public string HoverColor { get { return this.Retrieve(x => x.HoverColor, "custom_blue.css"); } set { this.Store(x => x.HoverColor, value); } } } } This example is using the InfoSet storage that was made available in 1.8 (it is perfectly possible to achieve the same in older versions of Orchard using the ContentPartRecord storage method and building the table in the migrations, see this for more details). The "custom_blue.css" parameter in the getter for HoverColor is the default value for that setting. We will be offering them the option to choose from 3 different colours to have their links, so let's create three style sheets in the Styles folder of our theme: - custom_blue.css - custom_green.css - custom_yellow.css Add the following to each style sheet and adjust the colour to match the file. a:hover { color: *color*; } So custom_blue.css will look like this: a:hover { color: blue; } Setting up the Editor Now we will need an editor to select these options. Create a file in ~/Views/EditorTemplates/Parts called CleanBlogSettingsPart.cshtm. @model CleanBlog.Models.CleanBlogSettingsPart @{ var colourScheme = new List<SelectListItem>(); colourScheme.Add(new SelectListItem { Text = "Blue", Value = "custom_blue.css" }); colourScheme.Add(new SelectListItem { Text = "Green", Value = "custom_green.css" }); colourScheme.Add(new SelectListItem { Text = "Yellow", Value = "custom_yellow.css" }); } <fieldset> <legend>Clean Blog Settings</legend> <div> @Html.LabelFor(m => m.CustomClass, T("Custom Class")) @Html.EditorFor(m => m.CustomClass) </div> <div> @Html.LabelFor(m => m.HoverColor, T("Hover Color")) @Html.DropDownListFor(m => m.HoverColor, colourScheme.AsEnumerable()) </div> </fieldset> To handle the display of this view and attach the settings to the Site content type, we will use a ContentHandler. So create a folder called Handlers in the root of your theme with a file called CleanBlogSettingsPartHandler.cs. using Orchard.ContentManagement.Handlers; using Orchard.ContentManagement; using Orchard.Localization; using CleanBlog.Models; namespace CleanBlog.Handlers { public class CleanBlogSettingsPartHandler : ContentHandler { public CleanBlogSettingsPartHandler() { Filters.Add(new ActivatingFilter<CleanBlogSettingsPart>("Site")); Filters.Add(new TemplateFilterForPart<CleanBlogSettingsPart>("CleanBlogSettingsPart", "Parts/CleanBlogSettingsPart", "Theme")); T = NullLocalizer.Instance; } public Localizer T { get; set; } protected override void GetItemMetadata(GetContentItemMetadataContext context) { if (context.ContentItem.ContentType != "Site") return; base.GetItemMetadata(context); context.Metadata.EditorGroupInfo.Add(new GroupInfo(T("Theme"))); } } } There are a few things of note here. We don't want to just display our themes settings in the main settings menu, but in its own subsection called "Theme". You can name this subsection whatever you like. But if you do want your settings to be in the main settings page, just remove the 'GetItemMetadata()' method and change this line: Filters.Add(new TemplateFilterForPart<CleanBlogSettingsPart>("CleanBlogSettingsPart", "Parts/CleanBlogSettingsPart", "Theme")); to: Filters.Add(new TemplateFilterForPart<CleanBlogSettingsPart>("CleanBlogSettingsPart", "Parts/CleanBlogSettingsPart")); You'll also notice that this line is defining what template to use for rendering our settings editor. You may be wondering why we don’t just use a driver like we usually do for a parts editor template. This is due to the fact that to call the Editor method of your Driver and return a shape, you need to have specified in the Placement.info that you want to do that. Since this theme is not active in the admin section of Orchard, our themes Placement.info file is never run hence a Driver would never display anything. Finally, the line: Filters.Add(new ActivatingFilter<CleanBlogSettingsPart>("Site")); Is what attaches our part to the Site content type. Accessing the Theme Settings All that is left is to do now is to actually make use of our theme settings. Let's copy the Layout.cshtml file from TheThemeMachine into our CleanBlog theme. This means our theme will now use our Layout file instead of the base Layout from TheThemeMachine. Accessing our settings is as simple as: var settings = WorkContext.CurrentSite.As<CleanBlogSettingsPart>(); WorkContext is effectively an extension of the HttpContext, containing additional information about Orchard, such as the current user and current site (which you can see us doing above). This gives us access to the site content item, meaning we can access any parts that we have attached to the Site content type. We will need to add two using statements into our view to resolve the .As<> extension and our model. @using CleanBlog.Models @using Orchard.ContentManagement So now we have our settings in the view we can actually make use of them. Anywhere after the line Style.Include("Site.css"); add the line Style.Include(settings.HoverColor); This will load the selected stylesheet after the main sheet and apply our overriding colour scheme. Our final setting is to add a custom class to the main div element. This element is generated by the Tag method. This is an implementation of C#'s TagBuilder class. It takes a dynamic shape object and a tag name and builds that tag with all the attributes (id, classes and additional attributes) gleaned from the shape that was passed in. So we can add our class to the Model so our class will be added to the rendered tag like so: Model.Classes.Add(settings.CustomClass); Wrapping Up The real power here is that themes in Orchard don't just have to be templates and stylesheets, they can be fully-fledged projects that run C# code outside of views; basically do anything a module can do. Here we saw just one way to utilize this power: modify the look and feel of your theme from the dashboard without having to change any HTML or CSS
http://docs.orchardproject.net/en/latest/Documentation/Add-Custom-Settings-to-your-Theme/
CC-MAIN-2017-26
refinedweb
1,207
57.27
- You've created a JAR library whose source code doesn't reference Android-specific APIs, nor does it reference standard Java APIs newer than Java 5, except for the few Java 6 enhancements to Android's version of the java.io.File class (and maybe other Java 6 enhancements that are present in Android). Furthermore, the source code doesn't reference any language features introduced after Java 5. You decide to incorporate this library into the app that you're building by copying its JAR file into your project's libs directory (which is located underneath the project home directory). You compile the source code and let Android merge the library into the resulting app, and note a successful build. You then install the APK and attempt to run the app, and are greeted with a message that the app has stopped. To demonstrate this problem, create a UseUtils project. Execute the following command (spread across two lines for readability) to create UseUtils: android create project -t 2 -p C:\prj\dev\UseUtils -a UseUtils -k ca.tutortutor.useutils This command creates a UseUtils project directory whose src subdirectory contains a ca subdirectory, which contains a tutortutor subdirectory, which contains a useutils subdirectory. Furthermore, useutils contains a skeletal UseUtils.java source file. Replace this file's contents with Listing 1. Listing 1Toasting startup by presenting a randomly generated integer. package ca.tutortutor.useutils; import android.app.Activity; import android.os.Bundle; import android.widget.Toast; import ca.tutortutor.utils.Utils; public class UseUtils extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Toast.makeText(this, "Random integer [0, 255]: "+Utils.rnd(256), Toast.LENGTH_LONG).show(); } } Listing 1 declares a UseUtils activity class that imports a Utils class from package ca.tutortutor.utils. UseUtils calls this other class's rnd(int) class method to return a randomly generated integer ranging from 0 through 255. This integer is included in a message that's presented to the user via a toast (a message that's displayed for a short time). Listing 2 presents the contents of a Utils.java source file that declares the Utils class. Listing 2 2 into utils. - From the current directory, execute javac ca/tutortutor/utils/Utils.java to compile this file. - From the current directory, execute jar cf utils.jar ca/tutortutor/utils/*.class to create utils.jar. Copy utils.jar to the project's libs directory. Then, with UseUtils as the current directory, execute ant debug to build the APK. Switch to the bin subdirectory, and you should observe a UseUtils-debug.apk file among other files. Assuming that an emulated device for Android 4.1 is running, execute the following command to install this APK file (which is really a ZIP file with an .apk file extension) onto this emulated device: adb install UseUtils-debug.apk Switch to the device's app launcher screen and locate the UseUtils icon. Click this icon and you should observe Figure 1's error message. Why has Android closed the app? The following excerpt from the output generated while running Ant's build.xml file hints at the answer: [dx] trouble processing: [dx] bad class file magic (cafebabe) or version (0033.0000) [dx] ...while parsing ca/tutortutor/utils/Utils.class [dx] ...while processing ca/tutortutor/utils/Utils.class [dx] 1 warning Android's dx tool, which converts Java classfiles to their Android equivalent, silently excludes classes that target Java 7 (their classfiles have a major version number of 51 and a minor version number of 0) from the generated classes.dex file. This omission results in a thrown error, which can be viewed by executing adb logcat, whose output appears in Figure 2. The Dalvik virtual machine equivalent of Utils.class was not stored in classes.dex, and an attempt to reference this class (at startup) has resulted in a thrown instance of the Dalvik equivalent of the java.lang.NoClassDefFoundError class. You can further prove this cause to yourself by executing the following command from within the bin directory: dexdump classes.dex The classes.dex file contains Android's Dalvik equivalent of the project's compiled Java classfiles. The dexdump tool processes this file and generates a disassembly. Search this disassembly and you'll not find a reference to the Utils class, because this JDK 7-created class is not present. To fix this problem, include the -source and -target options when compiling the source code via javac, as follows: javac -source 1.5 -target 1.5 ca/tutortutor/utils/Utils.java javac -source 1.6 -target 1.6 ca/tutortutor/utils/Utils.java Execute either of these commands, and a Java 5- or Java 6-compatible classfile is generated. Furthermore, the dx tool will not exclude the Utils class from classes.dex. For proof, rebuild the project, reinstall UseUtils-debug.apk, and run the app. You should see a screen similar to that shown in Figure 3, with the toast near the bottom.
http://www.informit.com/articles/article.aspx?p=1966023&amp;seqNum=3
CC-MAIN-2016-40
refinedweb
832
51.04
Introduction Oracle MAF 2.3.1 has just been released. This release contains a major new feature, the client data model (CDM). CDM is the productized version of the A-Team Mobile Persistence Accelerator (AMPA). This article explains how you can migrate your existing MAF app that uses AMPA to MAF 2.3.1 with CDM. We recommend to perform this migration as soon as possible since Oracle A-Team will no longer maintain the AMPA open source project in GitHub. The migration steps are pretty straightforward and risk-free since the complete code base of AMPA has been integrated "as is" with MAF, the biggest change is the renaming of Java packages. Main Article If you are migrating from MAF 2.3.0, you need to decide whether you want to upgrade your existing JDeveloper 12.2.1 installation to MAF 2.3.1 or you prefer to install another fresh instance of JDeveloper 12.2.1 which allows you to run both MAF 2.3.0 and MAF 2.3.1 apps side by side. See the article How do I install 2 versions of the same version of JDeveloper for more info. If want to upgrade you need to perform the following steps after you installed the MAF 2.3.1 extension: If you do no want to upgrade, or you are coming from an older MAF version that required JDeveloper 12.1.3, you can start with a fresh install of JDeveloper 12.2.1 and install the MAF extension as documented here. You can then proceed with the migration steps: The next sections will explain all of these steps in detail. The last section will discuss the available CDM documentation. When you install the MAF 2.3.1 extension over the MAF 2.3.0 extension, the MAF JDK 1.8 Compact Profile 2 is not updated automatically. Since the CDM functionality is added to MAF through new jar files that are not automatically added to this profile, JDeveloper will not be able to find the CDM classes. This will cause compilation errors like package oracle.maf.api.cdm.persistence.model does not exist Note that you will also get these errors when you create a new app using CDM, not just when migrating an AMPA app. To fix this you need to upgrade the JDK profile as follows: The MAF JDK 1.8 Compact 2 Profile should now be re-added automatically with the correct jar files. Here is a screen shot of the correct profile definition: To test whether JDeveloper can find the CDM classes you can use Go to Java File option (Ctrl-Minus on Windows, Cmd-J on Mac) and enter InitDB in it. This should bring up the oracle.maf.impl.cdm.lifecycle.InitDbLifecyleListner class. To remove the AMPA extension from JDeveloper, you go to the Tools -> Features menu option. Then click on Installed Updates. Select the A-Team Mobile Persistence Accelerator and click the Uninstall button. This removes the AMPA extension registration and jar files. However, it does not remove the oracle.ateam.mobile.persistence folder in the jdeveloper/jdev/extensions folder. You can remove this folder manually. General migration steps are documented in the MAF developer's guide, section Migration your Application to MAF 2.3.1. That section also includes a paragraph about migrating AMPA apps, this blog article is a more comprehensive version of that paragraph, with more background info, and some additional steps. One general migration step is not documented there, the network access plugin has been renamed. If you open the maf-plugins.xml file, you will see that the network pluginId is now invalid: To fix this, you need to change the pluginId to maf-cordova-plugin-network-access. Or you can remove the network plugin line, go to the Overview tab of maf-application.xml and check the Network Access checkbox on the Plugins tab. Open the persistence-mapping.xml, located in META-INF directory of your ApplicationController project and change the namespace to. All non-deprecated AMPA classes have been included in MAF 2.3.1. MAF makes a distinction between public classes and internal implementation classes. Public classes are included in a package name starting with oracle.maf.api. Implementation classes are included in a package name starting with oracle.maf.impl. The signature of public classes is guaranteed to ensure upwards compatibility, implementation classes might change over time and might break your custom code if you use these classes and then migrate to a newer MAF version. This is why Oracle recommends to only use public MAF framework classes in your custom code. For this first release of CDM, the AMPA code has been included "as is" but over time the code base will be improved to support new features. To keep flexibility in improving and refactoring the code over time, a number of the AMPA classes have been moved into implementation packages starting with oracle.maf.impl.cdm. While Oracle generally recommends to avoid use of implemntation classes in your custom code, it is fine and even inevitable to do so with some of the CDM implementation classes. For example, all your service classes will now extend from oracle.maf.impl.cdm.persistence.service.EntityCRUDService. With this explanation, we are ready to rename the AMPA java packages. The table below lists the global search and replace actions you should perform in order on all files in your application. You can perform these global search and replace actions in JDeveloper by navigating to the Search -> Replace in Files option. Make sure you set the Scope field to your entire application, not just to one of the projects. If you now try to compile your application, you still might get a few compilation errors. This is because classes in the same AMPA package might have been divided over both the implementation and public CDM packages. Easiest way to fix these errors is to double-click on the error to jump to the Java class, and rename the .api. part of the import to .impl. or vice versa. Alternatively, you can remove the invalid import statement and let JDeveloper automatically suggest the correct import. There is one remaining change you have to make manually in maf-application.xml because this file is in a directory that is not scanned for global search and replace actions, you need to change the Lifecycle Event Listener property from oracle.ateam.sample.mobile.lifecycle.InitDBLifeCycleListener to oracle.maf.impl.cdm.lifecycle.InitDBLifeCycleListener. If you are using a custom lifecycle listener that extends InitDBLifecyleListener, you don't have to do anything because your custom class is already updated to point to the CDM package. If your application compiles successfully, you can do a final check by doing a global search in your application on the string oracle.ateam.sample which should no longer return any hits. AMPA comes with some standard EL expressions around background tasks and pending data sync actions. To update these expressions for CDM you should perform the following global search and replace actions: The last entry in this table is applicable when you are migrating from an earlier AMPA release, not the latest 12.2.1.0.68 release. In previous releases, the data synchronization happened in the context of an Entity CRUD service, it only synchronized the data object of the entity CRUD service and its child data objects (if applicable). Therefore the expression to check whether an entity (data object) had pending data sync actions included the entity name as prefix instead of the general ampa_ prefix used in latest AMPA release. AMPA shipped with two reusable feature archives to inspect web service calls and to view pending synchronization actions. While the web service calls feature is not yet documented in the CDM chapter, both feature archives are included with CDM. If you run the MAF User Interface Generator they wil be automatically added to your application, just like AMPA did. If your existing AMPA application is using one or both of these features, you need to do two things: To change the library reference, go to Application -> Application Properties menu option, and click on the Libraries and Classpath option at the left. Remove both jar files, and click the Add Jar/Library button. Navigate to the directory where the CDM feature archives can be found, which is jdeveloper/jdev/extensions/oracle.maf/FARs/CDM. Select both jar files and click Open button to add them to the application. Now, go to the Overview tab of maf-application.xml and update the invalid feature references oracle.ateam.sample.mobile.datasynch and oracle.ateam.sample.mobile.wscalls with the new CDM id's of these features. Unfortunately, during the integration of AMPA to CDM a minor code change has introduced an issue which currently prevents you from encrypting the SQLite database. This will be fixed with the next MAF release. For now, you should disable DB encryption by adding the following line to the mobile-persistence-config.properties file, located in the META-INF directory of tour ApplicationController project: db.encryption=false If you don't have this entry, or you change the value to true, your application will hang at application start up and the log will display the following error: CDM Database key is null The CDM chapter in the MAF developer's guide currently contains a subset of the information that you can find in the AMPA Developer's guide. Sections which refer to AMPA classes that ended up in the implementation packages (see previous section Rename Java Packages for more info) have not been include yet. In subsequent MAF releases the remaining documentation topics will be added once final decisions have been made about the location of all CDM classes and methods For now, you can continue to consult the AMPA Developer's guide for those topics, because all of the content is still valid. Most notably, you might want to check out the following sections:
https://www.ateam-oracle.com/migrating-ampa-apps-to-oracle-maf-2-3-1-client-data-model
CC-MAIN-2020-24
refinedweb
1,669
55.54
#include <stdio.h> #include <stdlib.h> int main(){ unsigned long n,count=0,*array,size=1,i; while((scanf("%ul",&n))!=EOF){ count=count+1; array =(long*)malloc(count*sizeof(long)); array[count] = n; if (count>size){ size=count; } if (n==5){ break; } } for (i=1;i<=size;i++){ printf("%ul\n",array[i]); } free(array); return 0; } my first problem is that, when the app takes the forth value for variable un it just sticks, and does nothing. also when i try for 3 numbers, it doesnt print them in the end correctly it prints numbers like 2938928329 etc i would like some help to achieve that, dont tell me other ways, i know there are other ways. thanks in advance :)
https://www.daniweb.com/programming/software-development/threads/195096/unknown-array-size-dynamic-memory-allocation
CC-MAIN-2018-17
refinedweb
122
61.46
proc dict'sort {dict args} { set res {} foreach key [lsort {*}$args [dict keys $dict]] { dict set res $key [dict get $dict $key] } set res } #-- Test: set d1 {foo 1 bar 2 grill 3} puts 1:[dict'sort $d1] ;# 1:bar 2 foo 1 grill 3 puts 2:[dict'sort $d1 -decreasing] ;# 2:grill 3 foo 1 bar 2 This works me correctly for Tcl 8.4 version posted by kruzalex sugar::proc dict'sort {dict args} { set res {} foreach key [lsort {*}$args [dict keys $dict]] { lappend a() $key [dict get $dict $key] } set res [lindex [array get a] 1] }or without sugar::proc proc dict'sort {dict args} { set res {} foreach key [eval [list lsort] [lrange $args 0 end] [list [dict keys $dict]]] { lappend a() $key [dict get $dict $key] } set res [lindex [array get a] 1] } Just like the results of [array get], so you can [array set X [dict filter]] or (conversely) [dict get [array get X] key].So, you can define a proc fred {args} and then immediately treat $args as a dict, if (and only if) the values passed have the form of a dict - no special processing is required (rather, the shimmering occurs in the background.This is useful for passing named arguments to a proc, sort of like the various options packages: [dict get $args -option] will fetch any value passed as -option value.[dict with] alters the enclosing scopeSo if you have a dict X, [dict with X {}] will construct and initialize variables with the same names and values as X's contents.This is useful for passing around collections of named values.You could use it to populate the variables in a namespace (for, say, a collection of defaults) [namespace eval dict with $dv {}] JMN 2008-06-20 It appears that you can extend a dict using lappend. For the case of a loop where you know the newly added keys are not currently in the dict - might this be faster than using dict set? e.g foreach val $newValues { lappend mydict [uuid::uuid generate] $val }or lappend mydict {*}$newPairsIt also seems that even if you do lappend a key that is already in the dict, the [dict get], [dict size] etc methods still do the sensible thing, and use the latest entry in the list for a particular key. After this, upon using [dict set] - the earlier duplicate key-value pairs are automatically removed anyway.I guess there might be some sort of shimmering in using list methods on the dict, but presumably in the above case the lappend would still be a win for large datasets because the existence of the key doesn't need to be checked each time a new value is added. Perhaps this gain is lost anyway once the dict is converted back to a proper dict value.I've not had a chance to test the relative performance of this yet... so don't consider it as a tip/trick til you've verified it helps for your particular case!In particular - it might be worth comparing the above with: set mydict [dict merge $mydict $newPairs]update: A few rough tests indicate that the lappend method is actually slower. The foreach loop does indeed run faster using [lappend], than [dict set] - but this time (and more!) is lost during the subsequent access of the value as a dict using [dict size $mydict]For Tcl8.6a0 at least - it would seem the moral is, if you're going to be using it as a dict, just build it as a dict using the dict methods. Test if variable is a dict HaO 2010-06-28 I would like a dict subcommand which checks if a variable is a dict similar to array exists. The pdict example uses: if { [catch {dict keys ${d}}] } { error "error: pdict - argument is not a dict" }which is ok but might pollute the error log as a side effect. Is there a more elegant solution ?AMG: You can use [string is list] plus a test for even [llength]. if {![string is list $d] || ([llength $d] & 1)} { error "not a dict" }APN That will cause shimmering though.CMcC FWIW, I have used if {![catch {dict size $d}]} {...} to test for dictness. gasty Simple proc to check if a key exists in a dictionary and return their value: proc getDictItem {dictVal keyVar} { upvar $keyVar keyVal if {[dict exist $dictVal $keyVar]} { set keyVal [dict get $dictVal $keyVar] return 1 } return 0 } # demo set d [dict create a 1 b 2 c 3] puts "dict value = $d" if {[getDictItem $d "a"]} { puts "key 'a' exists in dict. a=$a" } if {![getDictItem $d "z"]} { puts "key 'z' not exists in dict." } Canonical dicts HaO 2011-05-04 On clt , the question was asked how to transform a list in a canonical dict (e.g. remove duplicate keys).The following list interpreted as a key-value list has two times key a and once b and thus is not a canonical dict: % set l {a 1 b 2 a 3}Methods to transform the list in a canonical dict: % dict create {*}$l % dict merge $l $l a 3 b 2Dict functions which do not return canonical dicts in the following cases: dict replace $l dict merge $1 a 1 b 2 a 3To resume, no method was found to directly transform a list in a canonical dict. There is always a small "derivation".Within the thread, it was proposed to define dict replace $l as such a function, which is quite similar to lrange $l 0 end, which forms a canonical list.Functions, which do a canonicalization: % dict for {k v} $l {puts -nonewline "$k $v "} ; puts "" a 3 b 2 % dict size $l 2AMG: Actually, this is incredibly easy to do. Just call [dict get $dictValue] with no additional arguments. % dict get {a 1 b 2 a 3} a 3 b 2 Taken from a posting on comp.lang.tcl namespace eval DictUnsupported { package require Tcl 8.6 ######################### ## dict format dict # # convert dictionary value dict into string # hereby insert newlines and spaces to make # a nicely formatted ascii output # The output is a valid dict and can be read/used # just like the original dict ############################# proc dict_format {dict} { dictformat_rec $dict "" "\t" } proc isdict {v} { string match "value is a dict *" [::tcl::unsupported::representation $v] } ## helper function - do the real work recursively # use accumulator for indentation proc dictformat_rec {dict indent indentstring} { # unpack this dimension dict for {key value} $dict { if {[isdict $value]} { append result "$indent[list $key]\n$indent\{\n" append result "[dictformat_rec $value "$indentstring$indent" $indentstring]\n" append result "$indent\}\n" } else { append result "$indent[list $key] [list $value]\n" } } return $result } namespace ensemble configure dict -map \ [linsert [namespace ensemble configure dict -map] end format [namespace current]::dict_format] } See also edit - pdict: Pretty print a dict. - eDictor, a visual editor for big dicts used as data files - a merge routine that handles nested dicts: [1] - nested dict diff routines: [2] - You may also find interest in dict extensions
http://wiki.tcl.tk/17680
CC-MAIN-2016-50
refinedweb
1,171
59.57
From: Stefan Seefeld (seefeld_at_[hidden]) Date: 2005-11-01 09:13:18 Alan, thank you for your interesting points. The API I suggest is not modeled after the the W3C DOM IDL, neither its java implementation. Many people have expressed discomfort both with the W3C DOM API as well as the idea of simply transcribing the java API to C++. Therefor, the API I suggest here is (so I hope) as C++-like as it can be, while still giving full flexibility to operate on (i.e. inspect as well as modify) XML documents. From the little I could gather about the alternatives you mention, it sounds like they would make very nice access layers on top of the base API (axis-oriented iterators, say). > I'd suggest, in any language wide implementation of XML, to > attempt to separate transformation and query, from update. They > are two very different applications. I'm not sure I understand what you mean by transformation. How is it different from update ? Or is the former simply a (coarse-grained) special case of the latter, using a particular language to express the mapping (such as xslt) ? > I'd suggest starting with supporting XML documents that conform > to the XPath and Query data model, and working backwards as the > need arises. It makes for a much more consice library, and > removes a lot of methods for rarely needed, often pathalogical, > mutations. There are clearly very different use cases to be considered. We should collect them and try to make sure that all of them can be expressed in a concise way. I'm not sure all of them operate on the same API layer. The code I posted supports xpath queries. While the result of an xpath query can have different types, right now only node-sets are supported (May be boost::variant would be good to describe all of the possible types). I'm not quite sure I understand what you mean by 'XPath data model'. > Implementing an object model would be much easier, if you > ipmlement the 95% that is most frequently used. And if you > sepearate the compexity of document mutation from the realative > simplicity of iteration and transformation. Could you show an example of both, what you consider (overly) complex as well as simple ? While the API in my code is certainly not complete (namespaces are missing, notably), I find it quite simple and intuitive. I don't think it needs to become more much complex to be complete. In particular, I'm hoping that we can make the API modular, so document access and document validation are kept separate (for example). May be that is what you mean, I'm not sure. Regards, Stefan Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/11/96138.php
CC-MAIN-2020-29
refinedweb
475
64.71
Overload 4 hit my desk shortly after I returned from San Diego and the most recent meeting of X3J16/WG21. San Diego’s a beautiful city and this was the first occasion that I had taken time off around a committee meeting to explore. I hope we get back there before we’ve finished the standardisation process. The schedule for future meetings is gradually being laid out and the following international members will be hosting meetings in the next few years: Canada (July ‘94), Japan (November ‘95), Sweden (July ‘96), UK (July ‘97). In addition, there are two meetings a year in North America – well, the U.S. folks don’t travel too well, you see… The UK meeting will be hosted by my company, and by that time we should be fairly close to an international standard. All the same, it would be nice to see as large a UK contingent present as possible. What happened in San Diego? The hot topic was the schedule – not just the planned meetings, but the more important issue of how quickly we can produce an International Standard for C++. The UK Position Prior to the meeting, the UK panel had been reviewing the working paper and had come to the conclusion that it really was a long way off being acceptable as an IS. Accordingly, we took the position that we wanted to slip the schedule by two meetings, or eight months. By the time myself and Steve Rumsby arrived in San Diego, we had established, informally at least, that the UK position was supported by Germany, Japan, Australia and New Zealand. That meant a majority of the international members supported a slippage. However, we sort of didn’t get it… The “missing” Ballot The schedule prior to San Diego said we would be voting in July ‘94 on whether to advance the working paper into the “CD Registration Ballot”. This ballot is conducted by SC22 – an international committee for which WG21 is an advisory working group. From there the document would go on to “Draft International Standard” and finally “International Standard”, assuming it was successful at each ballot. Sam Harbison, convenor of WG21, informed us that there was in fact an extra ballot involved between “CD Registration” and “DIS”. Adding this to the schedule pushed the final date out by about eight months. There ensued a great deal of confusion about exactly what each ballot meant and what the criteria for acceptance at each stage should be. In the end, the “pro-slip” group agreed to delay a decision on whether or not to slip until the July meeting. Our main opponent was the U.S. who seems indecently keen to have a standard now and damn the quality. The July meeting will be interesting and will be held, somewhat prophetically, in Waterloo (in Canada!). If all goes well According to the schedule produced at the end of the San Diego meeting (which was not, perhaps significantly, endorsed by the majority of WG21 members), we should publish our International Standard for C++ early in 1997. So why did I say that the July ‘97 meeting will be in the UK? Well, no-one really believes that the document will go through each and every one of the three ballots first time. Even if it did, we would still be answering public interpretation requests for years to come. Language changes Apart from the schedule, it was business as usual for most of the working groups. Core continued to wrestle with linkage, references and lvalues. Library continued wrestling with iostream and began to get to grips with exceptions. Extensions wrestled with exceptions and templates. I’ve broken it Or rather, we’ve finally fixed it: the scope of variables declared in a for loop, that is. You may not thank the committee for it now, but we broke this: for(int i = 0; i < LIMIT; ++i) { if (key == table[i]) { break; } } if (i == LIMIT) { // didn't find the key item } We have made i go out of scope at the end of the for loop. A lot of code does this, but it is very easy to fix: int i = 0; for ( ; i < LIMIT; ++i) This restores the original meaning. Why did we change it? A lot of people new to C+ + write something like this and wonder why it doesn’t work: for(int i=O;i < LIMIT;++i) ... for(int i=O;i < LIMIT;++i) ... The second for loop causes a “duplication definition” compilation error. Furthermore, there’s a quality issue involved: if you really want to ask whether or not an item is present in the table, shouldn’t you choose a better way than testing the loop variable? bool missing = true; for(int i=0; i < LIMIT; ++i) { if (key == table[i]) { missing = false; break; } } if (missing) { // didn't find the key item } Or if you want to work on the found item: Item* item = 0; for(int i = O; i LIMIT; ++i) { if (key == table[i]) { item = &table[i]; break; } } if (item) { // work on item- } There was another reason – does the following work on your Borland compiler? for (int i = 0; int j = (i < LIMIT); ++i) { // ... } i = j; While i stays in scope, j should go out of scope. This behaviour is a consequence of the decision to introduce run-time type identification in March ‘93 and allow declarations in the condition of if, switch, while and for. Member constants Hopefully, this decision will be more to your liking. Isn’t it annoying that you cannot have typed constants inside a class like you can everywhere else? static const int size=42; class X { public: static const int mySize = 128; // illegal private: char buffer[mySize]; }; You had to muck about with enum and it didn’t always work. Well, we voted to allow the above example. A static integral data member may now have an initialiser inside the class. You still have to provide the static data member definition outside the class somewhere, and that cannot have an initialiser now, but that will probably get changed before we’re done – let us know what you think. Templates We resolved another long list of minor template issues, carrying on the work started in San Jose. We also added some extensions to make templates even more useful. We added the ability to pass templates as arguments to other templates. This allows you to write container class templates and then write a container encapsulation class template for which you can specify different types of containers. An example would probably make this clearer: template<class T> class list; template<class T> class dyn_array; template<class K,class V,template<class T> class C> class map { C keys; // a C container of K keys C values; // a C container of V values // ... }; map<g,string,list> mapl; map<g,string,dyn_array> map2; This allows more control over the mechanisms used by template classes and will become more important as the use of templates matures. We also decided to allow some conversions to take place for arguments to template functions. Effectively, this will allow the compiler to perform trivial conversions in order to match a template function. For example: template<Class T> void funcl(T* p) { // don't know whether T // is const or not! } template<Class T> void func2(const T* p) { // we know T is const // now, but we cannot // call func2 with, for // example, a 'char*' // argument - it wouldn't // match the const-ness // of the parameter } This problem is now solved: you can declare func2 as above and you will be able to call it with a char* argument – the compiler will perform the ‘trivial’ conversion from char* to const char*. The final extension we added was to allow member functions to be templates. This solves a problem with writing a safe pointer template class. Consider the following code: class base { ... }; class derived : public base { }; derived* dpl = ... ; base* bpl = dpl; // conversion 'derived*' = 'base*' template<Class T> class ptr { T* p; public: ptr(const ptr & pp) : p(pp.p) [ } // ... }; ptr<derived> dp2 = ... ; ptr<base> bp2 = dp2; // illegal! The problem is that there is no relationship between ptr<base> and ptr<derived> – they are just two different classes. Ideally, you want a constructor for ptr that takes something other than a T* but only those types that can be converted to T*. Member templates provide the solution like this: template<class T> class ptr { T* p; public: template<class U> ptr(const ptr& pp) : p(pp.p) { } // ... } ; ptr<base> bp2 = dp2; // valid: T == base, // U == derived The initialisation p(pp.p) will be valid only for U* that can be converted to T*, so the ptr constructor allows ‘normal’ pointer conversions. We discussed a few more extensions for templates that will probably come up for a vote in Waterloo. These include template typedefs, te mplate name spaces and namespaces as template arguments. These would make templates more orthogonal by removing restrictions on what can be a template. Core Language changes It was decided to disallow const and volatile qualifiers on the top level of a reference declaration – the following is now ill-formed: int i ; int& const ir = i; No one seemed to know what such a declaration should mean, so now you can’t write it! It was decided that the left hand side of . or -> is evaluated, even when the right hand side is a static member. Previously the left hand side was only evaluated if the right hand side was a non-static member. class example { public: static void sf(); void f( ) ; } ; example* g(); g()->f(); // always calls g() g()->sf(); // previously, did not call g(),now it does This makes the language a bit more intuitive – it certainly needs it! Library issues Some minor changes were made to the proposed use of namespaces and exceptions within the library. Quite a few ‘editorial’ changes were agreed for the library section of the draft standard. The main topic of interest for the library was “The Standard Template Library” which was described by its author, Alex Stepanov of Hewlett-Packard. Although it was not discussed in committee – lack of time – there was a great deal of interest in this work as it would fill some of the holes in the library. STL contains various container classes, such as list, and associated iterators which are missing from the current draft standard. It also provides a coherent framework for template classes which the draft standard library lacks. Hopefully, there will be a formal proposal to include this into the draft standard at the Waterloo meeting. In the meantime, I have signed up, with several others, to work with STL and report on what I find. The next Casting Vote column will tell you what happened in Waterloo in July 1994. Sean A. Corfield I can be contacted by e-mail: Sean.Corfield@prqa.co.uk
https://accu.org/index.php/journals/1770
CC-MAIN-2019-47
refinedweb
1,822
59.94
As will probably be painfully obvious, I do not normally program in C. However, after a lot of searching it seems to be the most efficient language to implement what I want to do, namely generate a large number (10 million) of random integers quickly (using an Xorshift* algorithm). The intention is to seed it with /dev/urandom. I have cobbled together the code below from a variety of sources. It now compiles without warning or error with gcc 5.4. However, when run, the program is not pulling a random seed from /dev/urandom (or at least it's not populating the value of "seed" correctly). In fact, the value of "seed" seems to always end up the same on my system - 4195584. In summary, the variation of Xorshift* I cribbed from here requires a uint64_t seed, which I want to pull from /dev/urandom, which is not working currently in my program as it stands. Environment is Linux (Ubuntu 16.04). Assistance appreciated. #include <stdio.h> #include <stdint.h> #include <stdlib.h> #include <fcntl.h> #include <unistd.h> #include <inttypes.h> int main (void) { uint64_t seed; uint64_t newval; long int a; int fd = open("/dev/urandom", O_RDONLY); read(fd,(uint64_t*)seed,sizeof(seed)); close(fd); printf("Seed:"); printf("%" PRIu64 "\n",seed); for ( a = 1; a <= 10000000; a = a + 1 ) { seed ^= seed << 12; seed ^= seed >> 25; seed ^= seed << 27; seed = seed * UINT64_C(2685821657736338717); newval = abs(seed); printf("%" PRIu64 "\n",newval); } } That is because you need to pass the address of the object to read: read( fd , &seed , sizeof(seed)); ^^^^^
https://codedump.io/share/QsO3jswFNXcX/1/c---how-to-read-data-from-devurandom-into-a-uint64t-variable
CC-MAIN-2016-44
refinedweb
260
64.1
C-Style Binary I/O read_size = std::fread(data_ptr, 1, size, file); - read_size - Size of the data that was read. If this is less than size, an end-of-file or error occurred. - data_ptr - Pointer to a buffer to receive the data being read. - 1 - The constant 1. (For the reason behind this constant, see the sidebar.) - size - Number of bytes to be read. - file - Input file. For example: struct { int width; int height; } rectangle; if (std::fread(<static_cast<char *>&rectangle, 1, sizeof(rectangle), in_file) != sizeof(rectangle)) { std::fprintf(stderr, "Unable to read rectangle\n"); exit (8); } In this example you are reading in the structure rectangle. The & operator makes the structure into a pointer. The cast static_cast<char *> turns &rectangle into the proper parameter type, and the sizeof operator is used to determine how many bytes to read in as well as to check that the read was successful. std::fwrite has a calling sequence similar to std::fread: write_size = std::fwrite(data_ptr, 1, size, file); No matter what filename you give Example 16-9, std::fopen can't find it. Why? Example 16-9: fun-file/fun-file.cpp #include <cstdio> #include <cstdlib> int main( ) { char name[100]; /* name of the file to use */ std::FILE *in_file; /* file for input */ std::printf("Name? "); std::fgets(name, sizeof(name), stdin); in_file = std::fopen(name, "r"); if (in_file == NULL) { std::fprintf(stderr, "Could not open file\n"); exit(8); } std::printf("File found\n"); std::fclose(in_file); return (0); } C- Versus C++- Style I/O Both C- and C++- style I/O have their own features and quirks. In this section we'll discuss some of the differences between these two systems. Simplicity Let's say we want to write a simple checkbook program. We need to print an account statement. We need some code to print each line of the account statement (date, check number, payee, and amount). In C the print statement looks like: std::printf("%2d/%2d/%02d %4d: %-40s %f6.2\n", check.date.month, check.date.day, check.date.year, check.number, check.payee, check.amount); In C++ the print statement is: std::cout << setw(2) << check.date.month << '/' << setw(2) << check.date.day << '/' << setw(2) << setfill('0') << check.date.year << ' ' << setw(4) << check.number << ':' << setw(40) << setiosflags(std::ios::left) << check.payee << resetiosflags(std::ios::left) << ' ' << setw(6) << setprecision(2) << setiosflags(std::ios::fixed) << check.amount << setw(0) << '\n'; From this example we can clearly see that the C-style I/O is more compact. It is not clear that compact is better. This author prefers the compact style of the C std::printf functions, while many others prefer the verbosity of the C++ I/O system. Besides if you're C++ programmers, you probably should program in C++ and not bring legacy I/O systems into the mix. Although it looks like C is more compact, things are not as obvious as they look. A well-designed date class would have its own output operator. Thus we can simplify our C++ code down to: std::cout << check.date << setw(4) << check.number << ':' << setw(40) << setiosflags(std::ios::left) << check.payee << resetiosflags(std::ios::left) << ' ' << setw(6) << setprecision(2) << setiosflags(std::ios::fixed) << check.amount << setw(0) << '\n'; But this assumes that only the date has an output operator. If we designed our check class correctly, it should have one as well. This means that our code now has been simplified down to: std::cout << check << '\n'; Now this doesn't mean that complexity has gone away. It's merely been moved from outside the class to inside it. This example serves to illustrate one of the key differences between C and C++. In C-style I/O, the information on how to manipulate the data (in this case, how to print it) is contained outside the data itself. In C++ it's possible to put the manipulation code and the data into a single class. If we are writing out our checkbook information in only one place, the C version may be simpler and easier to work with. So for simple programs, you may want to consider using C-style I/O. But suppose that we wanted to print out the data to a number of places. If we used C-style I/O, we would have to replicate our format code all over the place or create a small function to do the printing. With C++'s classes, we can keep the printing information in one logical place. (As a person who's just had to rewrite all the C-style format statements in a rather large piece of code, I can tell you that putting the formatting information in one place, the object, has some advantages.) Page: 1 2 3 4 5 6 7 8 9 Next There are no comments yet. Be the first to comment!
http://www.codeguru.com/columns/chapters/article.php/c6667/CStyle-Binary-IO.htm
CC-MAIN-2015-40
refinedweb
816
66.33
d'oh, I fell for the "reply-all" trick. :) Here's the code in question: Apparently there are built in functions hex() and oct() to generate hexadecimal and octal digit strings of numbers, but there's no corresponding bin(). Kind of a bizarre oversight, if you ask me. Searching on the internet, I found this: def bin(integer, returnType=str): bin = {'0':'000','1':'001','2':'010','3':'011','4':'100','5':'101','6':'110',' 7':'111'} if returnType == int: return int(''.join([bin[i] for i in oct(integer)])) elif returnType == long: return long(''.join([bin[i] for i in oct(integer)]),10) else: return (''.join([bin[i] for i in oct(integer)])).lstrip("0") Just define this in the program you are writing and use bin as you would use oct or hex, making sure to specify int or long as the return type if you don't want a str. On Feb 21, 2007, at 1:18 AM, Dick Moores wrote: > Well, I can't compare mine with yours (where is it?), but using the > template in timeit.py: > > def inner(_it, _timer): > from decToBin import computeBin > _t0 = _timer() > for _i in _it: > computeBin(12345678901234567890) > _t1 = _timer() > return _t1 - _t0 > """ > I get these results: > > for computeBin(12345678901234567890) > 1000 loops, best of 3: 448 usec per loop > > for computeBin(1234567890) > 10000 loops, best of 3: 59.7 usec per loop > > for computeBin(12345) > 10000 loops, best of 3: 35.2 usec per loop > > Dick Moores > > At 09:04 PM 2/20/2007, David Perlman wrote: >> I >> > >> >> -- >> -dave---------------------------------------------------------------- >> After all, it is not *that* inexpressible. >> -H.H. The Dalai Lama > > -- -dave---------------------------------------------------------------- After all, it is not *that* inexpressible. -H.H. The Dalai Lama
https://mail.python.org/pipermail/tutor/2007-February/052856.html
CC-MAIN-2016-44
refinedweb
284
72.56
[No idea if I replied to this, but I'm doing it now.] Hopefully its useful. It is, I have applied it to my GNU Mach NIC update patch, it should be avaiable at in a while (hours more like it). Were these all the PCI ID's for eepro100.c in Linux 2.4.23? If not, then could you submit a patch that adds the missing ones? --- eepro100.orig.c 2004-08-05 09:31:10.000000000 +0700 +++ eepro100.c 2004-08-05 09:40:41.000000000 +0700 @@ -43,6 +43,12 @@ Changed command completion time and added debug info as to which CMD timed out. Problem reported by: "Ulrich Windl" <Ulrich.Windl@rz.uni-regensburg.de> + + +HURD NOTES: + 2004-08-05 Arief M Utama <arief_mulya@yahoo.com> + * Add extra pci_tbl information to make it work on my T30 + information taken from linux-2.4.23 eepro100.c */ Make it a real ChangeLog entry next time, and please follow GCS conventions. @@ -681,8 +775,8 @@ #endif pcibios_read_config_byte(pci_bus, pci_device_fn, PCI_INTERRUPT_LINE, &pci_irq_line); - pciaddr = pci_ioaddr; - irq = pci_irq_line; + pciaddr = 0x8000; + irq = 0x0b; pdev->irq = irq; #endif } Removed this bit.
https://lists.debian.org/debian-hurd/2004/10/msg00037.html
CC-MAIN-2014-15
refinedweb
188
69.68
A Basic Branch and Bound Solver in Python using Cvxpy Branch: $ \max v \cdot x$ s.t. $ \sum_i s_i x_i<= capacity $, $ x \in {0,1}$. The basic heuristic I’m using is to branch on variables that are either 0 or 1 in even the relaxed solution. The alternative branch hopefully gets pruned fast. import cvxpy as cvx import copy from heapq import * import numpy as np import itertools counter = itertools.count() class BBTreeNode(): def __init__(self, vars = set(), constraints = [], objective=0, bool_vars=set()): self.vars = vars self.constraints = constraints self.objective = objective self.bool_vars = bool_vars self.children = [] def buildProblem(self): prob = cvx.Problem(cvx.Minimize(self.objective), self.constraints) #i put Minimize, just so you know that I'm assuming it return prob def is_integral(self): return all([abs(v.value - 1) <= 1e-3 or abs(v.value - 0) <= 1e-3 for v in self.bool_vars]) def branch(self): children = [] for b in [0,1]: n1 = copy.deepcopy(self) #yeesh. Not good performance wise, but is simple implementation-wise v = n1.heuristic() #dangerous what if they don't do the same one? I need to do it here though because I need access to copied v. n1.constraints.append( v == b ) # add in the new binary constraint n1.children = [] n1.bool_vars.remove(v) #remove binary constraint from bool var set n1.vars.add(v) #and add it into var set for later inspection of answer #self.children.append(n1) # eventually I might want to keep around the entire search tree. I messed this up though children.append(n1) return children def heuristic(self): # a basic heuristic of taking the ones it seems pretty sure about return min([(min(1 - v.value, v.value) , i, v) for i, v in enumerate(self.bool_vars)])[2] def bbsolve(self): root = self res = root.buildProblem().solve() heap = [(res, next(counter), root)] bestres = 1e20 # a big arbitrary initial best objective value bestnode = root # initialize bestnode to the root print(heap) nodecount = 0 while len(heap) > 0: nodecount += 1 # for statistics print("Heap Size: ", len(heap)) _, _, node = heappop(heap) prob = node.buildProblem() res = prob.solve() print("Result: ", res) if prob.status not in ["infeasible", "unbounded"]: if res > bestres - 1e-3: #even the relaxed problem sucks. forget about this branch then print("Relaxed Problem Stinks. Killing this branch.") pass elif node.is_integral(): #if a valid solution then this is the new best print("New Best Integral solution.") bestres = res bestnode = node else: #otherwise, we're unsure if this branch holds promise. Maybe it can't actually achieve this lower bound. So branch into it new_nodes = node.branch() for new_node in new_nodes: heappush(heap, (res, next(counter), new_node ) ) # using counter to avoid possible comparisons between nodes. It tie breaks print("Nodes searched: ", nodecount) return bestres, bestnode # a simple knapsack problem. we'll want to minimize the total cost of having each of these items, with different sizes. # Use a random problem instance N = 20 prices = -np.random.rand(N) sizes = np.random.rand(N) print(prices) x = cvx.Variable(N) constraints = [] constraints += [x <= 1, 0 <= x] #The relaxation of the binary variable constraint constraints += [sizes*x <= 5] # total size of knapsack is 5 objective = prices * x bool_vars = {x[i] for i in range(N)} root = BBTreeNode(constraints = constraints, objective= objective, bool_vars = bool_vars) res, sol = root.bbsolve() print(sorted(list([(v.name(), v.value) for v in sol.bool_vars] + [(v.name(), v.value) for v in sol.vars] ) )) # For comparison let's do the same problem using a built in mixed integer solver. x = cvx.Variable(N, boolean=True) constraints = [] constraints += [x <= 1, 0 <= x] constraints += [sizes*x <= 5] objective = prices * x prob = cvx.Problem(cvx.Minimize(objective),constraints) prob.solve() print(x.value). Nodes searched: 67 [('var0[0]', 0.9999999958228145), ('var0[10]', -1.2718338055950193e-08), ('var0[11]', -1.3726395012104872e-08), ('var0[12]', 0.9999999982326986), ('var0[13]', 0.9999999973744331), ('var0[14]', 0.9999999988156902), ('var0[15]', -1.1908085711772973e-08), ('var0[16]', 0.9999999903780872), ('var0[17]', 0.9999999863334883), ('var0[18]', -1.1481655920777931e-08), ('var0[19]', 0.9999999996667646), ('var0[1]', 0.9999999969549299), ('var0[2]', 0.9999999979596141), ('var0[3]', -9.282428548104736e-09), ('var0[4]', -1.1378022795740783e-08), ('var0[5]', 0.9999999868240312), ('var0[6]', 0.9999999995068807), ('var0[7]', 0.9999999995399617), ('var0[8]', 0.9999999859520627), ('var0[9]', 0.9999999948062767)] [ 1.00000000e+00 1.00000000e+00 1.00000000e+00 -1.44435650e-12 -1.88491321e-12 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 -7.11338729e-13 1.99240081e-13 1.00000000e+00 1.00000000e+00 1.00000000e+00 -1.48697107e-12 1.00000000e+00 1.00000000e+00 -1.75111698e-12 1.00000000e+00]. #rough sketch b = cvx.Variable(N) u = cvx.Parameter(N) u.value = np.ones(N) l = cvx.Parameter(N) l.value = np.zeros(N) constraints += [b <= u, l <= b] # change l.value and u.value in search loop.
https://www.philipzucker.com/a-basic-branch-and-bound-solver-in-python-using-cvxpy/
CC-MAIN-2021-39
refinedweb
805
63.46
Solving problem is about exposing yourself to as many situations as possible like What are “named tuples” are “named tuples” in Python?, which can be followed any time. Take easy to follow this discuss. Reading the changes in Python 3.1, I found something… unexpected: The sys.version_info tuple is now a named tuple: I never heard about named tuples before, and I thought elements could either be indexed by numbers (like in tuples and lists) or by keys (like in dicts). I never expected they could be indexed both ways. Thus, my questions are: - What are named tuples? - How to use them? - Why/when should I use named tuples instead of normal tuples? - Why/when should I use normal tuples instead of named tuples? - Is there any kind of “named list” (a mutable version of the named tuple)? Answer #1: line_length = sqrt((pt1[0]-pt2[0])**2 + (pt1[1]-pt2[1])**2) Using a named tuple it becomes more readable: from collections import namedtuple Point = namedtuple('Point', 'x y') pt1 = Point(1.0, 5.0) pt2 = Point(2.5, 1.5) from math import sqrt line line_length = sqrt((pt1[0]-pt2[0])**2 + (pt1[1]-pt2[1])**2) # use tuple unpacking x Attribute print. Answer #2: print row #Prints: Row(a=1, b=2, c=3) print row.a #Prints: 1 print row[0] #Prints: 1 row = Row._make([2, 3, 4]) #Make a namedtuple from a list of values print row #Prints: Row(a=2, b=3, c=4) Answer class 1 ant.bar 'bar' ant.baz.append('anything') ant.baz ['anything']ant_tuple Student class Student('Lisa', 'Simpson', 'A') first, last, grade = student first 'Lisa' last 'Simpson' grade 'A' student[0] 'Lisa' student[2] 'A' len(student) 3 'Lisa' in student True 'Bart' in student False student.first = 'Bart' for i in student: print(i) Bart Simpson A Answer #4: namedtuples are a great feature, they are perfect container for data. When you have to “store” data you would use tuples or dictionaries, like: user = dict(name="John", age=20) or: user = ("John", 20) The dictionary approach is overwhelming, since dict are mutable and slower than tuples. On the other hand, the tuples are immutable and lightweight but lack readability for a great number of entries in the data fields. namedtuples are the perfect compromise for the two approaches, the have great readability, lightweightness and immutability (plus they are polymorphic!). Answer #5: named tuples allow backward compatibility with code that checks for the version like this 0:2] (3, 1)sys.version_info[ while allowing future code to be more explicit by using this syntax 3 sys.version_info.minor 1sys.version_info.major Answer #6: namedtuple is one of the easiest ways to clean up your code and make it more readable. It self-documents what is happening in the tuple. Namedtuples instances are just as memory efficient as regular tuples as they do not have per-instance dictionaries, making them faster than dictionaries. from collections import namedtuple Color = namedtuple('Color', ['hue', 'saturation', 'luminosity']) p = Color(170, 0.1, 0.6) if p.saturation >= 0.5: print "Whew, that is bright!" if p.luminosity >= 0.5: print "Wow, that is light" Without naming each element in the tuple, it would read like this: p = (170, 0.1, 0.6) if p[1] >= 0.5: print "Whew, that is bright!" if p[2]>= 0.5: print "Wow, that is light" It is so much harder to understand what is going on in the first example. With a namedtuple, each field has a name. And you access it by name rather than position or index. Instead of p[1], we can call it p.saturation. It’s easier to understand. And it looks cleaner. Creating an instance of the namedtuple is easier than creating a dictionary. # dictionary >>>p = dict(hue = 170, saturation = 0.1, luminosity = 0.6) >>>p['hue'] 170 #nametuple >>>from collections import namedtuple >>>Color = namedtuple('Color', ['hue', 'saturation', 'luminosity']) >>>p = Color(170, 0.1, 0.6) >>>p.hue 170 When might you use namedtuple - As just stated, the namedtuple makes understanding tuples much easier. So if you need to reference the items in the tuple, then creating them as namedtuples just makes sense. - Besides being more lightweight than a dictionary, namedtuple also keeps the order unlike the dictionary. - As in the example above, it is simpler to create an instance of namedtuple than dictionary. And referencing the item in the named tuple looks cleaner than a dictionary. p.huerather than p['hue']. The syntax collections.namedtuple(typename, field_names[, verbose=False][, rename=False]) - namedtuple is in the collections library. - typename: This is the name of the new tuple subclass. - field_names: A sequence of names for each field. It can be a sequence as in a list ['x', 'y', 'z']or string x y z(without commas, just whitespace) or x, y, z. - rename: If rename is True, invalid fieldnames are automatically replaced with positional names. For example, ['abc', 'def', 'ghi','abc']is converted to ['abc', '_1', 'ghi', '_3'], eliminating the keyword 'def'(since that is a reserved word for defining functions) and the duplicate fieldname 'abc'. - verbose: If verbose is True, the class definition is printed just before being built. You can still access namedtuples by their position, if you so choose. p[1] == p.saturation. It still unpacks like a regular tuple. Methods All the regular tuple methods are supported. Ex: min(), max(), len(), in, not in, concatenation (+), index, slice, etc. And there are a few additional ones for namedtuple. Note: these all start with an underscore. _replace, _make, _asdict. _replace Returns a new instance of the named tuple replacing specified fields with new values. The syntax somenamedtuple._replace(kwargs) Example >>>from collections import namedtuple >>>Color = namedtuple('Color', ['hue', 'saturation', 'luminosity']) >>>p = Color(170, 0.1, 0.6) >>>p._replace(hue=87) Color(87, 0.1, 0.6) >>>p._replace(hue=87, saturation=0.2) Color(87, 0.2, 0.6) Notice: The field names are not in quotes; they are keywords here. Remember: Tuples are immutable – even if they are namedtuples and have the _replace method. The _replace produces a new instance; it does not modify the original or replace the old value. You can of course save the new result to the variable. p = p._replace(hue=169) _make Makes a new instance from an existing sequence or iterable. The syntax somenamedtuple._make(iterable) Example >>>data = (170, 0.1, 0.6) >>>Color._make(data) Color(hue=170, saturation=0.1, luminosity=0.6) >>>Color._make([170, 0.1, 0.6]) #the list is an iterable Color(hue=170, saturation=0.1, luminosity=0.6) >>>Color._make((170, 0.1, 0.6)) #the tuple is an iterable Color(hue=170, saturation=0.1, luminosity=0.6) >>>Color._make(170, 0.1, 0.6) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 15, in _make TypeError: 'float' object is not callable What happened with the last one? The item inside the parenthesis should be the iterable. So a list or tuple inside the parenthesis works, but the sequence of values without enclosing as an iterable returns an error. _asdict Returns a new OrderedDict which maps field names to their corresponding values. The syntax somenamedtuple._asdict() Example >>>p._asdict() OrderedDict([('hue', 169), ('saturation', 0.1), ('luminosity', 0.6)]) Reference: There is also named list which is similar to named tuple but mutable Answer #7: What is namedtuple ? As the name suggests, namedtuple is a tuple with name. In standard tuple, we access the elements using the index, whereas namedtuple allows user to define name for elements. This is very handy especially processing csv (comma separated value) files and working with complex and large dataset, where the code becomes messy with the use of indices (not so pythonic). How to use them ? >>>from collections import namedtuple >>>saleRecord = namedtuple('saleRecord','shopId saleDate salesAmout totalCustomers') >>> >>> >>>#Assign values to a named tuple >>>shop11=saleRecord(11,'2015-01-01',2300,150) >>>shop12=saleRecord(shopId=22,saleDate="2015-01-01",saleAmout=1512,totalCustomers=125) Reading >>>#Reading as a namedtuple >>>print("Shop Id =",shop12.shopId) 12 >>>print("Sale Date=",shop12.saleDate) 2015-01-01 >>>print("Sales Amount =",shop12.salesAmount) 1512 >>>print("Total Customers =",shop12.totalCustomers) 125 Interesting Scenario in CSV Processing : from csv import reader from collections import namedtuple saleRecord = namedtuple('saleRecord','shopId saleDate totalSales totalCustomers') fileHandle = open("salesRecord.csv","r") csvFieldsList=csv.reader(fileHandle) for fieldsList in csvFieldsList: shopRec = saleRecord._make(fieldsList) overAllSales += shopRec.totalSales; print("Total Sales of The Retail Chain =",overAllSales) Answer #8: In Python inside there is a good use of container called a named tuple, it can be used to create a definition of class and has all the features of the original tuple. Using named tuple will be directly applied to the default class template to generate a simple class, this method allows a lot of code to improve readability and it is also very convenient when defining a class.
https://discuss.dizzycoding.com/what-are-named-tuples-in-python/
CC-MAIN-2022-33
refinedweb
1,494
58.99
You can subscribe to this list here. Showing 4 results of 4 This release adds the following functionality: * Job queueing and notification * umlsh console * uml_mconsole con/ssl handling bugfixes Job handling allows a job to be processed in the background. Currently only one job is performed at any time, so they are queued and performed in turn. The resulting output from a job is e-mailed to the individual who started it, which requires the e-mail address to be the second gecos field in /etc/passwd. umlsh allows UML administrators to access their consoles without requiring a shell on the host system. Users on the UML host should have their shell modified to umlsh, rather than bash or csh, so that they get the umlsh console, rather than a regular Unix shell. Note that this is not currently an administrative console, so users are unable to perform any actions on UMLs through the use of this tool. Currently one needs to modify the script directly to provide it with the secret for accessing the UMLd service, although it will pull it from umld.conf in the future New commands ============ queue <command> * Add a job to the command queue jinfo <jobid> * Get information on a job jlist * Get a list of jobs in command queue jpri <jobid> <newpri> * Change job priority (requires admin access) jdel <jobid> * Delete a job from queue jclean * Clean out old jobs ChangeLog ========= * Background job support * 'create' command queueing * Console interface * IP/host based authentication TODO ===== * Improved support for network subnets * Verify and include iptables commands * Web interface * motd script for users * Clean up daemon response codes * Document protocol Sites ===== UMLd site: UML paper: Official UML site: UML Community site: David -- David Coulson email: d@... Linux Developer / web: Network Engineer phone: (216) 533-6967 This patch updates UML to 2.5.62 and merges in the changes pending in the 2.4 UML pool. Among the changes: More help entries in the config Build fixes, updates, and cleanups ubd driver cleanups and fixes uaccess fencepost fix The 2.5.62-1 UML patch is available at For the other UML mirrors and other downloads, see Other links of interest: The UML project home page : The UML Community site : Jeff Hello! This patch breaks include loop, that breaks uml compile starting from today's bk snapshot of 2.5.62 (asm/processor-generic.h part), also some missed includes popped up, so I fixed those too. Please consider. Bye, Oleg ===== arch/um/drivers/chan_kern.c 1.10 vs edited ===== --- 1.10/arch/um/drivers/chan_kern.c Thu Feb 6 11:13:06 2003 +++ edited/arch/um/drivers/chan_kern.c Wed Feb 19 14:53:19 2003 @@ -8,6 +8,7 @@ #include <linux/list.h> #include <linux/slab.h> #include <linux/tty.h> +#include <linux/string.h> #include <linux/tty_flip.h> #include <asm/irq.h> #include "chan_kern.h" ===== arch/um/drivers/xterm_kern.c 1.5 vs edited ===== --- 1.5/arch/um/drivers/xterm_kern.c Thu Feb 6 11:13:06 2003 +++ edited/arch/um/drivers/xterm_kern.c Wed Feb 19 15:05:32 2003 @@ -6,6 +6,7 @@ #include "linux/errno.h" #include "linux/slab.h" #include "asm/semaphore.h" +#include "linux/signal.h" #include "asm/irq.h" #include "irq_user.h" #include "kern_util.h" ===== arch/um/include/sysdep-i386/checksum.h 1.2 vs edited ===== --- 1.2/arch/um/include/sysdep-i386/checksum.h Tue Feb 18 21:38:27 2003 +++ edited/arch/um/include/sysdep-i386/checksum.h Wed Feb 19 15:14:47 2003 @@ -6,6 +6,7 @@ #define __UM_SYSDEP_CHECKSUM_H #include "linux/string.h" +#include "linux/in6.h" /* * computes the checksum of a memory block at buff, length len, ===== arch/um/kernel/sigio_kern.c 1.3 vs edited ===== --- 1.3/arch/um/kernel/sigio_kern.c Wed Oct 16 10:49:24 2002 +++ edited/arch/um/kernel/sigio_kern.c Wed Feb 19 15:05:27 2003 @@ -6,6 +6,7 @@ #include "linux/kernel.h" #include "linux/list.h" #include "linux/slab.h" +#include "linux/signal.h" #include "asm/irq.h" #include "init.h" #include "sigio.h" ===== include/asm-um/processor-generic.h 1.7 vs edited ===== --- 1.7/include/asm-um/processor-generic.h Thu Feb 6 11:13:08 2003 +++ edited/include/asm-um/processor-generic.h Wed Feb 19 15:04:33 2003 @@ -11,9 +11,7 @@ struct task_struct; #include "linux/config.h" -#include "linux/signal.h" #include "asm/ptrace.h" -#include "asm/siginfo.h" #include "choose-mode.h" struct mm_struct; Jeff Dike wrote: > I think it was more recently than that. "probably about" can be interpreted as "I think maybe, but I might be wrong" > Yup, I thought it was a cool idea then, and I haven't changed my mind since :-) :-) > If you get something reasonable working, I'll move the UML project into one... > > I've been looking for excuses to get off SF, and this would be a fine one. I've got a Dual 1Ghz box sitting doing nothing at my colo. Maybe you would care to make use of some spare CPU cycles? David -- David Coulson email: d@... Linux Developer / web: Network Engineer phone: (216) 533-6967
http://sourceforge.net/p/user-mode-linux/mailman/user-mode-linux-devel/?viewmonth=200302&viewday=19
CC-MAIN-2015-48
refinedweb
858
60.82
(Planet image by David A. Aguilar (CfA), Some rights reserved) Despite the Hadoop hype machine crunching away, not everyone is fond of that little yellow elephant. In fact, some fear it. But why should the cute mammal and the innovative data processing technology that it represents raise anxiety levels? Everyone has their reasons. Hadoop Is Confusing </p> According to a recent Gartner webinar, many people just don’t know what they’re supposed to do with Hadoop. Maybe the exaggerated hype or the fact that it’s a highly technical product has scared people off. Let’s get the story straight. Hadoop is not the end all solution to every data problem. What Hadoop does is store large files over a cluster of commodity servers and process that data in parallel with MapReduce. It can, for example, be used as part of an ETL process or as a storage facility. Having said that, various projects based on Hadoop, such as HBase, add more capabilities and Hadoop’s latest major version opens it up to new applications such as Tez for interactive data or Storm for streaming data. We are likely to see it used in new and exciting ways real soon. Hadoop Is Too New </p> Relative to other technologies Apache Hadoop is the new data store on the block. While it was released in 2005, Oracle SQL has been here since 1978, Microsoft SQL Server since 1989, and MySQL since 1995. Some fear that Hadoop is not ready for production yet because it is still being developed and has yet to fully mature like the other technologies have. Although Hadoop is young, not everyone is afraid to use it. Companies like Facebook, Yahoo, and more than half of the Fortune 50 use Hadoop. Several enterprise level Hadoop distributions such as Hortonworks and Cloudera are available and so is high level support. Hadoop itself is starting to mature with a new major version that was released just a few months ago. It may be named after a child’s toy, but it certainly isn’t one. My Data Is Not Big Enough </p> Just how big is Big Data? Rumor has it that if you do not have petabytes (1015 bytes) of data then there is no point to use Hadoop. ‘Tis true that Hadoop is designed for handling huge volumes of data in great velocity and variety. However, you do not need petabytes of data in order to use Hadoop to your advantage. Processing even just a hundred gigabytes with Hadoop is affordable on the public cloud. The data can be processed only when required on a pay-as-you-go basis, so there is no need to buy and maintain any machines. Prices on Amazon Elastic Map Reduce start from $0.075 an hour. Actually, companies consider Big Data for sizes that are much smaller than petabytes. According to a survey made by NewVantage Partners, only 7% of the 50+ top executives from government and Fortune 500 firms consider using Big Data because they need to analyze more than 1 PB of data. The other 93% have other needs, like analyzing streaming data or data from diverse sources. Altogether, 28% of the survey’s participants consider using Big Data despite having less than 1 PB of data. Even if you do not have much data right now, your company will hopefully grow and so will your data. Soon enough your relational database will not be able to handle the batch processing heat, especially considering the data explosion of social data and the up and coming Internet of things. Once you have terabytes of data you will need to scale and scale again when you reach the petabyte range, so you might as well scale to Hadoop now. Since Hadoop scales horizontally, it means you can start off with a small cluster and add the exact number of machines or cloud instances as you grow. Can’t Replace My IT Department </p> IT departments are trained in the old ways of the SQL Jedi. Getting them to implement and maintain Hadoop requires training or hiring new personnel which are in high demand and expensive. This is true if you run Hadoop on-premise. The maintenance hassle can be avoided by using Hadoop-as-a-Service providers such as Amazon’s Elastic Map Reduce, though it still requires some technical skills to configure and setup a Hadoop environment. Other solutions, like (shameless self promotion) our platform, automate setup and configuration, enable cluster creation, monitoring, and proactive and reactive maintenance with just a few clicks. Hadoop Is Hard to Learn </p> Hadoop works with MapReduce technology that is programmed in Java, yet the common language to query and process data (except for Excel) is SQL. To program MapReduce jobs one must learn not only Java, but a different approach to processing data than relational database querying. Once again, the choice seems to be re-education or re-hiring which are non-options. This fear can be partially resolved. Pig Latin, a high level querying language, and Hive, an SQL like querying language, are both available for processing data on Hadoop without even one line of Java code. They still require some training though and there’s no way to avoid learning MapReduce. Certain platforms, ours included, ease up the learning curve by providing a user interface to process data without any code at all. </p><p>(Fear sign image by Sistak, modified by Xplenty, Some rights reserved)</p> Hadoop Is not Secure </p> Companies have sensitive data. They want to make sure it is fully secure and control who can see it, what they can do with it, and how many resources they can use. Some don’t believe Hadoop provides these features. This is wrong. Hadoop provides user authentication via Kerberos and authorization via file system permissions. YARN also has a new feature called federations which can divide a cluster into several namespaces, thus isolating different sections of the cluster and preventing users from messing with data that does not belong to them. The Apache Accumulo project, a distributed key/value store which runs on top of Hadoop, steps it up a notch by providing access control per cell of data, perfect for security control freaks. Hadoop Has a Single Point of Failure </p> Hadoop uses a server called the NameNode that keeps stock of all the files and where they are saved across the cluster. If the NameNode goes bye bye then the entire cluster and all its files become unusable. Indeed, this was a major weakness in the previous Hadoop version. To prepare for NameNode catastrophes the solution was to continuously back it up to another machine and change the DNS name to point to the backup machine in case of a failure. Fear not, for this is resolved in Hadoop YARN with new high availability features including an up to date standby NameNode and automated failovers. Hadoop Is Too Expensive </p> Some people think that Big Data costs Big Bucks. Whereas relational databases seem affordable, Hadoop seems like a luxury that only enterprises can afford. Actually, the opposite is true - Hadoop is the most cost-effective Big Data solution ever. In the past, only large enterprises could afford proprietary IBM or EMC racks to store and process Big Data, servers that can cost millions of dollars. Hadoop, however, runs on commodity machines which companies of all sizes can afford. Hadoop services on the cloud bring prices down even further by providing clusters with pay-as-you-go models. Hadoop Sucks for Small Files </p> The fear is that Hadoop cannot handle small files in the kilo or megabyte range. And you know what? It’s true. If you store a lot of files that are much smaller than the HDFS block size, 64 MB by default, Hadoop’s performance isn’t going to be good. This is totally solvable. The simple fix, at least for small text files, is to concatenate them into a few huge files. If the files cannot be unified (e.g. image files), Hadoop Archives (HAR files), could do the trick. HAR files represent another filesystem layer on top of HDFS that archives many files into several HAR files via the command line. The archived files can be accessed directly using har:// URLs. Open Source Software Is Unreliable </p> Hadoop is an open source project. Some folks still believe the old myths that open source is not secure, not reliable, and lacks support. Obviously this is not true. Open source projects like the Apache web server and MySQL have proven themselves many times over in terms of security and reliability. Actually, some of the major contributors to Hadoop are big commercial companies like Yahoo, Facebook, and even Microsoft (see, for example, The Stinger Initiative). As mentioned above, enterprise Hadoop distributions and support are also available. Summary </p> Quite a few fears prevent companies from using Hadoop, from lack of knowledge to what Hadoop actually does, through IT concerns, and all the way to mistrust in open source. But many of these fears are based on superstition and those that are not are resolvable. Maybe it’s time for a reality check as to why your organization resists using Hadoop and whether these fears are actually founded.
https://www.xplenty.com/blog/fear-hadoop-planet/
CC-MAIN-2018-30
refinedweb
1,553
61.87
23917/why-are-you-not-able-to-declare-a-class-as-static-in-java Why are you not able to declare a class as static in Java? Declare your class like this: public class eOAuth { private eOAuth(){} public final static int ECodeOauthInvalidGrant = 0x1; public final static int ECodeOauthUnknown = 0x10; public static GetSomeStuff(){} } and you can used without initialization: if (value == eOAuth.ECodeOauthInvalidGrant) eOAuth.GetSomeStuff(); So the static keyword can be used only for variables and functions and not for class? You can use string.indexOf('s'). If the 's' is present in string, ...READ MORE You can use Java Runtime.exec() to run python script, ...READ MORE You can use this method: String[] strs = ...READ MORE Here are two ways illustrating this: Integer x ...READ MORE Java has "static nested classes", but they ...READ MORE As per my knowledge, you are getting this error ...READ MORE Well, the major idea behind this concept ...READ MORE Hello @kartik, If you're serializing just because you ...READ MORE Interfaces are concerned with polymorphism which is ...READ MORE Using nio we can check whether file ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/23917/why-are-you-not-able-to-declare-a-class-as-static-in-java?show=48961
CC-MAIN-2021-39
refinedweb
208
68.06
* A friendly place for programming greenhorns! Big Moose Saloon Search | Java FAQ | Recent Topics | Flagged Topics | Hot Topics | Zero Replies Register / Login Win a copy of JavaScript Promises Essentials this week in the JavaScript forum! JavaRanch » Java Forums » Java » Threads and Synchronization Author Interview Question -What if the listener deregisters at the same time as when the observable changes Chan Ag Bartender Joined: Sep 06, 2012 Posts: 1049 15 posted Aug 29, 2013 08:57:25 0 Hi, I've recently had a telephonic interview and I quite enjoyed it cause it was more like a discussion. The interviewer asked me a question that I tried to answer, but it seems the interviewer was looking for something else which is a very 'easy to see/analyze' thing that I couldn't even think of. Also during the interview, I had no access to any visible piece of code. We started with different topics and ultimately in one of my answers I mentioned the Observer pattern . So the interviewer started asking me questions about the Observer pattern. In one my answers, I said we could write a modified version of it by making Observable also an interface. The interviewer immediately asked me a question that I had to really think a lot about. He said what if an observer de-registers at the same time as when the observable changes. I said clearly there is a race condition here and we need to synchronize the access, so de-registration cannot happen at the same time as the update. He then asked where would this de-registration code be coded. I said it would be declared in the source interface, implemented by the handler class that would maintain listeners for a source, and it would be invoked by the listener. The listener method that would invoke this code would have to be synchronized with the method which the handler invokes when the observable changes. Then he said there is a very apparent problem here that I have missed. I thought for two more minutes and then said that the handler probably would also have a collection that would store the listeners for a source. It would have a method that would be invoked by the listeners for de-registration and a method that would update all listeners when the observable changes. So it might be possible that after the observable changes, the handler creates an iterator to update all the listeners but the listener might have invoked the de-registration code after the iterator is created by the handler. So if the synchronization is not correct, the iterator might throw a ConcurrentModificationException. It seems I still hadn't even touched upon what the interviewer was looking for. And it was just a telephonic round with us having no access to any visible code so it was hard for me to remember how everything functions and to imagine and answer. But what is it I must be missing here? It must be a widely known thing cause he said there is clearly a problem that I hadn't even mentioned. What is it I may have missed? Was anything I said incorrect? Thanks for your help, Chan. Steve Luke Bartender Joined: Jan 28, 2003 Posts: 4181 21 I like... posted Aug 29, 2013 11:10:52 1 Trying to guess what someone was thinking is impossible (especially when we don't know the specifics about the conversation). If he didn't share his apprehension then I wouldn't worry about it too much. My thought when reading your description would be about throughput, I could imagine if multiple notifications came in then they would all be competing with each other for the synchronized lock, and could slow the system down. I might consider using a ReadWriteLock so as to let multiple notifications to happen without collision, but removing a listener (or adding a new one) would be a Write operation and would be isolated from other access. You could also use the 'snapshot' view of the listener collection, like a CopyOnWrite type of collection, if you suppose adding and removing listeners aren't very frequent. Yet another approach is to force all notifications and listener registration/de-registration to happen on a single dispatching thread so collisions are impossible. Every registration, deregistration, and notification signal would put a task in a queue, and the dispatcher thread would consume the queue executing all the tasks in the order they entered it. Your observable pattern would then mimic the EDT thread in GUIs with all the associated suggestions to keep processing short or push long processing off to worker threads to prevent notification backup. But it really all depends on the environment and system, really. Steve Maxim Karvonen Ranch Hand Joined: Jun 14, 2013 Posts: 104 11 posted Aug 29, 2013 12:19:15 1 Hi, all. Chan, you second guess is somewhat right. Listener manipulations may be called during listeners notification phase. But this does not require multiple threads: public class KamikazeListener implements ObserverListener { private Observable observable; public KamikazeListener(Observable o) { o.addListener(this); } @Override public void onObserverChanged(ChangeEvent e) { o.removeListener(this); } } // -------- public static test() { SomeObservable o = new SomeObservable(); new KamikazeListener(o); new KamikazeListener(o); o.doSomethingThatChangesObservable(); } Not a complete code, just an idea for the problem. You can see that no other threads are involved in a problem. No locks (and other synchronization-related stuff) can help. This is not an "incorrectly-synchronized" class. This is a "non-reenterant" class. More common "non-reenterable" methods are methods which cannot (or should not) be called while executing (no direct or indirect recursion). And this is an example of non-reenterant class. It is unsafe to invoke other methods while one method is executed (even in a same thread!). Such classes is a real problem to deal with even in some single-threaded environments (notably javascript). Also this problem often occurs in UI stuff (remove display and detach it from the model after some change). Steve answer is correct when dealing with multiple threads. But not all solutions solve reenterability problem. However, that solutions (for a thread-safety) may be used alongside some solutions for reenterability. Some solutions for reenterability problem: 1. Provide a way to run some code after all listeners are notified. For example, register runnable in the event and call them after all listeners are notified. 2. Clone list of listeners each time before the iteration and iterate on clone, not an original list. Swing uses this approach. Look into javax.swing.event.EventListenerList for future details. 3. Hold another queue for "deferred actions". Mark object "unsafe" during change. When listener manipulation is requested in "unsafe" state, register change in the "deferred actions" queue instead. Run all deferred actions after original update is complete. Something like "last chance" solution but more specialized and with completely different API. 4. Use CopyOnWrite collection. 5. Use "event-dispatching" approach and require to perform listener manipulations in it's own transaction. Key point is that no deregistration is allowed in the listener. Listener should put another task in event queue to deregister itelf. Note that solutions 1-3 are not thread-safe by itself. So you also need to choose some approach to threading problem (one thread for all handlers or proper locking). Solution 4 is good for both problems. I use different solutions in different situations. Usually I prefer CopyOnWrite collections in Java , "action queues" for general listeners in JS and "last-chance" listeners for special cases like reactive programming (functions over observables). Chan Ag Bartender Joined: Sep 06, 2012 Posts: 1049 15 posted Aug 30, 2013 06:21:17 0 Thanks, Steve and Maxim, for your responses. Yea, I felt weird also, while answering this question because it all depends on the implementation. We can have a pattern implemented in many ways. A pattern does not state what data structures to use, and it is not a definitive guide about the subset of operations that can be performed on the objects ( it is just a guideline and it knows nothing about our business requirement ) and what degree of parallelism can be obtained. The worst part was I was not left to myself to figure the answer; I had the phone in one hand with a continuous voice sort of disturbing my ability to think ( though yes he was only trying to give me hints and help me arrive at the specific points he was looking for ), and a paper and pencil in the other hand to jot down the points. Clearly the approach didn't work. I did mention CopyOnWrite collections to him, but it seems he had some other aspect in mind. Perhaps it must be performance or handling of extraneous notification ( I thought the client code would handle 'em well and they shouldn't be an issue worth discussing ), something to do with the event dispatching thread of Swing/graphics APIs. I have no knowledge/experience of Swing and graphics APIs so I wouldn't know about it. If he was expecting an answer related to event dispatching thread, I wouldn't know. I read about reentrancy - . It was interesting. We have been knowingly or unknowingly employing means to make sure that we don't call the methods of a non-reenterant class in an unsafe manner. The updateListeners and register and remove listenerens is an example of this. A mention of the keyword re-entrancy could also be something he was looking for. Ok, so based on your responses, it looks like I may not have excluded some very widely known, biggie thing from my answer. I understand though that all of the above are also very significant aspects when we are working on critical projects where inaccuracies can be fatal. With this in my mind, if I come across a related task in future, I will try to think of these points too - at least I know there is more to research on if I'd work on advanced stuff ever. I understand it feels weird to answer a question which is based on another person's apprehensions and hence I wasn't expecting speedy responses/responses at all. But like always, you've been a magic help. Thanks so much. I agree. Here's the link: subject: Interview Question -What if the listener deregisters at the same time as when the observable changes Similar Threads Writing programs on interviews Falling Standards? How to take an interview ? OOPS Question Java Interview Question - Default atomicity of Java statements. All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/618959/threads/java/Interview-listener-deregisters-time-observable
CC-MAIN-2014-49
refinedweb
1,775
62.78
Gatsby is a free and open-source framework based on React that helps developers build blazing-fast websites and apps. It allows you to build apps using modern web technologies with no headaches at all. Gatsby also comes shipped with features like server-side rendering and the ability to generate static progressive web apps (PWAs). You get code and data splitting out-of-the-box. Gatsby loads only the critical HTML, CSS, data, and JavaScript so your site loads as fast as possible. TypeScript is a superset of JavaScript which primarily provides optional static typing, classes, and interfaces. TypeScript lets you catch bugs early thereby making you a more efficient developer, while at the same time leveraging the already existing JavaScript community. In this article, we’ll take one of the starter Gatsby projects and turn it into a TypeScript app. How to set up a Gatsby app To set up the Gatsby app, you’ll first need to install the Gatsby CLI. The Gatsby CLI tool lets you quickly create new Gatsby-powered sites and run commands for developing Gatsby sites. It can be installed globally by running the command below. yarn global add gatsby-cli Once that’s done, the next step would be to create a Gatsby site. As mentioned above, we’ll be using the Gatsby CLI tool to create your first Gatsby site and we’ll be grabbing one of the starter projects to start with – the Gatsby blog starter. The starter ships with the main Gatsby configuration files you might need to get up and running and also uses GraphQL to fetch the data for the blog. gatsby new my-blog-starter The command above pulls all the files needed for the blog into a folder and then installs all the required dependencies for the app to work. You can navigate into your new site’s directory and start it up to see if everything works as expected. Run the following commands in your terminal. cd my-blog-starter yarn develop Once the yarn develop command is completed, you can open up a new tab in your browser and navigate to to see the Gatsby app in action. Gatsby and GraphQL In case you didn’t know what GraphQL is, GraphQL is a query language that allows you to pull data into your website. It’s the interface that Gatsby uses for managing site data. You can find out more about GraphQL here. Before we see how Gatsby easily incorporates GraphQL, let’s take a look at some of the files and folders that ships with Gatsby and what they do. /src: This directory will contain all of the code related to what you will see on the front-end of your site (what you see in the browser) such as your site header or a page template. . Although I should mention that Gatsby also supports other methods of fetching data i.e you don’t have to use GraphQL. You can use the createPages API to pull unstructured data into Gatsby pages directly, rather than through the GraphQL data layer. So how does Gatsby fetch data from GraphQL? We can do that by using two methods: Page Query and Static Query. Page query Querying data in pages with GraphQL can be done by using Gatsby’s graphql tag. In the gatsby-config file, at the top of the file (lines 2-10), there’s a bit where information about the site is declared, specifically the siteMetadata object. siteMetadata: { title: `Gatsby Starter Blog`, author: `Kyle Mathews`, description: `A starter blog demonstrating what Gatsby can do.`, siteUrl: ``, social: { twitter: `kylemathews`, }, }, We’re going to see how to fetch the data above in a component using GraphQL. The 404.js file in src/pages folder is an example of how to use page queries to fetch data using GraphQL. If you scroll down to the bottom of the file (lines 24-32), you should see a snippet similar to the one below. export const pageQuery = graphql` query { site { siteMetadata { title } } } ` The code snippet above is a GraphQL query and we are fetching the site’s siteMetadata info, specifically, the title of the file. The data retrieved from the query above can then be used in the render function of the component as seen below in the same file. const { data } = this.props const siteTitle = data.site.siteMetadata.title StaticQuery Querying data in Gatsby using StaticQuery can be done by using StaticQuery, a new API that allows components to retrieve data via GraphQL query. In addition to using StaticQuery, we’ll also use the hooks version of StaticQuery called useStaticQuery. An example of a component using StaticQuery is the seo.js file located in the src/components folder. import { useStaticQuery, graphql } from "gatsby" // ... function SEO({ description, lang, meta, title }) { const { site } = useStaticQuery( graphql` query { site { siteMetadata { title description author } } } ` ) // ... } As you can see in the code block above, we first import the useStaticQuery method from gatsby. The useStaticQuery method allows your React components to retrieve data via a GraphQL query that will be parsed, evaluated, and injected into the component. The GraphQL query above queries for the site title, description, and author from the gatsby-config.js file. How to integrate TypeScript into a Gatsby app To integrate TypeScript into the Gatsby app, we’ll need to install some dependencies. yarn add gatsby-plugin-typescript yarn add typescript --dev gatsby-plugin-typescript is the Gatsby plugin required to be able to change all .js files to .tsx files. It’s responsible for type checking and transpiling TypeScript code. Once the dependencies are installed, we can add gatsby-plugin-typescript to the gatsby-config.js file (lines 69 – 71). gatsby-plugin-offline, gatsby-plugin-react-helmet, gatsby-plugin-typescript, Next, add tslint.json and tsconfig.json to the root directory of the codebase. tslint.json contains the set of rules that will help with linting TypeScript files and tsconfig.json contains the set of rules that help determine the behavior of TypeScript in the project. // tslint.json { "extends": ["tslint-react"], "rules": { "prettier": true, "jsx-no-multiline-js": false, "jsx-no-lambda": false, "import-name": false, "no-boolean-literal-compare": false } } // tsconfig.json { "compilerOptions": { "module": "commonjs", "target": "esnext", "jsx": "preserve", "lib": ["dom", "esnext"], "strict": true, "noEmit": true, "isolatedModules": true, "esModuleInterop": true, "noUnusedLocals": false, "allowJs": true }, "exclude": ["node_modules", "public", ".cache"] } One more thing to do would be to add a script to the package.json file that we can use to carry out type checking across the whole codebase. "scripts": { // ... "type-check": "tsc --noEmit" } We can now go about renaming the necessary .js files to .tsx files. In addition to renaming the files, we’d also be converting the React components from class components to functional components. bio.js Let’s start with the bio.js file. Rename the file or simply just change the extension so that it’s now src/components/bio.tsx. layout.js Next up is the layout.js file. Change the extension to .tsx and then open the file and edit with the code snippet below. import React from "react" import { Link } from "gatsby" import { rhythm, scale } from "../utils/typography" interface Props { location: Location title: string children?: any } const Layout = ({ location, title, children }: Props) => { const rootPath = `${__PATH_PREFIX__}/` let header if (location.pathname === rootPath) { header = ( <h1 style={{ ...scale(1.5), marginBottom: rhythm(1.5), marginTop: 0, }} > <Link style={{ boxShadow: `none`, textDecoration: `none`, color: `inherit`, }} to={`/`} > {title} </Link> </h1> ) } else { header = ( <h3 style={{ fontFamily: `Montserrat, sans-serif`, marginTop: 0, }} > <Link style={{ boxShadow: `none`, textDecoration: `none`, color: `inherit`, }} to={`/`} > {title} </Link> </h3> ) } return ( <div style={{ marginLeft: `auto`, marginRight: `auto`, maxWidth: rhythm(24), padding: `${rhythm(1.5)} ${rhythm(3 / 4)}`, }} > <header>{header}</header> <main>{children}</main> <footer> © {new Date().getFullYear()}, Built with {` `} <a href="">Gatsby</a> </footer> </div> ) } export default Layout In the code block above, the class component was converted into a functional component and an interface was created for the component’s props. Interfaces in TypeScript help to shape the specifications of an entity. It usually contains the name of all the properties along with their types. seo.js Next up is the seo.js file. Change the extension to .tsx and then open the file and edit with the code snippet below. /** * SEO component that queries for data with * Gatsby's useStaticQuery React hook * * See: */ import React from "react" import Helmet from "react-helmet" import { useStaticQuery, graphql } from "gatsby" interface Props { description?: string lang?: string meta?: [] title: string } const SEO = ({ description, lang, meta, title }: Props) => { const { site } = useStaticQuery( graphql` query { site { siteMetadata { title description author } } } ` ) const metaDescription = description || site.siteMetadata.description return ( <Helmet htmlAttributes={{ lang, }} title={title} titleTemplate={`%s | ${site.siteMetadata.title}`} meta={[ { name: `description`, content: metaDescription, }, { property: `og:title`, content: title, }, { property: `og:description`, content: metaDescription, }, { property: `og:type`, content: `website`, }, { name: `twitter:card`, content: `summary`, }, { name: `twitter:creator`, content: site.siteMetadata.author, }, { name: `twitter:title`, content: title, }, { name: `twitter:description`, content: metaDescription, }, ].concat(meta || [])} /> ) } export default SEO As it was done in the layout.tsx file, we’ve also defined an Interface here for the component’s props. 404.js Next up is the 404.js file. Navigate to the src/pages folder, change the extension to .tsx and then open the file and edit with the code snippet below. import React from "react" import { graphql } from "gatsby" import Layout from "../components/layout" import SEO from "../components/seo" interface Props { data: { site: { siteMetadata: { title: string } } } } const NotFoundPage = ({ data }: Props) => { const siteTitle = data.site.siteMetadata.title return ( <Layout location={window.location} title={siteTitle}> <SEO title="404: Not Found" /> <h1>Not Found</h1> <p>You just hit a route that doesn't exist... the sadness.</p> </Layout> ) } export default NotFoundPage export const pageQuery = graphql` query { site { siteMetadata { title } } } ` In the code block above, an Interface was defined for the component’s prop and the class component was converted to a functional component. index.js Next up is the index.js file. Change the extension to .tsx and then open the file and edit with the code snippet below. import React from "react" import { Link, graphql } from "gatsby" import Bio from "../components/bio" import Layout from "../components/layout" import SEO from "../components/seo" import { rhythm } from "../utils/typography" interface Props { data: { allMarkdownRemark: any site: { siteMetadata: { title: string } } } } const BlogIndex = ({ data }: Props) => { const siteTitle = data.site.siteMetadata.title const posts = data.allMarkdownRemark.edges return ( <Layout location={window.location} title={siteTitle}> <SEO title="All posts" /> <Bio /> {posts.map(({ node }) => { const title = node.frontmatter.title || node.fields.slug return ( <div key={node.fields.slug}> <h3 style={{ marginBottom: rhythm(1 / 4), }} > <Link style={{ boxShadow: `none` }} to={node.fields.slug}> {title} </Link> </h3> <small>{node.frontmatter.date}</small> <p dangerouslySetInnerHTML={{ __html: node.frontmatter.description || node.excerpt, }} /> </div> ) })} </Layout> ) } export default BlogIndex export const pageQuery = graphql` query { site { siteMetadata { title } } allMarkdownRemark(sort: { fields: [frontmatter___date], order: DESC }) { edges { node { excerpt fields { slug } frontmatter { date(formatString: "MMMM DD, YYYY") title description } } } } } ` As seen in previous code blocks, an Interface was defined for the component’s prop and the class component is now a functional component. blog-post.js For the blog-post.js file in src/components, we’ll have to convert to a functional component and also defined an Interface for the props. Change the extension to .tsx and then open the file and edit with the code snippet below. import React from 'react' import { Link, graphql } from 'gatsby' import Bio from '../components/bio' import Layout from '../components/layout' import SEO from '../components/seo' import { rhythm, scale } from '../utils/typography' interface Props { data: { markdownRemark: any site: { siteMetadata: { title: string } } } pageContext: any } const BlogPostTemplate = ({ data, pageContext }: Props) => { const post = data.markdownRemark const siteTitle = data.site.siteMetadata.title const { previous, next } = pageContext return ( <Layout location={window.location} title={siteTitle}> <SEO title={post.frontmatter.title} description={post.frontmatter.description || post.excerpt} /> <h1 style={{ marginTop: rhythm(1), marginBottom: 0, }} > {post.frontmatter.title} </h1> <p style={{ ...scale(-1 / 5), display: `block`, marginBottom: rhythm(1), }} > {post.frontmatter.date} </p> <div dangerouslySetInnerHTML={{ __html: post.html }} /> <hr style={{ marginBottom: rhythm(1), }} /> <Bio /> <ul style={{ display: `flex`, flexWrap: `wrap`, justifyContent: `space-between`, listStyle: `none`, padding: 0, }} > <li> {previous && ( <Link to={previous.fields.slug} ← {previous.frontmatter.title} </Link> )} </li> <li> {next && ( <Link to={next.fields.slug} {next.frontmatter.title} → </Link> )} </li> </ul> </Layout> ) } export default BlogPostTemplate export const pageQuery = graphql` query BlogPostBySlug($slug: String!) { site { siteMetadata { title author } } markdownRemark(fields: { slug: { eq: $slug } }) { id excerpt(pruneLength: 160) html frontmatter { title date(formatString: "MMMM DD, YYYY") description } } } ` typography.js For this file, we’ll be changing the extension to .ts since we won’t be returning any markup at all. Navigate to the src/utils folder and change the extension to .ts. Now, if you try to run the app at localhost:8000 now, you should see some typing errors. This is because some packages that we use might not come with a type definition. For those, we’ll have to declare these types ourselves in our project. To declare these types, create a node_modules.d.ts in the root directory of the app and then edit it with the code snippet below. // node_modules.d.ts declare module "typography" declare module "typography-theme-wordpress-2016" declare module "react-helmet" This file declares TypeScript modules/types for node modules that don’t yet have TypeScript declaration files of their own. One final thing to do before running the app again would be to edit the gatsby-node.js file. There’s a bit in that file where the path of the component to be used in rendering the blog post page is declared. We’ll need to change the extension of that file from .js to .tsx. Open up the gatsby-node.js file (line 7) and edit with the code below. const blogPost = path.resolve(`./src/templates/blog-post.tsx`) If that’s done, restart the Gatsby app and try checking out the blog at localhost:8000 and it should be working fine now! Conclusion In this article, we looked at Gatsby and it’s usage with GraphQL and TypeScript. Gatsby is a static site generator that ships with features like server-side rendering and being able to generate static Progressive Web Apps. It allows you to build modern and performant sites with HTML, CSS and React. TypeScript is a superset of JavaScript which primarily provides optional static typing, classes, and interfaces. In this article, you saw how to use Interfaces to structure data (props) for components. You also saw how to fetch data in Gatsby apps by using GraphQL, which can either be page queries or StaticQuery. Lastly, in this article, you saw how to use the gatsby-typescript-plugin to integrate TypeScript into the existing codebase. You can find the repository for the Gatsby “How to set up a TypeScript + Gatsby app” Hi i am sorry if this a dumb question, but i am getting a cannot find ‘__PATH_PREFIX__’ in the layout.tsx. Is this something we are supposed to replace with a value, or does gatsby replace this value? When i clone the repo, this value doesnt seem to play well with typescript. Because TSLint is being deprecated, can you please update this article to use ESLint for Typescript? Thank you.
http://blog.logrocket.com/set-up-a-typescript-gatsby-app/
CC-MAIN-2019-43
refinedweb
2,529
57.37
Hi guys, being a beginner I have some trouble with pointers. I don't know but something about using pointers is just not getting clear to me. My assignment requires that from an array of given values, the user inputs a particular integer whose every occurrence with in the array should be deleted and the size of the array should then be reduced respectively. I tried but I am not getting how can I accomplish this with a pointer. Here is the my code: // HW5_Bhasin.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include<iostream> using namespace std; void Erase(int [], int* ,int* ); int main() { char choice; const int max = 5; int a[max]; int itemCount=0, SearchElement=0; cout<<"Please select from the following menu."<<endl; cout<<" E(Erase Array Content)\tC(Count Words)\tR(Reverse Words)\tQ(Quit)"<<endl; cin>> choice; cin.ignore(); switch(choice) case 'E': case 'e': Erase(a, &itemCount, &SearchElement); return 0; } void Erase( int ar[], int *item, int *Search)// this function is suppose to delete every occurrence of the digit input by the user.the item is suppose to keep the count of the array size search is the item to be deleted. { int digit; cout<<"Please enter the number whose all occurrence needs to be deleted."<<endl; cin<< digit; Search = &digit; cout<<"Erase this number"<<endl; return; } Thanks.
https://www.daniweb.com/programming/software-development/threads/155626/deleting-an-index-of-array-using-pointers
CC-MAIN-2018-05
refinedweb
229
62.78
Floor division is a normal division operation except that it returns the largest possible integer. This integer is either less than or equal to the normal division result. Floor function is mathematically denoted by this ⌊ ⌋symbol. Let’s understand this concept through the slides below: Several programming languages have a specific built-in function or operator for calculating floor division. floor()method is used in C++ Math.floor()is used in Java //operator is used in Python #include <iostream> //include math library #include <cmath> using namespace std; int main() { // floor() method is used. cout << "Floor of 36/5 is " << floor(36/5) << endl; return 0; } RELATED TAGS View all Courses
https://www.educative.io/answers/floor-division
CC-MAIN-2022-33
refinedweb
110
56.66
Internet Speed Test using Python Hello there, I am attending a Technical Writing Bootcamp at @hashnode. Hashnode Bootcamp III is a free virtual Bootcamp to help beginner technical writers to improve their writing skill. This article is inspired by the latest session by Sam Julien, @samjulien TASK: Write 1 TIL format blog post on Hashnode to test out your own content system from drafting all the way to promoting. Running a speed test can be very useful to verify the current state of an internet connection. In this Blog article, we will learn how to Test Internet Speed. We will see the implementation in Python. Check out the Repository for Ultimate Resource in python. Drop a star if you find it useful! Got anything to add? Open a PR on the same! Let's get started. What will be covered in this Blog 1. speedtest Introduction 2. How to Test internet speed Speedtest Introduction: The internet connections in our homes and offices can differ by internet service providers (ISPs), allowable traffic limit, and most importantly speed. So what do you usually do when you want to test the speed our connection? You google it, right? How about testing the internet speed using Python from your machine! Speedtest cli library provides Command line interface for testing internet bandwidth using speedtest.net If you wish to know more about it, you can refer to Speedtest Documentation. Use this link to navigate to the documentation. Now that you are aware of Speedtest basics, we can move forward to the coding section. Let's get started! Time to Code! You can find all the code at my GitHub Repository. Drop a star if you find it useful. In order to access the Python library, you need to install it into your Python environment, use the following command to install Speedtest pip install speedtest-cli Now, let's import the package in our Python script. import speedtest Let's create an instance of Speedtest and call it st st = speedtest.Speedtest() Let's move forward and check for download speed. We will make use of download method to fetch the speed and store in d_st d_st = st.download() Similarly, to check for the upload speed, we will make use of upload method to fetch the speed and store in u_st u_st = st.upload() Once done, let's display the download and upload speed. print("Your Download speed is", d_st) print("Your Upload speed is", u_st) Let's have a look at the output: Your Download speed is 4786516.020362539 Your Upload speed is 851493.59380959 It will look something like this. Here's the download and upload speed in bits. BONUS: Let's check for ping. We can do so by making use of the following command. st.get_servers([]) Let's fetch the ping and store it in ping, we will make use of results.ping for the same. ping = st.results.ping #display the result print("Your Ping is", ping) Let's display the ping using Your Ping is 50.846 This is how you can test your Internet speed. That's it! Simple, isn't it? Hope this tutorial has helped.You can play around with the Speedtest library and even explore more features. You can even make use of Python GUI using Tkinter. You can find all the code at my GitHub Repository. Drop a star if you find it useful..
https://ayushi7rawat.hashnode.dev/internet-speed-test-using-python?guid=none&deviceId=ac498df2-685e-4926-8c0b-0daf1c796938
CC-MAIN-2021-31
refinedweb
569
76.72
The following thread appeared in the forum, pointing out that the current drawn through the VB pin (RTC battery back up pin) was observed to be a lot higher than expected. Specifically, the report was 175uA, whereas the datasheet for the LPC1768 suggests it should be around 27uA. The forum thread in question is here : The first experiment was to make some measurements of my own to confirm what was being seen. I connected the VB pin of my mbed to 3.3v supply via 1k resistor, as the original poster had. This gives me a drop of 1mV per uA, so at the reported 175uA I should be seeing 175mV drop across the resistor - easily enough to measure. The results came in : This was nearly twice the figures seen by the original poster! Seeing the numbers above triggered a little investigation. The original mbed design was based on the LPC2368, and very little changed on the design when we moved to the LPC1768, as it was essentially a pin for pin replacement. On the LPC2368, the RTC was powered completely separately. If the RTC was required, Vbat had to be powered, either from the main 3.3v supply, or a battery. Not wanting to run down a battery uneccessarily, a pair of ultra low Vf diodes were put on the board. Expecting a 3.0v coin cell to be used, the arrangement would allow the RTC to be powered from the main 3.3v supply while it was present, and then switch to the 3v battery when the main supply was removed. The part of the circuit in question is shown below. The side effect of this is that when the main power is removed, current will leak through D3 and sink into the rest of the circuit. This isnt enough to power it, but it is enough to notice a significant increase on the expected 27uA. However, in the newer LPC1768 design this power switching is now built into the MCU itself. If power to the rest of the device is present the RTC will select the main supply to conserve the battery. When the main supply is not present, the RTC will source from the Vbat pin. Essentially, the diode circuit above is no longer needed. For someone using the mbed NXP LPC1768 and wanting back up the RTC with a coin cell, the easiest fix is to simply remove D3. This will mean there is no path to leak current from the battery back to the main supply rail. Taking a look at the UNDERSIDE of the PCB, D3 is near the at the bottom, and is the diode closest to the "Pb" symbol, highlighted in bright white. On future build of mbed this diode will not be fitted at the manufacturing stage, For now, it can be easily removed if you are confident with a soldering iron. If you intend to give this a try, here is the method I used: After Sucessfully removing D3, it is time to check the result. Using the same test circuit, a 1k resistor to give 1mV per micrcoamp, we measure the current drawn through VB both with and without the USB power plugged in. With USB power applied the current draw is down to 20uA, and this goes up to 40uA when USB power is removed. For now this is massive saving, but some further investigation with some higher accuracy measurements is needed. function In the mean time, it is necessary to check the function of the RTC. I wrote a very simple test program to set the RTC, then print it's value over USB to a terminal in a loop. #include "mbed.h" int main() { while (1) { time_t seconds = time(NULL); printf("It is %d seconds since January 1, 1970\n", seconds); wait (1.0); } } With the test program compiled and running, I took the following steps: The initial results measured above were a good first approximation, but some better measurements were needed. So the tests were repeated using a 10k resistor, giving 0.1uA/mV sensitivity, which led to the following results: Encouraged by this, I repeated again with a 100k resistor, giving 0.01ua/mV So it seems that upping the sensitivity of the reading has made us converge on the figures, which are 1.4uA when the LPC1768 is powered, and 4.3uA from the battery. This is still a lot higher than the 0.8uA quoted in the NXP datasheet, but in the abscence of high precision, high sensitivity measuring equipment, the measurements are enough to satisfy us that we can expect a long backup time from a coin cell. Taking another look at the circuit diagram, you'll spot that a second diode was needed inthe original circuit. With D3 removed, D4 no longer needs to be a diode, but it is doing any harm? The assessment here is that it is not necessary to have a diode there, and in the future, we may be able to fit D4 with a zero ohm link. In the mean time, the effect of the diode is that the Vbat pin of the LPC1768 is being supplied at one diode drop below VB. The diode used was chosen for its ultra low Vf characteristic, the data sheet can be found at : Looking at the If vs Vf graph on page 3, at 25^C which I take as room teperature, a forward current of 30uA (3 x 10^-5) we can expect a Vf of less than 100mV, I read it as about 30mV. So while the diode isn't necessary, it appears from the datasheet that it isnt doing any harm. The higher than expected current drain through Vb is due to leakage in a circuit that was neccessary for the LPC2368 mbed, but it no longer needed withthe LPC1768. Future mbed builds will not have this diode fitted, but for those who wish to reduce thier Vb consumption and are competant with a soldering iron, it is a fairly straight forward operation. Please log in to post a comment. I looked at the drawing and the board and was puzzled for a while before I realized I should look at the bottom of the board. You might want to mention that.
http://mbed.org/users/chris/notebook/rtc-battery-backup-current/
crawl-003
refinedweb
1,050
77.16
CodePlexProject Hosting for Open Source Software I will officially give someone a cookie if they can get Json.Net into WinRT. I will when VS2012 is RTM, although that won't be until halfway through next year. You're welcome to do it yourself and submit the source code. Beyond changing some namespace declarations and some changes in class names there's not much of a diff from Pre Metro to post Metro. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://json.codeplex.com/discussions/273145?ProjectName=json
CC-MAIN-2013-48
refinedweb
109
84.98
== operator.... Ganeshkumar cheekati Ranch Hand Joined: Oct 13, 2008 Posts: 362 posted Dec 11, 2008 03:03:00 0 SOURCE: class MWC109 { public static void main(String args[]) { String a = "A", b = "B", c = a+b, d = a+b; System.out.print(([B](a+b)==(a+b)[/B]) + ","); System.out.print([B](c==d)[/B] + ","); System.out.print(c.equals(d)); }} output is:false false true a,b,c,d are the objects which referenced are in string constant pool.. so c and d are refering to the same object as well as (a+b) also refers to the same object.. then why 1st and 2nd is false....? SCJP5 and SCWCD1.5 Think Twice Act Wise... Punit Singh Ranch Hand Joined: Oct 16, 2008 Posts: 952 posted Dec 11, 2008 03:17:00 0 Means c = a+b, d = a+b both time new object created by JVM, so they are different. SCJP 6 Ganeshkumar cheekati Ranch Hand Joined: Oct 13, 2008 Posts: 362 posted Dec 11, 2008 03:35:00 0 but after creating object d it check whether it is available in string constant pool or not it is available so object d also pointing to the same which is referenced by object c.javascript: x() James Tharakan Ranch Hand Joined: Aug 29, 2008 Posts: 580 I like... posted Dec 11, 2008 03:47:00 0 I am not sure to what extent i am right. But i think i will share it. when we say, c=a+b, it will call, concat() of string class or append() of StringBuffer class (internally it would look like this:- new StringBuilder ().append(a).append(b).toString(); ) ( not sure which one of these) Where in both of them returns a new string. [ December 11, 2008: Message edited by: James Tharakan ] [ December 11, 2008: Message edited by: James Tharakan ] SCJP 6 Why to worry about things in which we dont have control, Why to worry about things in which we have control ! ! Himanshu Gupta Ranch Hand Joined: Aug 18, 2008 Posts: 598 I like... posted Dec 11, 2008 04:44:00 0 Whenever we make a String object by doing some operation on the Strings then we will always end up with new String Object. String class has overridden hashCode() and equals() method so meaningfully the String objects may be equal and they will return the same hash code but then also they are situated at different memory location in the Objcet heap.. src The following example which I prepared by altering the above question will simplify my point. public class MWC109 { public static void main(String args[]) { String a = "A", b = "B", c = a+b, d = a+b; System.out.println("a+b hashcode: " + (a+b).hashCode()); System.out.println("c hashcode: " + c.hashCode() ); System.out.println("d hashcode: " + d.hashCode()); System.out.println(); System.out.println("c system hash code: " + System.identityHashCode(c) ); System.out.println("d system hash code: " + System.identityHashCode(d) ); System.out.println(); for (int i =0; i<10; i++){ System.out.println(i + ") a+b System HashCode: " + System.identityHashCode((a+b))); } System.out.println(); String str1 = "POps"; String str2 = "POps"; System.out.println("str1 System HashCode: " + System.identityHashCode(str1)); System.out.println("str2 System HashCode: " + System.identityHashCode(str2)); System.out.println("str1 hashcode: " + str1.hashCode()); System.out.println("str2 hashcode: " + str2.hashCode()); System.out.println(); for (int i=0; i<10; i++){ System.out.println(i + ") str1+str2 System HashCode: " + System.identityHashCode(str1+str2)); } System.out.println(); System.out.print(((a+b)==(a+b)) + ","); System.out.print((c==d) + ","); System.out.println(c.equals(d)); }} The output of the above code is a+b hashcode: 2081 c hashcode: 2081 d hashcode: 2081 c system hash code: 4072869 d system hash code: 1671711 0) a+b System HashCode: 11394033 1) a+b System HashCode: 4384790 2) a+b System HashCode: 9634993 3) a+b System HashCode: 1641745 4) a+b System HashCode: 11077203 5) a+b System HashCode: 14576877 6) a+b System HashCode: 12677476 7) a+b System HashCode: 33263331 8) a+b System HashCode: 6413875 9) a+b System HashCode: 21174459 str1 System HashCode: 827574 str2 System HashCode: 827574 str1 hashcode: 2462786 str2 hashcode: 2462786 0) str1+str2 System HashCode: 17510567 1) str1+str2 System HashCode: 27744459 2) str1+str2 System HashCode: 28737396 3) str1+str2 System HashCode: 6927154 4) str1+str2 System HashCode: 24355087 5) str1+str2 System HashCode: 5442986 6) str1+str2 System HashCode: 10891203 7) str1+str2 System HashCode: 9023134 8) str1+str2 System HashCode: 19336051 9) str1+str2 System HashCode: 6336176 false,false,true So it is visible that whenever we make some String object by doing some alteration we will always end up in a new Object in heap. So the system HashCode will be different for them as no two Objects can share the same memory. The hashcodes we get from the String Objects return the hashcode generated by this function as it is overridden in String class. public int hashCode() { int h = hash; if (h == 0) { int off = offset; char val[] = value; int len = count; for (int i = 0; i < len; i++) { h = 31*h + val[off++]; } hash = h; } return h; } To get the actaul hashcode we used the static function identityHashCode() of System class. The system hashcode for str1 and str2 are same as those objects were created without any alteration on any of the objects. My Blog SCJP 5 SCWCD 5 I agree. Here's the link: subject: == operator.... Similar Threads String constant pool?? string question .explanation pls Inherited Method Calls from Mock New challenge equals == hashcode(m confused) final String All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/418197/java-programmer-SCJP/certification/operator
CC-MAIN-2015-22
refinedweb
952
60.85
. Python and Microsoft Excel If you’ve looked for examples of using Python and Office, you’ll usually find that the most often hacked component is Excel. In fact, there are several non-PyWin32 modules specifically created to read and write Excel files. They are called xlrd and xlwt, respectively. But that’s a topic for another article. Here we’ll see how to mess with Excel using the PyWin32 interface. Note that the following scripts only work on Windows. One advantage of xlrd and xlwt is that you can use them on any platform. Let’s take a look at a simple example, shall we? import time import win32com.client as win32 #---------------------------------------------------------------------- def excel(): """""" xl = win32.gencache.EnsureDispatch('Excel.Application') ss = xl.Workbooks.Add() sh = ss.ActiveSheet xl.Visible = True time.sleep(1) sh.Cells(1,1).Value = 'Hacking Excel with Python Demo' time.sleep(1) for i in range(2,8): sh.Cells(i,1).Value = 'Line %i' % i time.sleep(1) ss.Close(False) xl.Application.Quit() if __name__ == "__main__": excel() The example above is similar to what you will normally find on the web. It is actually based on an example I saw in Wesley Chun’s excellent book, Core Python Programming. Let’s take some time and unpack the code. To gain access to Excel, we import win32com.client and then call its gencache.EnsureDispatch, passing in the application name that we want to open. In this case, the string to pass is “Excel.Application”. All that does is open Excel in the background. At this point, the user won’t even know Excel is open unless they have Task Manager running. The next line is adds a new workbook to Excel by calling that Excel instance’s “Workbooks.Add()” method. This returns a sheets object (I think). To get the ActiveSheet, we call ss.ActiveSheet. Finally, we make the Excel program itself visible by setting that property to True. To set a specific cell’s value, call something like this: sh.Cells(row,col).Value = “some value”. Note that our instance is NOT zero-based and will actually put the value in the correct row/col combo. If we want to extract a value, we just remove the equals sign. What if we want the formula? To figure this our, I recorded a macro in Excel and did a Paste Special command that only pasted the Formula. Using the code generated, I figured out that to get the formula in Python, you just do this: formula = sh.Cells(row, col).Formula What if you need to change which sheet you’re on? Recording a macro also showed me how to accomplish this feat. Here’s the VBA code from Excel: Sub Macro1() ' ' Macro1 Macro ' Sheets("Sheet2").Select End Sub From this code, I gathered that I needed to call my sheets object’s “Sheets” method and after a little fiddling, I got it to work by doing the following: sheet2 = ss.Sheets("Sheet2") Now we have a handle on the second sheet in the workbook. If you want to edit or retrieve values, just prepend the same methods you used above with whatever you called the sheet2 instance (i.e. sheet2.Cells(1,1).Value). The last two lines from the original program will close the sheets and then quit the entire Excel instance. You may be thinking that so far all I’ve shown is how to create a new document. What if you want to open an existing file? Just do something like this at the beginning of the code: xl = win32.gencache.EnsureDispatch('Excel.Application') ss = xl.Workbooks.Open(filename) And there you have it! You now know the basics of hacking Excel with Python using Excel’s COM object model. If you need to learn more, I recommend trying to record a macro and than translating the result into Python. Note: I was unable to find an example that could save the spreadsheet…there are several examples that claim they work, but they didn’t for me. Python and Microsoft Word Accessing Microsoft Word with Python follows the same syntax that we used for Excel. Let’s take a quick look at how to access Word. from time import sleep import win32com.client as win32 RANGE = range(3, 8) def word(): word = win32.gencache.EnsureDispatch('Word.Application') doc = word.Documents.Add() word.Visible = True sleep(1) rng = doc.Range(0,0) rng.InsertAfter('Hacking Word with Python\r\n\r\n') sleep(1) for i in RANGE: rng.InsertAfter('Line %d\r\n' % i) sleep(1) rng.InsertAfter("\r\nPython rules!\r\n") doc.Close(False) word.Application.Quit() if __name__ == '__main__': word() This particular example is also based on something from Chun’s book as well. However, there are lots of other examples on the web that look almost exactly like this too. Let’s unpack this code now. To get a handle on the Microsoft Word application, we call win32.gencache.EnsureDispatch(‘Word.Application’); then we add a new document by calling the word instance’s Documents.Add(). If you want to show the user what you’re up to, you can set the visibility of Word to True. If you want to add text to the document, then you’ll want to tell Word where you want the text to go. That’s where the Range method comes in. While you can’t see it, there is a “grid” of sorts that tells Word how to layout the text onscreen. So if we want to insert text at the very top of the document, we tell it to start at (0,0). To add a new line in Word, we need to append “\r\n” to the end of our string. If you don’t know about the annoyances of line endings on different platforms, you should spend some time with Google and learn about it so you don’t get bit by weird bugs! The rest of the code is pretty self-explanatory and will be left to the reader to interpret. We’ll move on to opening and saving documents now: # Based on examples from word.Documents.Open(doc) word.ActiveDocument.SaveAs("c:\\a.txt", FileFormat=win32com.client.constants.wdFormatTextLineBreaks) Here we show how to open an existing Word document and save it as text. I haven’t tested this one fully, so your mileage may vary. If you want to read the text in the document, you can do the following: docText = word.Documents[0].Content And that ends the Python hacking lesson on Word documents. Since a lot of the information I was finding on Microsoft Word and Python was old and crusty and didn’t seem to work half the time, I don’t add to the mess of bad information. Hopefully this will get you started on your own journey into the wild wonders of Word manipulation. Further Reading - Python for Windows Examples - Opening MS Word files via Python thread - Recipe 279003: Converting Word documents to text - Dzone: script excel from Python - Python-Excel website, mailing list
https://www.blog.pythonlibrary.org/2010/07/16/python-and-microsoft-office-using-pywin32/
CC-MAIN-2021-31
refinedweb
1,186
66.74
The select package While tinkering on a project, I frequently found myself having to make FFI calls to select(2). This package provides an interface to that system call.. TODO: Return the error code specified by errno if select fails. Possibly move the select retry code from C to Haskell. Reorder functions to make the documentation more logical. Use hsc2hs's #const to get preprocessor constants (removes some of FdSet's cbits.c). NOTE: I feel I'm occupying prime namespace realestate with a package name like select. I'll happily let myself be chased away if someone more qualified wants to use this package name. Let me know. - No changelog available Properties Modules Downloads - select-0.4.tar.gz [browse] (Cabal source package) - Package description (included in the package) Maintainers' corner For package maintainers and hackage trustees
https://hackage.haskell.org/package/select-0.4
CC-MAIN-2015-11
refinedweb
138
59.7
. . . . .2. . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . .4. . . . . . . . 2. . . . . . . .2 2. . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SMTP Preprocessor . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . Performance Monitor . . . . . . . . .11 SSL/TLS . . . . . . . . . . . . . . . . . . .9.4 2. . .9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . .4. . . . . . . . . . . . . . . . Obfuscating IP Address Printouts . . . . .3 2. . 101 Event Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . .1 2. . . . . . . . . . . . . .2. . . . . . . .9. . . . . . . . . . . . . . . . . . . . .4 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Running in Rule Stub Creation Mode .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 2. . 2. . . . . 2. .9 Frag3 . . . .2 Format .13 DCE/RPC 2 Preprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2. . . . . . . . . 103 Event Suppression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sfPortscan . . . . . . . . . . . . 2. . . . . . .2 2. . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Variables . . . . . .1 Includes . . 108 3 . . . . . . . . . . . . . . . . . . . Config . 23 23 24 24 24 25 26 26 26 26 29 37 38 41 46 51 52 55 66 69 76 77 78 80 80 94 96 99 1. . . . . . . . SSH . . . . . . . . . . . . . . . . . . . . . . . .3 Decoder and Preprocessor Rules . Stream5 . FTP/Telnet Preprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . . . . . . .2 Rule Profiling .5 Performance Profiling . . . . . . . . . . . . . .5. . . . . . . . . . . .5 Running Snort as a Daemon . RPC Decode . . . . .5 2. . . . . . . . . . .4 Configuring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 1. . . . . . . . . . . . . . 100 Event Processing . . . . . . . 106 2. . . . . . . . . . . . . . . . . . . . . . . . . . 105 Event Logging . . . . . .2. . . . . .14 Sensitive Data Preprocessor .10 DNS . . . . . . . . . Specifying Multiple-Instance Identifiers . 2. . . . . . . . . . . . . . . . 100 Reverting to original behavior . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . .1 1. . . . . . . . . . . . . 2. . . . . . . . . . . . . . . . . . .1. 2. . . . . . . . . . . . . . .2 2. . . . . . . . . . . . . . . . . . 2. .6 2. . . . . . . . . .10 More Information . . . . . 107 2. . . . . 101 2. . . . . . .3 1. . . . . . . . .2. . . . . . . . . . . . . . . .2. Snort Modes . . . .1. . . . . . . . . . . . HTTP Inspect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 2. . . . . .9. . . .12 ARP Spoof Preprocessor . . . . . . . . . . . . . . . . . .8 2. . . . . . . . . . . .1. . . . . . . . . . . . . . . . . . . . . .9. . . . . . . . . . . . . . . . . . . .1 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 2. . .2. . . . . . . . . . . . . . . . . . . . . . . . .2. . . 2. . . . . . . . . . . . . . . . . . . . . . . . . . . .1 2. . . . . . . . . . . . . . . . . . . . . . .15 Normalizer . . .1 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 2. . . . . . . . . . . . . . . . . . .5. . . . . . . .4 Rate Filtering . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Configuring Snort 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Preprocessor Profiling . . . . . . . Preprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.2 2. . . . . . . . 127 2. . . . . . . . . . . 133 2. . . . . . . . . . .2 Format . 130 2. . . . . . . . . . . . . . . . .10. . . . . . . . . . . . . . . . . . 132 2. . . . . . . . . . . . . .2 Configure Sniping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . .7. .6. . . . . . . . . . . . . 134 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 2. . . . .9 alert syslog . . . . . . . . . . . . . . . . . . . . . . . . . .8. . . . 133 2. . . . 119 unified . . . . . . . . . . .6.6. . . . . . . . . . . .3 Enabling support .11. . . . . . . . . . . . . . . . . . . . . . .5 Rule Actions .13 Log Limits . . . . . 122 2. . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . 133 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 2. . .3 2. . . .6. . .6. . . . . . . . . . . . . 111 Output Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 How Configuration is applied? . . . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . . . . . . . . . . . . . .1 2. . . . . . . . . . . . . . . .6. .3 2. . . . . . . . . . . . . . . . . . . . . . . . . . 120 unified 2 . . . . . . . . . 123 Attribute Table Example . . . . .11 log null . . . . . . . . . . . . . . . . . .1 Enabling Active Response .6. . . . . . . . . . 127 Directives . . . . .11.8 Configuration Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . .11. . . . . . . . . . . . . .6 2. . . . . . . . . . . . . . .1 Creating Multiple Configurations . . . . . . . . . . . . . . . .8 2. . . . . . . . . . . . . . . . . . . . . . . .11 Active Response . . . . . . . . 117 database . . . . . .4 2. . .10. . . . . . . .9. . . . . . . . . . . . . . . . 122 2. . . . .12 alert aruba action . . . . . 130 2. . . .1 2. . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . .4 React . . . . . . . . . . . . . 116 alert full . . . . .9 Reloading a Snort Configuration . . . 128 Reloading a configuration . . . . . . . . . . . . . . . . .2 Configuration Specific Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 2. . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Packet Performance Monitoring (PPM) . 132 2. . . . . . . . . .3 Flexresp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 Attribute Table File Format . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . .10 alert prelude . . . . . . . . . . . . . . . . . .11. . . . . . . . .6. . . .11. . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 alert fast . . . . . . . . 120 2. . . . . . . . 117 log tcpdump . . . . . . . . . 125 Dynamic Modules . . . . . . . . . . . . . . . . . . . . . . . .5 2. . . . . . . . . . . . . . . . . . . . . . . . . 123 2. . . 114 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10. . . . . 127 2. . . . . .6. . . .1 2. . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . . . . 128 Non-reloadable configuration options . . . . . . . . . . . . . .9. . . . . . . . . . . . . . . . . . 117 csv . . 130 2. . . . . . . . . . . . . . . 116 alert unixsock . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . .10 Multiple Configurations . . . . . . . . . . . . . . .7 2. . . . . . 132 2. . . . . . . . . .7. . . . . . . . . . . . . . . . . . . . .9. . . . . . . . . . . . . . . . . . . . . .7 Host Attribute Table . . . . . .8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 2. . . . . . . . . . . 121 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 http raw cookie . . . .5. . . . . . 140 rev . . . . . . . . . . . . 143 General Rule Quick Reference . . . . . . . . . .5. . . . . . .5. . . . .5. . . . . . 135 Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . 152 5 . .2 3. . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. . 150 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 Port Numbers . . . . . . . . . . . . . . . . . . . . . . .4 3. . . . . . . . . . .3 3. . .5. . .4 3. . . . .2. . . . . . . 141 classtype . . . . . . . . . 138 3. . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . .6 3. . . . . . . 135 Rules Headers . . . . . . . . . . .4. . . .5. . . . . . . . . . . . . . . .3 3. . . . . . . . . . . . . . . . . . . . . . . . . . . .2 135 The Basics . . .5 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12 http raw header . . . . . . . . . . . . 139 3. . . . . . 149 3. . . . . . . . . . . 147 within . . . 148 3. . 138 General Rule Options . . . . . . . .3 3. . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 nocase . . . . . .8 3. . . . . . .9 content . . . . . . . . . . 139 gid . . . . . . . . . . . . . . . . . . . . 140 sid . . . .17 http stat msg . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . .3 Writing Snort Rules 3. . . .6 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . .13 http method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . .7 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Rule Options . . . . . . . . . . .16 http stat code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 3. . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Activate/Dynamic Rules . . . . . . . . . . . . . . . . . . . . . . . . .5.9 msg . . . . . . . . . . . .11 http header . . . . 149 3. . . . . . . . . . 137 The Direction Operator .3 3. . . . . 145 depth . . . . . . . . . . . . . . .5. . . .15 http raw uri . . . . . . 143 3. . . . . . . . . . . . .5. . . . . . . . . . . . . .7 3. . . . . . . . . . . . . . . . . . . . 146 distance . . . . .5 3. . . . . . . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . 148 http cookie .1 3. . . . . . . . . . . . . . . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . . .5 Payload Detection Rule Options . .4. . . . .4. . . . . . . . . . . . .5. . . 150 3. . . . . . . . . . . . . . . .1 3. . . .2. .5. . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 3. . . . . 144 3. . . . . . . . . . . . . . . . . . . . . . . . . . 145 rawbytes . . . . . . . . . 147 http client body . . . . . . . . . . . . . .8 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . 141 priority . . . . . . . . . . . . . . . . . . . . . . . . . . .5 3. . . . . . . . . . . . . . . . 139 reference . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . .14 http uri . . . . . . . . . . . . . . . . . .2 3. 142 metadata . . . . . . . . . . . . . 146 offset . . . . . . . . . . . . . 135 3. . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Rule Actions . . . . . . . . . . . . . . . . . . . . . . . . 152 3. . . . . . . . . . . . . . .18 http encode . . . . . . . . 136 IP Addresses .2. . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . .2 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . .4 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . 172 6 . . . . . . . . . . . . . . 157 3. . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 byte test . . . . . . . . . . .30 ftpbounce . . . . . . . . . . . . . . . . . . . . . . . . . . . .23 pcre . . . . . . . . 153 3. . . . . . . .3. . . . . . . . . . . . . . .6. . . 167 fragbits . . . .20 uricontent . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14 itype . . . . . . . . . . .1 3. 164 3. . . . . . . .31 asn1 . . . . .22 isdataat . . . . .33 dce iface . . . . . . . . . . . . . . . . . .3 3. . . . . . 169 3. . . . . . . . 163 3. .37 ssl state . . . . . . 167 dsize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 flowbits . . . . . 170 3. . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 3. . . . . . 158 3. . . . . . . . . . . . 164 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5.6. . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 id . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . .11 seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21 urilen . . . . . . . .2 3.5. . . . . . . . . . . . . . . . . . . . . . . . . .6.8 3. . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . .6 3. . .5. . . . . . . . . .5. . . . .34 dce opnum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . 164 3.6. 160 3. . . .15 icode . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . 154 3. . . . . . . . . . . . . . . . . . . . . . . 164 3. . . . . . . . . . . . . . . . .28 byte jump . . . . . . . . . . . 168 flow . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16 icmp id . . . . . . . . . . . .5.6. . . . . . . 162 3. . . . . . . . . . . . . . . . . . . . . . . .29 byte extract . . . . . . . . . .25 base64 decode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 3. . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . 163 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . .35 dce stub data . 165 ttl . . . .19 fast pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32 cvs .18 rpc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17 icmp seq . . . . . . . . . . . . . . . . . . . . . .36 ssl version . . . . .5. . . . . . . . 171 3. . . . . . . . . . . 166 ipopts . . . . . . . .6 Non-Payload Detection Rule Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . .6. . 155 3. . . . . . . . . . .13 window . . . . . . . . . . . . 170 3. . . .6. . . . . . . . . . . . . . . . . . 156 3. . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . .24 file data . . . . . . . . . . . . . . . . . . . .5. . . . . . 168 flags . . 171 3. . . . . . . . . . . . . .5. . . . . . . . .12 ack . . . .6. . . . . . . . . . . 170 3. 171 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 base64 data .7 3.5. . . . . 164 3. . . . . . . . . . 169 3. . . . . . . . . . . . . . . . .5 3. . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 3. . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 3. . . 156 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 fragoffset .5. . . . . . . . . . . . . . . . . . .38 Payload Detection Quick Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 3. . . 165 tos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . . . . . . . 179 Testing Numerical Values . . . . . . . . .1. . . . . . . .9. . . .7. .7. . . . . . . . . . . . . . . . 177 Writing Good Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 Rule Thresholds . . . . . . . . . . 191 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 react . . . . . . . . . . . . . . . . . . . . . . . . . 172 3. .2 3. . . 176 3. . . . . . . . . . .3 3. . . .6. . . . . . . . .1. . . . . . 193 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 Detection Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 3. . . . . . . . . . .2. . 176 3. . . . . . . . . . . . . . . . . . 174 resp .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 Catch the Vulnerability. . . . . . . . 174 3. . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Post-Detection Rule Options . . . . . . . . . . . . . . .7. . . . . . . . . . .1 4. . . . . . . 191 4. . . . Not the Exploit . . . . . . . .1. . . . . . . . . . . .19 ip proto . . . . . . .22 stream size . . . . . . .6 3. . . . . . . . . . . . . . . . . 177 3. . . . . . . . . . . .2. . . . . 180 183 Dynamic Modules 4. . . . . .7. . . .3 4. . . . . . . . . . . . . . . . . . . . . .3 Preprocessors . . . . . . . .5 DynamicPluginMeta . . . . 173 3. . . . . . . . . . . . .10 detection filter . . . . . . . . . .1 4. .1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . 191 Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 4. . .7. . . . . . . .1 4. . . . . . . . . . . . . .7 3. . . . . . . 176 replace . . . . . . . . 176 count . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . . .3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 sameip . . . . . . . . . . . .5 3. . . . . . . . . . . . .9.4 3. . . . . . . . . . . . . . . . . . . . .21 stream reassemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23 Non-Payload Detection Quick Reference . . . . . . . . . . .4 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 Dynamic Rules . . 183 4. .6. . . 176 activated by . . . . 175 tag . . . . . . 173 3. . . . . . . . . . . . . . . . . . . . . . 183 DynamicPreprocessorData . . . 173 3. . . . . . . . . . . . . . . . . . . . .7. .2 3. . . . . . . . .2 4. .2 Required Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . 175 activates . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . .9. . . . . . . . . . . . . .9. . .9. . . . . 184 4. . . . . . . . . . . . . . . . . . . . 184 SFSnortPacket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 4. . . . . 178 Catch the Oddities of the Protocol in the Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 4 Content Matching . . . . . . . . . . . . . . 190 Rules . . . . . . . . . . . . . . 178 Optimizing Rules . . .7. . . . . . . . . . . . . . . . . . . . . 183 DynamicEngineData . . . . . . . . . . . .2 Preprocessor Example . . . . . . . .3. .7. . . . . . . . . . . . . . . .11 Post-Detection Quick Reference . . . . .8 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . .1. . . . . . . . . . . . . .3 3. .7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 3. . . . . . . . . . . . . . . . . . . . .1 Data Structures . . . .9 logto . . . . . . . .2.8 3. . . . .4 4. . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 5. . . . . . . . . . . . . . . . . . . . . . . 196 Detection Plugins . . . .5 Snort Development 5. . . . . . . . . . . 197 8 . 196 5. . . . . . . . . . .2. . . . . . . . . . . . . .2 196 Submitting Patches . . . . . . . . . . . . . . . . . . . . .1 5. . . . . . . . . . . . . . . . .2 5. . . . . . . . . . . 196 Output Plugins . .2. . . . . . . . . . . . . .3 Preprocessors . .1 5. . . . . . . .3 The Snort Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Snort Data Flow . . . . . . . . . . . showing the data link layer headers.org>. sniffer mode). Small documentation updates are the easiest way to help out the Snort Project. there are a few basic concepts you should understand about Snort. • Packet Logger mode. which allows Snort to analyze network traffic for matches against a user-defined rule set and performs several actions based upon what it sees. drop us a line and we will update A it./snort -v This command will run Snort and just show the IP and TCP/UDP/ICMP headers. try the following: . This file aims to make using Snort easier for new users. which simply reads the packets off of the network and displays them for you in a continuous stream on the console (screen).1 Getting Started Snort really isn’t very hard to use./snort -vd This instructs Snort to display the packet data as well as the headers. Snort can be configured to run in three modes: • Sniffer mode. but there are a lot of command line options to play with. and it’s not always obvious which ones go together well. If you just want to print out the TCP/IP packet headers to the screen (i. If you would like to submit patches for this document. let’s start with the basics. try this: .2 Sniffer Mode First. If you want an even more descriptive display. you can find the latest version of the documentation in LTEX format in the Snort CVS repository at /doc/snort_manual.org> and now is maintained by the Snort Team. It was then maintained by Brian Caswell <bmc@snort.tex. nothing else. Before we proceed. 1. If you have a better way to say something or find that something in the documentation is outdated. 1.Chapter 1 Snort Overview This manual is based on Writing Snort Rules by Martin Roesch and further work from Chris Green <cmg@snort.e. which logs the packets to disk. If you want to see the application data in transit. the most complex and configurable configuration. do this: 9 . • Network Intrusion Detection System (NIDS) mode. /log.1./log Of course. not just sections of it./log -h 192. you can read the packets back out of the file with any sniffer that supports the tcpdump binary format (such as tcpdump or Ethereal). Additionally.1. you need to tell Snort which network is the home network: .168. these switches may be divided up or smashed together in any combination.0 class C network./snort -dev -l . you can try something like this: 10 . you need to specify a logging directory and Snort will automatically know to go into packet logger mode: ./snort -dev -l . which eliminates the need to tell it how to format the output directory structure../snort -vde (As an aside. but if you want to record the packets to the disk. Binary mode logs the packets in tcpdump format to a single binary file in the logging directory: .3 Packet Logger Mode OK.168. all of these commands are pretty cool. Packets from any tcpdump formatted file can be processed through Snort in any of its run modes. with the directory names being based on the address of the remote (non-192. if you wanted to run a binary log file through Snort in sniffer mode to dump the packets to the screen./snort -d -v -e and it would do the same thing. Once the packets have been logged to the binary file. in the case of a tie.1) host. the source address.) 1. All incoming packets will be recorded into subdirectories of the log directory. Snort will exit with an error message. We don’t need to specify a home network any longer because binary mode logs everything into a single file./snort -l . If you just specify a plain -l switch. In order to log relative to the home network. it collects every packet it sees and places it in a directory hierarchy based upon the IP address of one of the hosts in the datagram. The last command could also be typed out as: . and you want to log the packets relative to the 192. ! △NOTE both the source and destination hosts are on the home network.168. If you don’t. this assumes you have a directory named log in the current directory. you should consider logging in binary mode. you don’t need to run in verbose mode or specify the -d or -e switches because in binary mode the entire packet is logged. Snort can also read the packets back by using the -r switch../log -b Note the command line changes here.0/24 This rule tells Snort that you want to print out the data link and TCP/IP headers as well as application data into the directory .. which puts it into playback mode. they are logged to a directory Note that if with a name based on the higher of the two port numbers or. When Snort runs in this mode. For example. Sends alerts to a UNIX socket that another program can listen on. For example.log You can manipulate the data in the file in a number of ways through Snort’s packet logging and intrusion detection modes.4. if you only wanted to see the ICMP packets from the log file. as well as two logging facilities. Generates “cmg style” alerts. syslog. console. try this: ./snort -dvr packet. ./snort -d -h 192. These options are: Option -A fast -A full -A -A -A -A unsock none console cmg Description Fast alert mode.conf where snort.1 NIDS Mode Output Options There are a number of ways to configure the output of Snort in NIDS mode. 11 . fast. The screen is a slow place to write data to. and packets can be dropped while writing to the display. simply specify a BPF filter at the command line and Snort will only see the ICMP packets in the file: .0/24 -c snort.1. This is the default alert mode and will be used automatically if you do not specify a mode. cmg. and none. Alert modes are somewhat more complex.0/24 -l . Six of these modes are accessed with the -A command line switch. Turns off alerting.conf in plain ASCII to disk using a hierarchical directory structure (just like packet logger mode). The default logging and alerting mechanisms are to log in decoded ASCII format and use full alerts./log -h 192./log -c snort.1. Sends “fast-style” alerts to the console (screen).conf This will configure Snort to run in its most basic NIDS form.conf is the name of your snort configuration file. so you can usually omit the -e switch. If you don’t specify an output directory for the program. the -v switch should be left off the command line for the sake of speed. alert message. too. It’s also not necessary to record the data link headers for most applications. socket./snort -dv -r packet. logging packets that trigger rules specified in the snort.168.. source and destination IPs/ports. read the Snort and tcpdump man pages. Full alert mode.log icmp For more info on how to use the BPF interface. 1./snort -dev -l . There are seven alert modes available at the command line: full. This will apply the rules configured in the snort. One thing to note about the last command line is that if Snort is going to be used in a long term way as an IDS.4 Network Intrusion Detection System Mode To enable Network Intrusion Detection System (NIDS) mode so that you don’t record every single packet sent down the wire.168. 1.conf file to each packet to decide if an action based upon the rule type in the file should be taken. The full alert mechanism prints out the alert message in addition to the full packet headers. as well as with the BPF interface that’s available from the command line. it will default to /var/log/snort. Writes the alert in a simple format with a timestamp. There are several other alert output modes available at the command line. For example: . If you want to configure other facilities for syslog output./snort -c snort.conf 12 . see Section 2. use the output plugin directives in snort. This number is primarily used when writing signatures. If you want a text file that’s easily parsed. To disable packet logging altogether. it will usually look like the following: [**] [116:56:1] (snort_decoder): T/TCP Detected [**] The first number is the Generator ID.168.map.2 Understanding Standard Alert Output When Snort generates an alert message. For a list of preprocessor SIDs. For a list of GIDs. such as writing to a database./snort -c snort. The second number is the Snort ID (sometimes referred to as Signature ID). 1. Rule-based SIDs are written directly into the rules with the sid option. To send alerts to syslog.1. In this case. we know that this event came from the “decode” (116) component of Snort. This allows debugging of configuration issues quickly via the command line. In this case.0/24 1.conf -A fast -h 192.6. The third number is the revision ID. try using binary logging with the “fast” output mechanism. See Section 2. For output modes available through the configuration file./log -h 192. use the following command line to log to default (decoded ASCII) facility and send alerts to syslog: . use the -s switch. you need to use unified logging and a unified log reader such as barnyard. This will log packets in tcpdump format and produce minimal alerts. please see etc/gen-msg.conf -l . use the following command line to log to the default facility in /var/log/snort and send alerts to a fast alert file: .4.1. but still somewhat fast. as each rendition of the rule should increment this number with the rev option. this tells the user what component of Snort generated this alert./snort -b -A fast -c snort. For example. 56 represents a T/TCP event. The default facilities for the syslog alerting mechanism are LOG AUTHPRIV and LOG ALERT.Packets can be logged to their default decoded ASCII format or to a binary log file via the -b command line switch. please read etc/generators in the Snort source.3 High Performance Configuration If you want Snort to go fast (like keep up with a 1000 Mbps connection).4.6.conf.1 for more details on configuring syslog output. use the -N command line switch. ! △NOTE Command line logging options override any output options specified in the configuration file.0/24 -s As another example. This allows Snort to log alerts in a binary form as fast as possible while another program performs the slow actions.168. The DAQ replaces direct calls to PCAP functions with an abstraction layer that facilitates operation on a variety of hardware and software interfaces without requiring changes to Snort. for packet I/O. or Data Acquisition library. only the events for the first action based on rules ordering are processed.1 Configuration Assuming that you did not disable static modules or change the default DAQ type.5. Log rules are applied.4. you can select and configure the DAQ when Snort is invoked as follows: .4 Changing Alert Order The default way in which Snort applies its rules to packets may not be appropriate for all installations. • --treat-drop-as-alert causes drop and reject rules and any associated alerts to be logged as alerts. in which case you can change the default ordering to allow Alert rules to be applied before Pass rules. • --alert-before-pass option forces alert rules to take affect in favor of a pass rule. The sdrop rules are not loaded. then the Alert rules and finally. It is possible to select the DAQ type and mode when invoking Snort to perform PCAP readback or inline operation. while taking the actions based on the rules ordering. you can run Snort just as you always did for file readback or sniffing an interface. ! △NOTE Pass rules are special cases here. Without this option (default case) . ! △NOTE Sometimes an errant pass rule could cause alerts to not show up. etc. in that the event processing is terminated when a pass rule is encountered. then the Drop rules. 1. This allows use of an inline policy with passive/IDS mode.5 Packet Acquisition Snort 2. The Pass rules are applied first. regardless of the use of --process-all-events. • --process-all-events option causes Snort to process every event associated with a packet. rather then the normal action. Several command line options are available to change the order in which rule actions are taken. 1.9 introduces the DAQ. For more information. However. please refer to the --alert-before-pass option. gov) is the current maintainer of the libpcap implementation of the shared memory ring buffer. mode. the most recent version is selected. Note that if Snort finds multiple versions of a given library. Also./snort --daq pcap --daq-mode passive -i <device> .5. if snort is run w/o any DAQ arguments. as this appears to be the maximum number of iovecs the kernel can handle. -r will force it to read-file. libpcap is able to queue packets into a shared buffer that Snort is able to read directly.<mode> ::= read-file | passive | inline <var> ::= arbitrary <name>=<value> passed to DAQ <dir> ::= path where to look for DAQ module so’s The DAQ type. it will operate as it always did using this module. if configured in both places./snort -r <file> .gov/cpw/./snort --daq pcap --daq-mode read-file -r <file> You can specify the buffer size pcap uses with: . and directory may be specified either via the command line or in the conf file. and attributes of each. 1. the mode defaults to passive. and if that hasn’t been set.2 PCAP pcap is the default DAQ. According to Phil. You may include as many variables and directories as needed by repeating the arg / config. and if that hasn’t been set. This change speeds up Snort by limiting the number of times the packet is copied before Snort gets to perform its detection upon it. These are equivalent: . The shared memory ring buffer libpcap can be downloaded from his website at. . for a total of around 52 Mbytes of memory for the ring buffer alone. Instead of the normal mechanism of copying the packets from kernel memory into userland memory. Once Snort linked against the shared memory libpcap. MMAPed pcap On Linux./snort --daq pcap --daq-var buffer_size=<#bytes> Note that the pcap DAQ does not count filtered packets. this ends up being 1530 bytes per frame. PCAP FRAMES is the size of the ring buffer. Phil Woods (cpw@lanl. DAQ type may be specified at most once in the conf and once on the command line. This applies to static and dynamic versions of the same library./snort [--daq-list <dir>] The above command searches the specified directory for DAQ modules and prints type. -Q and –daq-mode inline are allowed. By using PCAP FRAMES=max. the command line overrides the conf. 14 . a modified version of libpcap is available that implements a shared memory ring buffer. by using a shared memory ring buffer./snort -i <device> . -Q will force it to inline. variable. libpcap will automatically use the most frames possible. If the mode is not set explicitly. enabling the ring buffer is done via setting the environment variable PCAP FRAMES. the maximum size is 32768. version.lanl. This feature is not available in the conf. but -Q and any other DAQ mode will cause a fatal error at start-up. since there is no conflict. On Ethernet. 65535. The smallest block size that can fit at least one frame is 4 KB = 4096 bytes @ 2 frames per block. etc. here’s why. default is IP injection <proto> ::= ip4 | ip6 | ip*. Actual memory allocated is 42366 * 4 KB = 165. 5..3 AFPACKET afpacket functions similar to the memory mapped pcap DAQ but no external library is required: . the numbers break down like this: 1. you must set device to one or more interface pairs.4 NFQ NFQ is the new and improved way to process iptables packets: .5 MB. 3. 15 .5. the afpacket DAQ allocates 128MB for packet memory. default is ip4 <qid> ::= 0. 4. The number of frames is 128 MB / 1518 = 84733. The frame size is 1518 (snaplen) + the size of the AFPacket header (66 bytes) = 1584 bytes. where each member of a pair is separated by a single colon and each pair is separated by a double colon like this: eth0:eth1 or this: eth0:eth1::eth2:eth3 By default. 1. You can change this with: --daq-var buffer_size_mb=<#MB> Note that the total allocated is actually higher. default is 0 Notes on iptables are given below. 2. Assuming the default packet memory with a snaplen of 1518. default is 0 <qlen> ::= 0.5./snort --daq afpacket -i <device> [--daq-var buffer_size_mb=<#MB>] [--daq-var debug] If you want to run afpacket in inline mode.65535..1. As a result./snort --daq nfq \ [--daq-var device=<dev>] \ [--daq-var proto=<proto>] \ [--daq-var queue=<qid>] \ [--daq-var queue_len=<qlen>] <dev> ::= ip | eth0. we need 84733 / 2 = 42366 blocks. 9 versions built with this: . .65535./snort --daq dump --daq-var file=<name> dump uses the pcap daq for packet acquisition.6 IPFW IPFW is available for BSD systems.5. 1.5 IPQ IPQ is the old way to process iptables packets./snort -i <device> -Q --daq dump --daq-var load-mode=passive 16 . Note that the dump DAQ inline mode is not an actual inline mode./snort -r <pcap> --daq dump By default a file named inline-out. Furthermore.1. default is IP injection <proto> ::= ip4 | ip6.5. etc./configure --enable-inline / -DGIDS Start the IPQ DAQ as follows: ./snort --daq ipfw [--daq-var port=<port>] <port> ::= 1. It replaces the inline version available in pre-2.9 versions built with this: .5./snort --daq ipq \ [--daq-var device=<dev>] \ [--daq-var proto=<proto>] \ <dev> ::= ip | eth0. default is ip4 Notes on iptables are given below. default is 8000 * IPFW only supports ip4 traffic./snort -r <pcap> -Q --daq dump --daq-var load-mode=read-file . You can optionally specify a different name../snort -J <port#> Instead. you will probably want to have the pcap DAQ acquire in another mode like this: . It therefore does not count filtered packets.7 Dump The dump DAQ allows you to test the various inline mode features available in 2. It replaces the inline version available in pre-2. ./snort -i <device> --daq dump . 1.pcap will be created containing all packets that passed through or were generated by snort. start Snort like this: ./configure --enable-ipfw / -DGIDS -DIPFW This command line argument is no longer supported: .9 Snort like injection and normalization. • Whitelist packets that caused Snort to allow a flow to pass w/o inspection by any analysis program. The default.6. This filter will apply to any --pcap-file or --pcap-dir arguments following. Option -r <file> --pcap-single=<file> --pcap-file=<file> --pcap-list="<list>" --pcap-dir=<dir> --pcap-filter=<filter> Description Read a single pcap.8 Statistics Changes The Packet Wire Totals and Action Stats sections of Snort’s output include additional fields: • Filtered count of packets filtered out and not handed to Snort for analysis. however. • Allow packets Snort analyzed and did not take action on. without this option. If reading multiple pcaps. Shell style filter to apply when getting pcaps from file or directory. Use --pcap-no-filter to delete filter for following --pcap-file or --pcap-dir arguments or specify --pcap-filter again to forget previous filter and to apply to following --pcap-file or --pcap-dir arguments. Can specify path to pcap or directory to recurse to get pcaps. • Replace packets Snort modified. Reset to use no filter when getting pcaps from file or directory. Same as -r.6 Reading Pcaps Instead of having Snort listen on an interface.1 Command line arguments Any of the below can be specified multiple times on the command line (-r included) and in addition to other Snort command line options. reset snort to post-configuration state before reading next pcap. you can give it a packet capture to read. A directory to recurse to look for pcaps. This can be useful for testing and debugging Snort. is not to reset state. The action stats show ”blocked” packets instead of ”dropped” packets to avoid confusion between dropped packets (those Snort didn’t actually see) and blocked packets (those Snort did not allow to pass). A space separated list of pcaps to read.6. Print a line saying what pcap is currently being read.pcap $ snort --pcap-single=foo.e. • Blacklist packets that caused Snort to block a flow from passing. eg due to a block rule.5. 1. File that contains a list of pcaps to read. Note. Added for completeness. • Ignore packets that caused Snort to allow a flow to pass w/o inspection by this instance of Snort.1. eg TCP resets. • Block packets Snort did not forward. Sorted in ASCII order.2 Examples Read a single pcap $ snort -r foo. 1. that specifying --pcap-reset and --pcap-show multiple times has the same effect as specifying them once. --pcap-no-filter --pcap-reset --pcap-show 1. i.pcap 17 . Snort will read and analyze the packets as if they came off the wire. • Injected packets Snort generated and sent. $ snort --pcap-filter="*.txt \ > --pcap-no-filter --pcap-dir=/home/foo/pcaps \ > --pcap-filter="*. foo2.txt foo1.pcap.pcap /home/foo/pcaps $ snort --pcap-filter="*.txt” (and any directories that are recursed in that file). so all files found under /home/foo/pcaps will be included.txt foo1. The addition of the second filter ”*. in other words.txt This will read foo1.cap" --pcap-dir=/home/foo/pcaps In the above.pcap /home/foo/pcaps $ snort --pcap-file=foo. Read pcaps under a directory $ snort --pcap-dir="/home/foo/pcaps" This will include all of the files under /home/foo/pcaps.pcap and foo3.cap” will cause the first filter to be forgotten and then applied to the directory /home/foo/pcaps. 18 .pcap foo2.pcap --pcap-file=foo.pcap and all files under /home/foo/pcaps.cap” will be included from that directory.txt \ > --pcap-filter="*.pcap" --pcap-file=foo. Read pcaps from a command line list $ snort --pcap-list="foo1. the first filter will be applied to foo.pcap" --pcap-dir=/home/foo/pcaps The above will only include files that match the shell pattern ”*. so all files found under /home/foo/pcaps will be included.txt. then no filter will be applied to the files found under /home/foo/pcaps.pcap foo2.cap" --pcap-dir=/home/foo/pcaps2 In this example.pcap" This will read foo1.pcap --pcap-file=foo.pcap” will only be applied to the pcaps in the file ”foo.Read pcaps from a file $ cat foo. $ snort --pcap-filter="*.pcap”. then no filter will be applied to the files found under /home/foo/pcaps.pcap foo3. $ snort --pcap-filter="*.txt \ > --pcap-no-filter --pcap-dir=/home/foo/pcaps In this example. any file ending in ”. the first filter will be applied to foo.txt.pcap.pcap --pcap-file=foo.pcap. the first filter ”*. then the filter ”*. foo2. Note that Snort will not try to determine whether the files under that directory are really pcap files or not.pcap”.txt $ snort --pcap-filter="*.pcap foo2.cap” will be applied to files found under /home/foo/pcaps2. so only files ending in ”. Using filters $ cat foo. but after each pcap is read. • Outstanding indicates how many packets are buffered awaiting processing. Printing the pcap $ snort --pcap-dir=/home/foo/pcaps --pcap-show The above example will read all of the files under /home/foo/pcaps and will print a line indicating which pcap is currently being read. The others are summarized below. it will be like Snort is seeing traffic for the first time.7 Basic Output Snort does a lot of work and outputs some useful statistics when it is done. Snort ran for 0 days 0 hours 2 minutes 55 seconds Pkts/min: 1858011 Pkts/sec: 21234 =============================================================================== 1. in which case it is shown per pcap. meaning all buffers will be flushed. It includes total seconds and packets as well as packet processing rates.1 Timing Statistics This section provides basic timing statistics. The way this is counted varies per DAQ so the DAQ documentation should be consulted for more info.7. The rates are based on whole seconds. statistics reset. 1. etc.2 Packet I/O Totals This section shows basic packet acquisition and injection peg counts obtained from the DAQ.000%) 19 . This does not include all possible output data. Example: =============================================================================== Packet I/O Totals: Received: 3716022 Analyzed: 3716022 (100. just the basics. • Injected packets are the result of active response which can be configured for inline or passive modes. Snort will be reset to a post-configuration state. minutes. • Filtered packets are not shown for pcap DAQs. If you are reading pcaps.7. unless you use –pcap-reset.Resetting state $ snort --pcap-dir=/home/foo/pcaps --pcap-reset The above example will read all of the files under /home/foo/pcaps. etc. and only shown when non-zero. the totals are for all pcaps combined. Many of these are self-explanatory. Example: =============================================================================== Run time for packet processing was 175.856509 seconds Snort processed 3716022 packets. 1. For each pcap. 000%) GRE VLAN: 0 ( 0.000%) GRE PPTP: 202 ( 0.005%) GRE ARP: 0 ( 0.000%) IP6/IP6: 0 ( 0.000%) ICMP-IP: 0 ( 0. Example: =============================================================================== Breakdown by protocol (includes rebuilt packets): Eth: 3722347 (100.000%) GRE Loop: 0 ( 0.817%) 20 .511%) Teredo: 18 ( 0.103%) ICMP6: 1650 ( 0.000%) IP4/IP6: 0 ( 0.000%) EAPOL: 0 ( 0.000%) GRE IP6: 0 ( 0.000%) IP4: 1782394 ( 47.685%) TCP: 1619621 ( 43.000%) GRE IPX: 0 ( 0.016%) IP6 Opts: 6168 ( 0.000%) Injected: 0 =============================================================================== 1.884%) Frag: 3839 ( 0. This traffic includes internal ”pseudo-packets” if preprocessors such as frag3 and stream5 are enabled so the total may be greater than the number of analyzed packets in the packet I/O section. • S5 G 1/2 is the number of client/server sessions stream5 flushed due to cache limit.044%) UDP6: 140446 ( 3.000%) ARP: 104840 ( 2.000%) GRE IP6 Ext: 0 ( 0.000%) MPLS: 0 ( 0.005%) GRE Eth: 0 ( 0.000%) VLAN: 0 ( 0. session timeout. • Disc counts are discards due to basic encoding integrity flaws that prevents Snort from decoding the packet.000%) Outstanding: 0 ( 0.000%) IP4/IP4: 0 ( 0.3 Protocol Statistics Traffic for all the protocols decoded by Snort is summarized in the breakdown section.044%) UDP: 137162 ( 3.7.000%) Filtered: 0 ( 0.103%) ICMP: 38860 ( 1.511%) IP6: 1781159 ( 47.000%) GRE: 202 ( 0.850%) IP6 Ext: 1787327 ( 48.Dropped: 0 ( 0. • Other includes packets that contained an encapsulation that Snort doesn’t decode.000%) GRE IP4: 0 ( 0.000%) IP6/IP4: 0 ( 0.166%) Frag6: 3839 ( 0.773%) TCP6: 1619633 ( 43. session reset. drop. this is done by the DAQ or by Snort on subsequent packets. • Blacklist = packets that caused Snort to block a flow from passing.040%) S5 G 2: 1654 ( 0. snort will block each packet and this count will be higher.000%) UDP Disc: 1385 ( 0. for example.7.000%) S5 G 1: 1494 ( 0. If the DAQ supports this in hardware.000%) Eth Disc: 0 ( 0. • Whitelist = packets that caused Snort to allow a flow to pass w/o inspection by any analysis program. Here block includes block.044%) Total: 3722347 =============================================================================== 1. These indicate potential actions that did not happen: • Match Limit counts rule matches were not processed due to the config detection: setting. Limits arise due to real world constraints on processing time and available memory.037%) ICMP Disc: 0 ( 0. ”Block” is used instead of ”Drop” to avoid confusion between dropped packets (those Snort didn’t actually see) and blocked packets (those Snort did not allow to pass). Verdicts are rendered by Snort on each packet: • Allow = packets Snort analyzed and did not take action on. The default is 3. 21 . Limits.000%) All Discard: 1385 ( 0.555%) Bad Chk Sum: 32135 ( 0. Like blacklist.002%) Eth Loop: 0 ( 0. log setting. • Ignore = packets that caused Snort to allow a flow to pass w/o inspection by this instance of Snort. • Replace = packets Snort modified. this is done by the DAQ or by Snort on subsequent packets.000%) IP4 Disc: 0 ( 0. alert. due to normalization or replace rules. max queue events max queue • Queue Limit counts events couldn’t be stored in the event queue due to the config event queue: setting. and reject actions. • Alerts is the number of activate.4 Actions.000%) IP6 Disc: 0 ( 0. and Verdicts Action and verdict counts show what Snort did with the packets it analyzed. no further packets will be seen by Snort for that session. eg due to a block rule. If not.037%) Other: 57876 ( 1. This information is only output in IDS mode (when snort is run with the -c <conf> option). The default is 5. • Log Limit counts events were not alerted due to the config event queue: • Event Limit counts events not alerted due to event filter limits. The default is 8.000%) TCP Disc: 0 ( 0.863%) Bad TTL: 0 ( 0.IPX: 60 ( 0. • Block = packets Snort did not forward. This can only happen in inline mode with a compatible DAQ. Like blacklist. and block actions processed as determined by the rule actions. This is the case when a block TCP rule fires. 8./configure --enable-ipv6 1.8 Tunneling Protocol Support Snort supports decoding of GRE.1 Multiple Encapsulations Snort will not decode more than one encapsulation. an extra configuration option is necessary: $ . e.000%) Replace: 0 ( 0.000%) Ignore: 0 ( 0.000%) Block: 0 ( 0. 1. one still needs to use the configuration option: $ ./configure --enable-gre To enable IPv6 support. IP in IP and PPTP. only the encapsulated part of the packet is logged.000%) =============================================================================== 1. Eth IP1 GRE IP2 TCP Payload gets logged as Eth IP2 TCP Payload 22 .2 Logging Currently.g.8.000%) Whitelist: 0 ( 0.000%) Match Limit: 0 Queue Limit: 0 Log Limit: 0 Event Limit: 0 Verdicts: Allow: 3716022 (100. To enable support.000%) Blacklist: 0 ( 0. Scenarios such as Eth IPv4 GRE IPv4 GRE IPv4 TCP Payload or Eth IPv4 IPv6 IPv4 TCP Payload will not be handled and will generate a decoder alert.000%) Logged: 0 ( 0.Example: =============================================================================== Action Stats: Alerts: 0 ( 0.000%) Passed: 0 ( 0. 23 . These rule stub files are used in conjunction with the shared object rules. In Snort 2. The PID file will be locked so that other snort processes cannot start. The path can be relative or absolute.conf -s -D Relative paths are not supported due to security concerns. which utilizes GRE and PPP. you might need to use the –dump-dynamic-rules option. Additionally. Use the --nolock-pidfile switch to not lock the PID file.2 Running in Rule Stub Creation Mode If you need to dump the shared object rules stub to a directory.0/24 \ -l /var/log/snortlogs -c /usr/local/etc/snort.6. the --create-pidfile switch can be used to force creation of a PID file even when not running in daemon mode. /usr/local/bin/snort -c /usr/local/etc/snort. the daemon creates a PID file in the log directory. for example: /usr/local/bin/snort -d -h 192.1.9. Snort PID File When Snort is run as a daemon . you must specify the full path to the Snort binary when you start it. the --pid-path command line switch causes Snort to write the PID file in the directory specifi.1 Running Snort as a Daemon If you want to run Snort as a daemon. 1. Please notice that if you want to be able to restart Snort by sending a SIGHUP signal to the daemon.9.168.conf \ --dump-dynamic-rules=/tmp This path can also be configured in the snort.9 Miscellaneous 1. 1. is not currently supported on architectures that require word alignment such as SPARC.and Eth IP1 IP2 TCP Payload gets logged as Eth IP2 TCP Payload ! △NOTE Decoding of PPTP. you can the add -D switch to any combination described in the previous sections. The drop rules will be loaded and will be triggered as a Wdrop (Would Drop) alert. either on different CPUs. This switch obfuscates your IP addresses in packet printouts./usr/local/bin/snort -c /usr/local/etc/snort. Explanation of Modes • Inline When Snort is in Inline mode.0/24 1. you could use the following command to read the packets from a log file and dump them to the screen.9.4. or on the same CPU but a different interface. the -G command line option was added that specifies an instance identifier for the event logs.log -O -h 192. For example.conf: config dump-dynamic-rules-path: /tmp/sorules In the above mentioned scenario the dump path is set to /tmp/sorules. it acts as an IPS allowing drop rules to trigger. This option can be used when running multiple instances of snort.3 Obfuscating IP Address Printouts If you need to post packet logs to public mailing lists. This is also supported via a long option --logid.4 Specifying Multiple-Instance Identifiers In Snort v2. it acts as a IDS. obfuscating only the addresses from the 192. Users can specify either a decimal value (-G 1) or hex value preceded by 0x (-G 0x11). inline. Snort can be configured to passive mode using the snort config option policy mode as follows: config policy_mode:tap • Inline-Test Inline-Test mode simulates the inline mode of snort.168.1. Snort can be configured to run in inline-test mode using the command line option (–enable-inline-test) or using the snort config option policy mode as follows: 24 .conf \ --dump-dynamic-rules snort. Each Snort instance will use the value specified to generate unique event IDs./snort -d -v -r snort.9.1.5 Snort Modes Snort can operate in three different modes namely tap (passive). This is useful if you don’t care who sees the address of the attacking host. 1. Snort policies can be configured in these three modes too. This is handy if you don’t want people on the mailing list to know the IP addresses involved.9.168.0/24 class C network: . and inline-test. You can also combine the -O switch with the -h switch to only obfuscate the IP addresses of hosts on the home network. Drop rules are not loaded (without –treat-drop-as-alert). allowing evaluation of inline behavior without affecting traffic. you might want to use the -O switch.. so you may have to type snort -\? instead of snort -? for a list of Snort command line options.theaimsgroup.snort --enable-inline-test config policy_mode:inline_test ! △NOTE Please note –enable-inline-test cannot be used in conjunction with -Q. so sit back with a beverage of your choosing and read the documentation and mailing list archives. ! △NOTE In many shells.: More Information Chapter 2 contains much information about many configuration options available in the configuration file. There’s a lot to Snort.com/?l=snort-users at snort-users@lists. a backslash (\) is needed to escape the ?.sourceforge. 25 . 0/24] alert tcp any any -> $MY_NET $MY_PORTS (flags:S.1 Format include <include file path/name> ! △NOTE is no semicolon at the end of this line. Note: ’ipvar’s These are simple substitution variables set with the var. Without IPv6 support. reading the contents of the named file and adding the contents in the place where the include statement appears in the file.1.1.1.1.) include $RULE_PATH/example.0/24.168.1. msg:"SYN packet".rule 26 . See Section 2.Chapter 2 Configuring Snort 2.2 for more information on defining and using variables in Snort config files.1 Includes The include keyword allows other snort config files to be included within the snort.2 Variables Three types of variables may be defined in Snort: • var • portvar • ipvar ! △NOTE are only enabled with IPv6 support.conf indicated on the Snort command line. 2. or portvar keywords as follows: var RULES_PATH rules/ portvar MY_PORTS [22. use a regular ’var’. Note that there Included files will substitute any predefined variable values into their own variable references. 2.1.1024:1050] ipvar MY_NET [192. It works much like an #include from the C programming language.80. ipvar.10. 1.2 and 2.1.1] Nonsensical negations: ipvar EXAMPLE [1.3]] alert tcp $EXAMPLE any -> any any (msg:"Example". Using ’var’ for an IP variable is still allowed for backward compatibility.1.![2.1.1. If IPv6 support is enabled.2. Lists of ports must be enclosed in brackets and port ranges may be specified with a ’:’.1.) Different use of !any: ipvar EXAMPLE !any alert tcp $EXAMPLE any -> any any (msg:"Example".2.2. ’any’ will specify any ports.2. Valid port ranges are from 0 to 65535.1.1 and IP from 2.2.2.!1.) alert tcp [1. IPs.0 to 2.2. IP lists now OR non-negated elements and AND the result with the OR’ed negated elements.2. Previously.2.2.3]] The order of the elements in the list does not matter.2.2.2.) The following examples demonstrate some invalid uses of IP variables and IP lists.1.2.0/24.1.![2. as a CIDR block.x and earlier.0/8. but it will be deprecated in a future release. such as in: [10:50.2.1.1.2. in a list.sid:3. ipvar EXAMPLE [1. each element in a list was logically OR’ed together. but ’!any’ is not allowed. The element ’any’ can be used to match all IPs.1.IP Variables and IP Lists IPs may be specified individually.2. although ’!any’ is not allowed. or lists may all be negated with ’!’.2.0.2.0/16] Port Variables and Port Lists Portlists supports the declaration and lookup of ports and the representation of lists and ranges of ports. IP variables should be specified using ’ipvar’ instead of ’var’. with the exception of IPs 2. and CIDR blocks may be negated with ’!’. See below for some valid examples if IP variables and IP lists. Use of !any: ipvar EXAMPLE any alert tcp !$EXAMPLE any -> any any (msg:"Example".!1.1. negated IP ranges that are more general than non-negated IP ranges are not allowed. Also.0.0/24] any -> any any (msg:"Example".7.2. The following example list will match the IP 1.sid:3.2.1. IP lists. Variables.888:900] 27 .0/24.) Logical contradictions: ipvar EXAMPLE [1.2.2. ranges.3. or any combination of the three. Negation is handled differently compared with Snort versions 2.1.0/24. [1.1.0.2. Also.sid:2.!1.2. sid:1.1.2.255.1. You can define meta-variables using the $ operator.100:200] alert tcp any $EXAMPLE1 -> any $EXAMPLE2_PORT (msg:"Example".9999:20000] (msg:"Example". sid:4. sid:5.91:95.) alert tcp any $PORT_EXAMPLE2 -> any any (msg:"Example". sid:3.) Variable Modifiers Rule variable names can be modified in several ways. sid:2. provided the variable name either ends with ’ PORT’ or begins with ’PORT ’. as described in the following table: 28 . These can be used with the variable modifier operators ? and -. For backwards compatibility. The use of ’var’ to declare a port variable will be deprecated in a future release.) Port variable used as an IP: alert tcp $EXAMPLE1 any -> any any (msg:"Example". portvar EXAMPLE1 80 var EXAMPLE2_PORT [80:90] var PORT_EXAMPLE2 [1] portvar EXAMPLE3 any portvar EXAMPLE4 [!70:90] portvar EXAMPLE5 [80. a ’var’ can still be used to declare a port variable.) Several invalid examples of port variables and port lists are demonstrated below: Use of !any: portvar EXAMPLE5 !any var EXAMPLE5 !any Logical contradictions: portvar EXAMPLE6 [80. The following examples demonstrate several valid usages of both port variables and port lists.Port variables should be specified using ’portvar’.) alert tcp any 90 -> any [100:1000. sid:1.!80] Ports out of range: portvar EXAMPLE7 [65536] Incorrect declaration and use of a port variable: var EXAMPLE8 80 alert tcp any $EXAMPLE8 -> any any (msg:"Example". 1. port variables can be defined in terms of other port variables.Variable Syntax var $(var) or $var $(var:-default) $(var:?message) Description Defines a meta-variable. Replaces with the contents of variable var or prints out the error message and exits. Here is an example of advanced variable usage in action: ipvar MY_NET 192.0/24 log tcp any any -> $(MY_NET:?MY_NET is undefined!) 23 Limitations When embedding variables. Replaces with the contents of variable var.90] Invalid embedded variable: var pvar1 80 portvar pvar2 [$pvar1. They should be renamed instead: Invalid redefinition: var pvar 80 portvar pvar 90 2.1.168.90] Likewise. Format config <directive> [: <value>] 29 . but old-style variables (with the ’var’ keyword) can not be embedded inside a ’portvar’. For instance. types can not be mixed. Valid embedded variable: portvar pvar1 80 portvar pvar2 [$pvar1. Replaces the contents of the variable var with “default” if var is undefined. Tell Snort to dump basic DAQ capabilities and exit. noudp. notcp. noudp. You can also preceed this option with extra DAQ directory options to look in multiple directories.2 for a list of classifications.31 for more information and examples. See the DAQ distro README for possible DAQ variables. The DAQ with the highest version of the given type is selected if there are multiple of the same type (this includes any built-in DAQs). Selects the type of DAQ to instantiate. noip. Types of packets to drop if invalid checksums. or read-file. Values: none. Specifies the maximum number of nodes to track when doing ASN1 decoding. Forks as a daemon (snort -D). udp. Tell Snort where to look for available dynamic DAQ modules. The selected DAQ will be the one with the latest version. tcp. Set a DAQ specific variable. noicmp. ip. 30 . Snort just passes this information down to the DAQ. Specify disabled to disable loading rules. Specifies BPF filters (snort -F). inline. Default (with or without directive) is enabled. Select the DAQ mode: passive. See the DAQ distro README for possible DAQ modes or list DAQ capabilities for a brief summary. Values: none. notcp. This can be repeated.5. You can optionally specify a directory to include any dynamic DAQs from that directory. ip. noip. Not all DAQs support modes. tcp. icmp or all (only applicable in inline mode and for packets checked per checksum mode config option). Types of packets to calculate checksums. See Section 3. Global configuration directive to enable or disable the loading of rules into the detection engine. noicmp. Decodes Layer2 headers (snort -e). Sets the alerts output file. See Table 3. Chroots to specified dir (snort -t). udp. this option will cause Snort to revert back to it’s original behavior of alerting if the decoder or preprocessor generates an event). icmp or all. If Snort was configured to enable decoder and preprocessor rules. high performance). moderate performance) 31 .Matches are queued until the fast pattern matcher is finished with the payload. high performance).Low Memory Keyword Trie (low memory. high performance) ∗ acs . best performance).Aho-Corasick SparseBanded (high memory. ∗ ac-bnfa and ac-bnfa-q . ∗ ac and ac-q . moderate performance) ∗ ac-banded . best performance).Aho-Corasick Standard (high memory. This was found to generally increase performance through fewer cache misses (evaluating each rule would generally blow away the fast pattern matcher state in the cache).Aho-Corasick Binary NFA (low memory.The ”nq” option specifies that matches should not be queued and evaluated as they are found.Aho-Corasick Sparse (high memory.Intel CPM library (must have compiled Snort with location of libraries to enable this) – No queue search methods .Aho-Corasick Full (high memory. high performance) ∗ lowmem and lowmem-q .Aho-Corasick Full with ANYANY port group evaluated separately (low memory. This is the default search method if none is specified.config detection: <method>] [search-method Select type of fast pattern matcher algorithm to use. ∗ lowmem-nq . ∗ ac-nq . moderate performance) ∗ ac-split . then evaluated. Note this is shorthand for search-method ac. split-any-any ∗ intel-cpm .Aho-Corasick Full (high memory. • search-method <method> – Queued match search methods . moderate performance) – Other search methods (the above are considered superior to these) ∗ ac-std .Aho-Corasick Binary NFA (low memory. moderate performance) ∗ ac-sparsebands .Low Memory Keyword Trie (low memory. ∗ ac-bnfa-nq .Aho-Corasick Banded (high memory. config detection: [split-any-any] [search-optimize] [max-pattern-len <int>] Other options that affect fast pattern matching. • split-any-any – A memory/performance tradeoff. Default is not to optimize. i. thus potentially reducing performance.. however CPU cache can play a part in performance so a smaller memory footprint of the fast pattern matcher can potentially increase performance. But doing so may require two port group evaluations per packet . Not putting the ANY-ANY port rule group into every other port group can significantly reduce the memory footprint of the fast pattern matchers if there are many ANYANY port rules. Default is to not set a maximum pattern length. By default. • max-pattern-len <integer> – This is a memory optimization that specifies the maximum length of a pattern that will be put in the fast pattern matcher.e.. Default is not to split the ANY-ANY port group. 32 . rules that will be evaluated because a fast pattern was matched. Note that this may cause more false positive rule evaluations. When used with ac-bnfa. some fail-state resolution will be attempted. Of note is that the lower memory footprint can also increase performance through fewer cache misses.one for the specific port group and one for the ANY-ANY port group. Useful when there are very long contents being used and truncating the pattern won’t diminish the uniqueness of the patterns. potentially increasing performance. Patterns longer than this length will be truncated to this length before inserting into the pattern matcher. but eventually fail. Not recommended. • no stream inserts – Specifies that stream inserted packets should not be evaluated against the detection engine.config detection: [no stream inserts] [max queue events <int>] [enable-single-rule-group] [bleedover-port-limit] Other detection engine options.. Default is to inspect stream inserts. Default is 5 events. • enable-single-rule-group – Put all rules into one port group. 33 . Default is not to do this. Default is 1024. • max queue events <integer> – Specifies the maximum number of events to queue per packet. [debug-print-nocontent-rule-tests] [debug-print-rule-group-build-details] • debug [debug-print-rule-groups-uncompiled] – Prints fast pattern information for a particular port [debug-print-rule-groups-compiled] group. • debug-print-rule-group-build-details – Prints port group information during port group compilation. Turns off alerts generated by experimental TCP options. • debug-print-rule-groups-compiled – Prints compiled. • bleedover-warnings-enabled – Prints a warning if the number of source or destination ports used in a rule exceed the bleedover-port-limit forcing the rule to be moved into the ANY-ANY port group. Dumps application layer (snort -d). Turns off alerts generated by T/TCP options. config enable decode oversized alerts Enable alerting on packets that have headers containing length fields for which the value is greater than the length of the packet. • debug-print-rule-groups-uncompiled – Prints uncompiled port group information. Disables failopen thread that allows inline traffic to pass while Snort is starting up. Turns off alerts generated by T/TCP options. prints information about the content being used for the fast pattern matcher. Dumps raw packet starting at link layer (snort -X). Only useful if Snort was configured with –enable-inline-init-failopen. [debug-print-fast-pattern] [bleedover-warnings-enabled] • debug-print-nocontent-rule-tests – Prints port group information during packet evaluation. Enables the dropping of bad packets identified by decoder (only applicable in inline mode). • debug-print-fast-pattern – For each rule with fast pattern content. Turns on character dumps (snort -C). config disable decode alerts config disable inline init failopen Turns off the alerts generated by the decode phase of Snort. (snort --disable-inline-init-failopen) Disables IP option length validation alerts. 34 .config detection: [debug] Options for detection engine debugging.. Faster execution than frag2 with less complex data management. The basic idea behind target-based IDS is that we tell the IDS information about hosts on the network so that it can avoid Ptacek & Newsham style evasion attacks based on information about how an individual target IP stack operates. Unfortunately. it is possible to evade the IDS. Preprocessors are loaded and configured using the preprocessor keyword. 2. As I like to say. There can be an arbitrary number of engines defined at startup with their own configuration. This is where the idea for “target-based IDS” came from.icir. there are ambiguities in the way that the RFCs define some of the edge conditions that may occur and when this happens different people implement certain aspects of their IP stacks differently. Target-based host modeling anti-evasion techniques. We can also present the IDS with topology information to avoid TTL-based evasions and a variety of other issues. Frag3 is intended as a replacement for the frag2 defragmentation module and was designed with the following goals: 1.1 Frag3 The frag3 preprocessor is a target-based IP defragmentation module for Snort. When IP stacks are written for different operating systems.org/vern/papers/activemap-oak03.engine is called.org/docs/ idspaper/. they are usually implemented by people who read the RFCs and then write their interpretation of what the RFC outlines into code. The packet can be modified or analyzed in an out-of-band manner using this mechanism. There are at least two preprocessor directives required to activate frag3.pdf. Frag3 was implemented to showcase and prototype a target-based module within Snort to test this idea. Check it out at. check out the famous Ptacek & Newsham paper at. Frag3 uses the sfxhash data structure and linked lists for data handling internally which allows it to have much more predictable and deterministic performance in any environment which should aid us in managing heavily fragmented environments. but after the packet has been decoded. Global Configuration 38 .2. Splay trees are excellent data structures to use when you have some assurance of locality of reference for the data that you are handling but in high speed. Target-based analysis is a relatively new concept in network-based intrusion detection. The format of the preprocessor directive in the Snort config file is: preprocessor <name>: <options> 2.. but that’s a topic for another day. For more detail on this issue and how it affects IDS. For an IDS this is a big problem. if the attacker has more information about the targets on a network than the IDS does. but only one global configuration. In an environment where the attacker can determine what style of IP defragmentation is being used on a particular target. Once we have this information we can start to really change the game for these complex modeling problems. Frag 3 Configuration Frag3 configuration is somewhat more complex than frag2. heavily fragmented environments the nature of the splay trees worked against the system and actually hindered performance. a global configuration directive and an engine instantiation..snort. IP List to bind this engine to.Select a target-based defragmentation mode. – policy <type> . – prealloc frags <number> . – min ttl <value> . – memcap <bytes> . This config option takes values equal to or greater than zero. The accepted range for this option is 1 . Default is 60 seconds.Limits the number of overlapping fragments per packet. the minimum is ”0”. Available types are first.255.Detect fragment anomalies. The default is ”0” (unlimited). This is an optional parameter.Defines smallest fragment size (payload size) that should be considered valid. linux. The known mappings are as follows. – max frags <number> . Fragments in the engine for longer than this period will be automatically dropped. last.Option to turn off the preprocessor. bsd.• Preprocessor name: frag3 global • Available options: NOTE: Global configuration options are comma separated. Default is 4MB. Default value is all.Alternate memory management mode. – disabled .Maximum simultaneous fragments to track. – bind to <ip list> . detect anomalies option must be configured for this option to take effect. – timeout <seconds> . – detect anomalies . Default is 1.Memory cap for self preservation. This engine will only run for packets with destination addresses contained within the IP List. – min fragment length <number> . Fragments smaller than or equal to this limit are considered malicious and an event is raised.Minimum acceptable TTL value for a fragment packet. When the preprocessor is disabled only the options memcap. Anyone who develops more mappings and would like to add to this list please feel free to send us an email! 39 . By default this option is turned off. bsdright. The default is ”0” (unlimited).Timeout for fragments. prealloc memcap. Default type is bsd. if detect anomalies is also configured. and prealloc frags are applied when specified with the configuration. – overlap limit <number> . The Paxson Active Mapping paper introduced the terminology frag3 is using to describe policy types. Engine Configuration • Preprocessor name: frag3 engine • Available options: NOTE: Engine configuration options are space separated. Use preallocated fragment nodes (faster in some situations). This is an optional parameter. detect anomalies option must be configured for this option to take effect. Default is 8192. 2.9-31SGI 1. bind_to [10.8.0.47.2smp Linux 2.10.5F IRIX 6.1 OS/2 (version unknown) OSF1 V3.4. The first two engines are bound to specific IP address ranges and the last one applies to all other traffic.0.2.Platform AIX 2 AIX 4.5.2.8 Tru64 Unix V5. first and last policies assigned.16-3 Linux 2.19-6.1.172.00 IRIX 4.V5.0/24 policy first.7.3 8.6.4.9.5.0 Linux 2. Packets that don’t fall within the address requirements of the first two engines automatically fall through to the third one.1-7.0 OSF1 V3. bind_to 192.4 SunOS 5.4 (RedHat 7.3) MacOS (version unknown) NCD Thin Clients OpenBSD (version unknown) OpenBSD (version unknown) OpenVMS 7.1 SunOS 4.0A.10 Linux 2.7-10 Linux 2.14-5.2.16.1.2.2 IRIX 6.20 HP-UX 11.1.3 IRIX64 6.1.5.5.4 Linux 2. detect_anomalies 40 .0.0.0/24.0/24] policy last.3 Cisco IOS FreeBSD HP JetDirect (printer) HP-UX B OSF1 V4. \ [track_udp <yes|no>]. Data on SYN.2 Stream5 The Stream5 preprocessor is a target-based TCP reassembly module for Snort.2. and the policies supported by Stream5 are the results of extensive research with many target operating systems. identify sessions that may be ignored (large data transfers. The methods for handling overlapping data. \ [flush_on_alert]. Some of these anomalies are detected on a per-target basis. \ [memcap <number bytes>]. FIN and Reset sequence numbers. \ [prune_log_max <bytes>]. such as data on SYN packets. ICMP messages are tracked for the purposes of checking for unreachable and service unavailable messages. a few operating systems allow data in TCP SYN packets. 2. [max_icmp <number>]. data received outside the TCP window. \ [track_icmp <yes|no>]. direction. [max_tcp <number>]. Read the documentation in the doc/signatures directory with filenames that begin with “123-” for information on the different event types. Stream5 Global Configuration Global settings for the Stream5 preprocessor. Stream API Stream5 fully supports the Stream API. [disabled] 41 . etc. For example. Target-Based Stream5. which effectively terminate a TCP or UDP session. introduces target-based actions for handling of overlapping data and other TCP anomalies.Frag 3 Alert Output Frag3 is capable of detecting eight different types of anomalies. etc). while others do not. Transport Protocols TCP sessions are identified via the classic TCP ”connection”. and update the identifying information about the session (application protocol. the rule ’flow’ and ’flowbits’ keywords are usable with TCP as well as UDP traffic. [show_rebuilt_packets]. It is capable of tracking sessions for both TCP and UDP. With Stream5. other protocol normalizers/preprocessors to dynamically configure reassembly behavior as required by the application layer protocol. TCP Timestamps. etc are configured via the detect anomalies option to the TCP configuration. Its event output is packet-based so it will work with all output modes of Snort. UDP sessions are established as the result of a series of UDP packets from two end points via the same set of ports. Anomaly Detection TCP protocol anomalies. etc) that can later be used by rules. [max_udp <number>]. like Frag3. preprocessor stream5_global: \ [track_tcp <yes|no>]. The default is ”65536”. The default is set to any. \ [max_queued_bytes <bytes>]. Session timeout. Flush a TCP stream when an alert is generated on that stream. The default is ”30”. Backwards compatibility. the minimum is ”1”. Memcap for TCP packet storage. minimum is ”1”. \ [require_3whs [<number secs>]]. One default policy must be specified. Print a message when a session terminates that was consuming more than the specified number of bytes. By default this option is turned off. max udp and max icmp are applied when specified with the configuration. The default is ”1048576” (1MB). [dont_reassemble_async]. [flush_factor <number segs>] Option bind to <ip addr> timeout <num seconds> \ Description IP address or network for this policy. Print/display packet after rebuilt (for debugging). \ [check_session_hijacking]. \ [protocol <client|server|both> <all|service name [service name]*>]. maximum is ”1048576”. max tcp. 42 . The default is ”8388608” (8MB). \ [dont_store_large_packets]. \ [ignore_any_rules]. \ [overlap_limit <number>]. The default is set to off. The default is ”yes”. When the preprocessor is disabled only the options memcap. \ [timeout <number secs>]. [max_window <number>]. Maximum simultaneous ICMP sessions tracked. maximum is ”1073741824” (1GB). The default is ”262144”. Option to disable the stream5 tracking. minimum is ”1”. Stream5 TCP Configuration Provides a means on a per IP address target to configure TCP policy. \ [small_segments <number> bytes <number> [ignore_ports number [number]*]]. Track sessions for ICMP. [ports <client|server|both> <all|number [number]*>]. minimum is ”32768” (32KB). [policy <policy_id>]. Maximum simultaneous UDP sessions tracked. maximum is ”1048576”. Maximum simultaneous TCP sessions tracked. [max_queued_segs <number segs>]. [detect_anomalies]. The default is ”yes”. The default is ”131072”. minimum is ”1”. The default is ”no”. The default is set to off. and that policy is not bound to an IP address or network. minimum can be either ”0” (disabled) or if not disabled the minimum is ”1024” and maximum is ”1073741824”. maximum is ”1048576”. [use_static_footprint_sizes]. This can have multiple occurrences. Track sessions for UDP. and the maximum is ”86400” (approximately 1 day). preprocessor stream5_tcp: \ [bind_to <ip_addr>]. per policy that is bound to an IP address or network. OpenBSD 3. Establish sessions only on completion of a SYN/SYN-ACK/ACK handshake.3 and newer Limits the number of overlapping packets per session. Alerts are generated (per ’detect anomalies’ option) for either the client or server when the MAC address for one side or the other does not match.2 and earlier windows Windows 2000. The default is set to off. A value of ”0” means unlimited. The default is ”0” (don’t consider existing sessions established). Limit the number of bytes queued for reassembly on a given TCP session to bytes.4 and newer old-linux Linux 2. and the maximum is ”255”. Using this option may result in missed attacks. Windows 95/98/ME win2003 Windows 2003 Server vista Windows Vista solaris Solaris 9. the minimum is ”0”. 43 . Use static values for determining when to build a reassembled packet to allow for repeatable tests. last Favor first overlapped segment. and a maximum of ”1073741824” (1GB). The default is ”0” (unlimited). The default is set to off. the minimum is ”0”. This option is intended to prevent a DoS against Stream5 by an attacker using an abnormally large window. The default is ”0” (unlimited). The default is set to queue packets. Detect and alert on TCP protocol anomalies. the minimum is ”0”. That is the highest possible TCP window per RFCs. Check for TCP session hijacking. NetBSD 2. Maximum TCP window allowed. first Favor first overlapped segment. The default is set to off. and the maximum is ”86400” (approximately 1 day).policy <policy id> overlap limit <number> max window <number> require 3whs [<number seconds>] detect anomalies check session hijacking use static footprint sizes dont store large packets dont reassemble async max queued bytes <bytes> The Operating System policy for the target OS. If an ethernet layer is not part of the protocol stack received by Snort. bsd FresBSD 4. Performance improvement to not queue large packets in reassembly buffer.x and newer linux Linux 2. The default is set to off. Windows XP. so using a value near the maximum is discouraged. A message is written to console/syslog when this limit is enforced.x and newer. Default is ”1048576” (1MB). The default is set to off. The policy id can be one of the following: Policy Name Operating Systems. This check validates the hardware (MAC) address from both sides of the connect – as established on the 3-way handshake against subsequent packets received on the session. there are no checks performed.x and newer hpux HPUX 11 and newer hpux10 HPUX 10 irix IRIX 6 and newer macos MacOS 10. This option should not be used production environments. with a non-zero minimum of ”1024”. The optional number of seconds specifies a startup timeout. Don’t queue packets for reassembly if traffic has not been seen in both directions.x and newer. This allows a grace period for existing sessions to be considered established during that interval immediately after Snort is started. and the maximum is ”1073725440” (65535 left shift 14). The default settings are ports client 21 23 25 42 53 80 110 111 135 136 137 139 143 445 513 514 1433 1521 2401 3306. PCRE. A value of ”0” means unlimited. there should be only one occurrence of the UDP configuration. This option can be used only in default policy. This can appear more than once in a given config. The service names can be any of those used in the host attribute table (see 2. or both and list of services in which to perform reassembly.3) or others specific to the network. This is a performance improvement and may result in missed attacks. defines the list of ports in which will be ignored for this rule. This feature requires that detect anomalies be enabled. or both and list of ports in which to perform reassembly. The first number is the number of consecutive segments that will trigger the detection rule. with a maximum of ”2048”. preprocessor stream5_udp: [timeout <number secs>]. with a non-zero minimum of ”2”. [ignore_any_rules] 44 . including any of the internal defaults (see 2. If only a bind to option is used with no other options that TCP policy uses all of the default values. ignore ports is optional. The second number is the minimum bytes for a segment to be considered ”small”. server. Stream5 UDP Configuration Configuration for UDP session tracking. The default value is ”0” (disabled). The default settings are ports client ftp telnet smtp nameserver dns http pop3 sunrpc dcerpc netbios-ssn imap login shell mssql oracle cvs mysql. The number of ports can be up to ”65535”. Specify the client. Using this does not affect rules that look at protocol headers. Don’t process any -> any (ports) rules for TCP that attempt to match payload if there are no port specific rules for the src or destination port.7. This can appear more than once in a given config. The minimum port allowed is ”1” and the maximum allowed is ”65535”. that is the default TCP policy. or byte test options. Since there is no target based binding.7). Useful in ips mode to flush upon seeing a drop in segment size after N segments of non-decreasing size. The default is ”off”. A message is written to console/syslog when this limit is enforced. derived based on an average size of 400 bytes. with a maximum of ”2048”. The default is ”2621”. ! △NOTE If no options are specified for a given TCP policy. server. The drop in size often indicates an end of request or response. Configure the maximum small segments queued. A message is written to console/syslog when this limit is enforced. The default value is ”0” (disabled). only those with content. Specify the client. and a maximum of ”1073741824” (1GB). Rules that have flow or flowbits will never be ignored. there should be only one occurrence of the ICMP configuration. A list of rule SIDs affected by this option are printed at Snort’s startup. and the maximum is ”86400” (approximately 1 day). ! △NOTE untested. preprocessor stream5_icmp: [timeout <number secs>] Option timeout <num seconds> Description Session timeout. This configuration maps two network segments to different OS policies. with all other traffic going to the default policy of Solaris. but NOT to any other source or destination port. The default is ”off”. track_tcp yes. Rules that have flow or flowbits will never be ignored. or byte test options. track_icmp no preprocessor stream5_tcp: \ policy first. With the ignore the ignore any rules option is effectively pointless. the minimum is ”1”.conf and can be used for repeatable tests of stream reassembly in readback mode. Using this does not affect rules that look at protocol headers. It is not ICMP is currently turned on by default. use_static_footprint_sizes preprocessor stream5_udp: \ ignore_any_rules 2. preprocessor stream5_global: \ max_tcp 8192. Since there is no target based binding. PCRE. This example configuration is the default configuration in snort. only those with content. in minimal code form and is NOT ready for use in production networks. ! △NOTE any rules option. the minimum is ”1”. if a UDP rule that uses any -> any ports includes either flow or flowbits. one for Windows and one for Linux.Option timeout <num seconds> ignore any rules Description Session timeout. 45 . Example Configurations 1. ! △NOTE any rules option. and the maximum is ”86400” (approximately 1 day). For example. Don’t process any -> any (ports) rules for UDP that attempt to match payload if there are no port specific rules for the src or destination port. Because of the potential impact of disabling a flowbits rule. track_udp yes. the ’ignored’ any -> any rule will be applied to traffic to/from port 53. The default is ”30”. the ignore any rules option will be disabled in this case. The default is ”30”. Stream5 ICMP Configuration Configuration for ICMP session tracking. This is a performance improvement and may result in missed attacks. a UDP rule will be ignored except when there is another port specific rule With the ignore that may be applied to the traffic. if a UDP rule specifies destination port 53. This phase assumes the attacking host has no prior knowledge of what protocols or services are supported by the target. In the Reconnaissance phase. so we track this type of scan through the scanned host. otherwise. most queries sent by the attacker will be negative (meaning that the service ports are closed). sfPortscan alerts for the following types of portsweeps: 46 . policy linux stream5_tcp: policy solaris 2. this phase would not be necessary.1. Distributed portscans occur when multiple hosts query one host for open services. This is the traditional place where a portscan takes place. an attacker determines what types of network protocols or services a host supports. Nmap encompasses many. In the nature of legitimate network communications. is designed to detect the first phase in a network attack: Reconnaissance. negative responses from hosts are rare.preprocessor preprocessor preprocessor preprocessor stream5_global: track_tcp yes stream5_tcp: bind_to 192. sfPortscan will currently alert for the following types of Nmap scans: • TCP Portscan • UDP Portscan • IP Portscan These alerts are for one→one portscans.0/24. and rarer still are multiple negative responses within a given amount of time. which are the traditional types of scans.1. This tactic helps hide the true identity of the attacker. of the current portscanning techniques. Most of the port queries will be negative.3 sfPortscan The sfPortscan module. As the attacker has no beforehand knowledge of its intended target. sfPortscan was designed to be able to detect the different types of scans Nmap can produce.2.168. One of the most common portscanning tools in use today is Nmap.. ! △NOTE Negative queries will be distributed among scanning hosts. Our primary objective in detecting portscans is to detect and track these negative responses.0/24. if not all. developed by Sourcefire. policy windows stream5_tcp: bind_to 10..1. only the attacker has a spoofed source address inter-mixed with the real scanning address. one host scans multiple ports on another host. but tags based on the original scan alert. For example. can trigger these alerts because they can send out many connection attempts within a very small amount of time. sfPortscan will also display any open ports that were scanned. One host scans a single port on multiple hosts. A filtered alert may go off before responses from the remote hosts are received.• TCP Portsweep • UDP Portsweep • IP Portsweep • ICMP Portsweep These alerts are for one→many portsweeps. if an attacker portsweeps a web farm for port 80. 47 . On TCP scan alerts. sfPortscan will only track open ports after the alert has been triggered. Open port events are not individual alerts. ! △NOTE The characteristics of a portsweep scan may not result in many negative responses. such as NATs.. sfPortscan only generates one alert for each host pair in question during the time window (more on windows below). On TCP sweep alerts however. Active hosts. It’s also a good indicator of whether the alert is just a very active legitimate host. we will most likely not see many negative responses. This usually occurs when a new exploit comes out and the attacker is looking for a specific service. Stream gives portscan direction in the case of connectionless protocols like ICMP and UDP. this setting will never trigger a Filtered Scan alert because of a lack of error responses.“Medium” alerts track connection counts. etc). • medium . proxies. and specific ports on those hosts to watch. sense level <level> Available options: • low . 6.2. logfile <file> This option will output portscan events to the file specified. IP address using CIDR notation. This setting is based on a static time window of 60 seconds. If file does not contain a leading slash. networks. 5. However. Optionally.2. proto <protocol> Available options: • TCP • UDP • IGMP • ip proto • all 2. ignore scanners <ip1|ip2/cidr[ [port|port2-port3]]> Ignores the source of scan alerts. as described in Section 2. 7. The parameter is the same format as that of watch ip. • high . watch ip <ip1|ip2/cidr[ [port|port2-port3]]> Defines which IPs.“High” alerts continuously track hosts on a network using a time window to evaluate portscan statistics for that host. 4.conf. You should enable the Stream preprocessor in your snort.“Low” alerts are only generated on error packets sent from the target host. The parameters you can use to configure the portscan module are: 1. ignore scanned <ip1|ip2/cidr[ [port|port2-port3]]> Ignores the destination of scan alerts. but is very sensitive to active hosts. The list is a comma separated list of IP addresses. this setting should see very few false positives. This setting may false positive on active hosts (NATs. after which this window is reset. 48 . so the user may need to deploy the use of Ignore directives to properly tune this directive. DNS caches. and because of the nature of error responses. A ”High” setting will catch some slow scans because of the continuous monitoring. scan type <scan type> Available options: • portscan • portsweep • decoy portscan • distributed portscan • all 3. This most definitely will require the user to tune sfPortscan. IPs or networks not falling into this range are ignored if this option is used.sfPortscan Configuration Use of the Stream5 preprocessor is required for sfPortscan. and so will generate filtered scan alerts. ports are specified after the IP address/CIDR using a space and can be either a single port or a range denoted by a dash. The parameter is the same format as that of watch ip. this file will be placed in the Snort config dir. Any valid configuration may have ”disabled” added to it. IP range. 10. This option disables the preprocessor. This can lead to false alerts. This includes any IP options. the packet looks like the IP portion of the packet that caused the portscan alert to be generated. include midstream This option will include sessions picked up in midstream by Stream5. The characteristics of the packet are: Src/Dst MAC Addr == MACDAD IP Protocol == 255 IP TTL == 0 Other than that. which is necessary to detect ACK scans. disabled This optional keyword is allowed with any policy to avoid packet processing. especially under heavy load with dropped packets. and port range. The other options are parsed but not used. etc. which is why the option is off by default. port count. When the preprocessor is disabled only the memcap option is applied when specified with the configuration. The sfPortscan alert output was designed to work with unified packet logging. then the user won’t see open port alerts. snort generates a pseudo-packet and uses the payload portion to store the additional portscan information of priority count. 49 . especially under heavy load with dropped packets. However. The open port information is stored in the IP payload and contains the port that is open. because open port alerts utilize the tagged packet output system. The payload and payload size of the packet are equal to the length of the additional portscan information that is logged. which is why the option is off by default.. 9. The size tends to be around 100 . connection count. detect ack scans This option will include sessions picked up in midstream by the stream module. this can lead to false alerts. so it is possible to extend favorite Snort GUIs to display portscan alerts and the additional information in the IP payload using the above packet characteristics. This means that if an output system that doesn’t print tagged packets is used. IP count. Open port alerts differ from the other portscan alerts.8.200 bytes. The higher the priority count.3 -> 192. If no watch ip is defined.168. Portscans (one-to-one) display the scanner IP.168. This is accurate for connection-based protocols. High connection count and low priority count would indicate filtered (no response received from target).169.169. The analyst should set this option to the list of CIDR blocks and IPs that they want to watch.603880 event_id: 2 192. Tuning sfPortscan The most important aspect in detecting portscans is tuning the detection engine for your network(s). and is more of an estimate for others. the more bad responses have been received. and explained further below: Time: 09/08-15:07:31.4 Port/Proto Count: 200 Port/Proto Range: 20:47557 If there are open ports on the target. Event id/Event ref These fields are used to link an alert with the corresponding Open Port tagged packet 2. For active hosts this number will be high regardless. Port Count Port Count keeps track of the last port contacted and increments this number when that changes. one or more additional tagged packet(s) will be appended: Time: 09/08-15:07:31. It’s important to correctly set these options. For one-to-one scans.168.5 (portscan) TCP Filtered Portscan Priority Count: 0 Connection Count: 200 IP Count: 2 Scanner IP Range: 192.168.3 -> 192. 4. unreachables). Scanned/Scanner IP Range This field changes depending on the type of alert. We use this count (along with IP Count) to determine the difference between one-to-one portscans and one-to-one decoys.603881 event_ref: 2 192. 6. 3.169. Whether or not a portscan was filtered is determined here. Here are some tuning tips: 1. 50 .169. and increments the count if the next IP is different.3:192. ignore scanners.168. The watch ip option is easy to understand. 5. Connection Count Connection Count lists how many connections are active on the hosts (src or dst). Use the watch ip. IP Count IP Count keeps track of the last IP to contact a host. this is a low number. and one-to-one scans may appear as a distributed scan.168. sfPortscan will watch all network traffic. and ignore scanned options.169.Log File Output Log file output is displayed in the following format. Portsweep (one-to-many) scans display the scanned IP range. Priority Count Priority Count keeps track of bad responses (resets.169.5 (portscan) Open Port Open Port: 38458 1. By default. 2. the higher the better. this ratio should be high. If the host continually generates these types of alerts. lower the sensitivity level. but it’s also important that the portscan detection engine generate alerts that the analyst will find informative. it runs against traffic on ports 111 and 32771. and nfs servers. since these are more prone to false positives. this ratio should be high. Filtered scan alerts are much more prone to false positives. sfPortscan may not generate false positives for these types of hosts. You get the best protection the higher the sensitivity level. This indicates that there were many connections to the same port.2. Connection Count / Port Count: This ratio indicates an estimated average of connections per port. lower the sensitivity level. Make use of the Priority Count. and Port Range to determine false positives. but for now the user must manually do this. 2. If none of these other tuning techniques work or the analyst doesn’t have the time for tuning. Port Count / IP Count: This ratio indicates an estimated average of ports connected to per IP. The easiest way to determine false positives is through simple ratio estimations. this ratio should be low. it will only process client-side traffic. For portscans. DNS cache servers. syslog servers. indicating that the scanning host connected to few ports but on many hosts. Some of the most common examples are NAT IPs. the analyst will know which to ignore it as. is because the priority count is included in the connection count and the above comparisons take that into consideration. we hope to automate much of this analysis in assigning a scope level and confidence level. IP Range. These responses indicate a portscan and the alerts generated by the low sensitivity level are highly accurate and require the least tuning. For portsweeps. 3. For portsweeps. this ratio should be low. The following is a list of ratios to estimate and the associated values that indicate a legitimate scan and not a false positive. If the host is generating portsweep events.4 RPC Decode The rpc decode preprocessor normalizes RPC multiple fragmented records into a single un-fragmented record. In the future. this ratio should be high and indicates that the scanned host’s ports were connected to by fewer IPs. IP Count.The ignore scanners and ignore scanned options come into play in weeding out legitimate hosts that are very active on your network. For portscans. When determining false positives. then add it to the ignore scanners option. this ratio should be low. The low sensitivity level only generates alerts based on error responses. The Priority Count play an important role in tuning because the higher the priority count the more likely it is a real portscan or portsweep (unless the host is firewalled). add it to the ignore scanners list or use a lower sensitivity level. 4. the alert type is very important. If the host is generating portscan alerts (and is the host that is being scanned). Depending on the type of alert that the host generates. Connection Count / IP Count: This ratio indicates an estimated average of connections per IP. This indicates that each connection was to a different port. For portsweeps. The portscan alert details are vital in determining the scope of a portscan and also the confidence of the portscan. Connection Count. The reason that Priority Count is not included. For portscans. It does this by normalizing the packet into the packet buffer. So be much more suspicious of filtered portscans. Format preprocessor rpc_decode: \ <ports> [ alert_fragments ] \ [no_alert_multiple_requests] \ [no_alert_large_fragments] \ [no_alert_incomplete] 51 . Port Count. If stream5 is enabled. The low sensitivity level does not catch filtered scans. Many times this just indicates that a host was very active during the time period in question. If all else fails. but be aware when first tuning sfPortscan for these IPs. add it to the ignore scanned option. Most of the false positives that sfPortscan may generate are of the filtered scan alert type. 5 Performance Monitor This preprocessor measures Snort’s real-time and theoretical maximum performance. Don’t alert when the sum of fragmented records exceeds one packet. Don’t alert when there are multiple records in one packet. it should have an output mode enabled.Option alert fragments no alert multiple requests no alert large fragments no alert incomplete Description Alert on any fragmented RPC record. either “console” which prints statistics to the console window or “file” with a file name. By default. Don’t alert when a single fragment record exceeds the size of one packet.. Whenever this preprocessor is turned on. 2. . Rules without content are not filtered via the fast pattern matcher and are always evaluated. • atexitonly .Represents the number of seconds between intervals. • time . • file . A high non-qualified event to qualified event ratio can indicate there are many rules with either minimal content or no content that are being evaluated without success.Dump stats for entire life of Snort. so if possible. • pktcnt . This is only valid for uniprocessor machines.Prints statistics in a comma-delimited format to the file that is specified. This option can produce large amounts of output. • max . This prints out statistics as to the number of rules that were evaluated and didn’t match (non-qualified events) vs. since many operating systems don’t keep accurate kernel statistics for multiple CPUs . Both of these directives can be overridden on the command line with the -Z or --perfmon-file options. since checking the time sample reduces Snort’s performance. this is 10000. Snort will log a distinctive line to this file with a timestamp to all readers to easily identify gaps in the stats caused by Snort not running.Prints out statistics about the type of traffic and protocol distributions that Snort is seeing. reset is used. This boosts performance. You may also use snortfile which will output into your defined Snort log directory.Turns on the theoretical maximum performance that Snort calculates given the processor speed and current performance. • accumulate or reset . At startup. Not all statistics are output to this file. • events . generic contents are more likely to be selected for evaluation than those with longer. By default. the number of rules that were evaluated and matched (qualified events). more unique contents. 54 . By default.Turns on event reporting.Prints statistics at the console. adding a content rule option to those rules can decrease the number of times they need to be evaluated and improve performance. Rules with short.Defines which type of drop statistics are kept by the operating system. • console . the table will start to prune the statistics for the least recently seen host pairs to free memory.x. • flow-ip . • flow-ip-file . where x will be incremented each time the comma delimited file is rolled over. The current version of HTTP Inspect only handles stateless processing. Within HTTP Inspect. This means that HTTP Inspect looks for HTTP fields on a packet-by-packet basis.Defines the maximum size of the comma-delimited file. HTTP Inspect works on both client requests and server responses. The minimum is 4096 bytes and the maximum is 2147483648 bytes (2GB). • flow-ip-memcap .6 HTTP Inspect HTTP Inspect is a generic HTTP decoder for user applications.csv pktcnt 1000 2. and normalize the fields. and will be fooled if packets are not reassembled. This works fine when there is another module handling the reassembly. HTTP Inspect will decode the buffer. The following example gives the generic global configuration format: 55 . Users can configure individual HTTP servers with a variety of options. Before the file exceeds this size. find HTTP fields.2. Given a data buffer. Future versions will have a stateful processing mode which will hook into various reassembly modules. it will be rolled into a new date stamped file of the format YYYY-MM-DD. followed by YYYY-MM-DD.Prints the flow IP statistics in a comma-delimited format to the file that is specified.. Global Configuration The global configuration deals with configuration options that determine the global functioning of HTTP Inspect.• max file size . as well as the IP addresses of the host pairs in human-readable format. HTTP Inspect has a very “rich” user configuration. For each pair of hosts for which IP traffic has been seen. there are two areas of configuration: global and server.Sets the memory cap on the hash table used to store IP traffic statistics for host pairs. but there are limitations in analyzing the protocol. This value is in bytes and the default value is 52428800 (50MB). Examples preprocessor perfmonitor: \ time 30 events flow file stats. Once the cap has been reached. All of the statistics mentioned above. which should allow the user to emulate any type of web server. The default is the same as the maximum. are included. For US servers. 2. this inspects all network traffic. then you may get a lot of proxy alerts. iis unicode map <map filename> [codemap <integer>] This is the global iis unicode map file. This option along with compress depth and 56 . 4. Please note that if users aren’t required to configure web proxy use. The default for this option is 2920. It is called unicode. please only use this feature with traditional proxy environments. By configuring HTTP Inspect servers and enabling allow proxy use. The iis unicode map is a required configuration parameter. In the future. Configuration.conf or be specified via a fully-qualified path to the map file. 6. and alerts if HTTP traffic is seen. A Microsoft US Unicode codepoint map is provided in the Snort source etc directory by default. This option is turned off by default. The map file can reside in the same directory as. This value can be set from 3276 bytes to 100MB.map and should be used if no other codepoint map is available.snort. but for right now. Blind firewall proxies don’t count. detect anomalous servers This global configuration option enables generic HTTP server traffic inspection on non-HTTP configured ports. This value can be set from 1 to 65535. max gzip mem This option determines (in bytes) the maximum amount of memory the HTTP Inspect preprocessor will use for decompression. the codemap is usually 1252. This value can be set from 1 to 65535. ! △NOTE Remember that this configuration is for the global IIS Unicode map. proxy alert This enables global alerting on HTTP server proxy usage.org/ dl/contrib/. which is available at. we want to limit this to specific networks so it’s more useful. decompress depth <integer> This option specifies the maximum amount of decompressed data to obtain from the compressed packet payload. you’ll get an error if you try otherwise. compress depth <integer> This option specifies the maximum amount of packet payload to decompress. The default for this option is 1460. 5.c. 3. So. individual servers can reference their own IIS Unicode map. A tool is supplied with Snort to generate custom Unicode maps--ms unicode generator. 2. ! △NOTE It is suggested to set this value such that the max gzip session calculated as follows is at least 1.1 profile all ports { 80 } Configuration by Multiple IP Addresses This format is very similar to “Configuration by IP Address”. There is a limit of 40 IP addresses or CIDR notations per http inspect server line. Any valid configuration may have ”disabled” added to it. Example Global Configuration preprocessor http_inspect: \ global iis_unicode_map unicode. the only difference being that specific IPs can be configured.1 10.decompress depth determines the gzip sessions that will be decompressed at any given instant. Default This configuration supplies the default server configuration for any server that is not individually configured.0/24 } profile all ports { 80 } 57 . Most of your web servers will most likely end up using the default configuration. This option disables the preprocessor.1. Example Multiple IP Configuration preprocessor http_inspect_server: \ server { 10. ”compress depth” and ”decompress depth” options are applied when specified with the configuration. When the preprocessor is disabled only the ”max gzip mem”.1. disabled This optional keyword is allowed with any policy to avoid packet processing. Other options are parsed but not used.1.map 1252 Server Configuration There are two types of server configurations: default and by IP address. Example IP Configuration preprocessor http_inspect_server: \ server 10. Example Default Configuration preprocessor http_inspect_server: \ server default profile all ports { 80 } Configuration by IP Address This format is very similar to “default”. the only difference being that multiple IPs can be specified via a space separated list.1. The default value for this option is 838860. max gzip session = max gzip mem /(decompress depth + compress depth) 7.2. all The all profile is meant to normalize the URI using most of the common tricks available. alert off apache whitespace on. This differs from the iis profile by only accepting UTF-8 standard Unicode encoding and not accepting backslashes as legitimate slashes. In other words. alert off multiple slash on. apache The apache profile is used for Apache web servers.Server Configuration Options Important: Some configuration options have an argument of ‘yes’ or ‘no’. like IIS does. backslashes. apache. This argument specifies whether the user wants the configuration option to generate an HTTP Inspect alert or not. profile apache sets the configuration options described in Table 2. alert on bare byte decoding on. Table 2. and iis4 0.0. iis5 0 In IIS 4. This is a great profile for detecting all types of attacks. iis The iis profile mimics IIS servers. regardless of the HTTP server. iis. there was a double decoding vulnerability. alert off double decoding on. profile all sets the configuration options described in Table 2. We alert on the more serious forms of evasions. so it’s disabled by default. but are not required for proper operation. profile <all|apache|iis|iis5 0|iis4 0> Users can configure HTTP Inspect by using pre-defined HTTP server profiles. and rules based on HTTP traffic will still trigger. HTTP normalization will still occur. 1-C.3. alert off directory normalization on. iis4 0. number of headers not checked 1-B. alert on %u decoding on. whether set to ‘yes’ or ’no’.0 and IIS 5. alert on iis backslash on.1 and beyond. Profiles allow the user to easily configure the preprocessor for a certain type of server. alert off webroot on. Double decode is not supported in IIS 5. except they will alert by default if a URL has a double encoding. alert on iis unicode codepoints on. 1-D. So that means we use IIS Unicode codemaps for each server. 1-A. %u encoding.5. only the alerting functionality.. iis5 0. The ‘yes/no’ argument does not specify whether the configuration option itself is on or off. double decoding. etc. alert off iis delimiter on. 58 . alert on non strict URL parsing on tab uri delimiter is set max header length 0.4. These two profiles are identical to iis. profile iis sets the configuration options described in Table 2. bare-byte encoding. Apache also accepts tabs as whitespace. There are five profiles available: all. header length not checked max headers 0. alert on utf 8 encoding on..1.6. HTTPS traffic is encrypted and cannot be decoded with HTTP Inspect. To ignore HTTPS traffic.. Example preprocessor http_inspect_server: \ server 1.1. alert on apache whitespace on. 59 . alert off directory normalization on. default.1 profile all ports { 80 3128 } 2. alert off multiple slash on. >]} This is how the user configures which ports to decode on the HTTP server.Table 2. However. header length not checked max headers 0. use the SSL preprocessor. ports {<port> [<port>< . alert off non strict url parsing on tab uri delimiter is set max header length 0. number of headers not checked 1-E.4: Options for the apache Profile Option Setting server flow depth 300 client flow depth 300 post depth 0 chunk encoding alert on chunks larger than 500000 bytes ASCII decoding. no profile The default options used by HTTP Inspect do not use a profile and are described in Table 2. alert off webroot on. 6. But the ms unicode generator program tells you which codemap to use for you server. By turning this option the HTTP response will be thoroughly inspected.snort. The Cookie header line is extracted and stored in HTTP Cookie buffer for HTTP requests and Set-Cookie is extracted and stored in HTTP Cookie buffer for HTTP responses. The different fields of a HTTP response such as status code.org/dl/contrib/ directory. You should select the config option ”extended response inspection” before configuring this option. http stat code. alert on non strict URL parsing on max header length 0. status message. extended response inspection This enables the extended HTTP response inspection. When the compressed data is spanned 60 . ! △NOTE When this option is turned on. alert on bare byte decoding. cookie (when enable cookie is configured) and body are extracted and saved into buffers.Table 2. When using this. this is usually 1252.org web site at. iis unicode map <map filename> codemap <integer> The IIS Unicode map is generated by the program ms unicode generator. By default the cookie inspection and extraction will be turned off. alert on %u decoding on. the user needs to specify the file that contains the IIS Unicode map and also specify the Unicode map to use. one should use the http modifiers with content such as http header. alert on iis backslash on. http stat msg and http cookie. alert on double decoding on. Executing this program generates a Unicode map for the system that it was run on. So the decompression will end when either the ’compress depth’ or ’decompress depth’ is reached or when the compressed data ends. You can select the correct code page by looking at the available code pages that the ms unicode generator outputs. Different rule options are provided to inspect these buffers. alert off multiple slash on. inspect gzip This option specifies the HTTP inspect module to uncompress the compressed data(gzip/deflate) in HTTP response. So. This program is located on the Snort. to get the specific Unicode mappings for an IIS web server. alert off webroot on. alert off iis delimiter on. 5. alert on iis unicode codepoints on. To search for patterns in the header of the response. Decompression is done across packets. you run this program on that server and use that Unicode map in this configuration. The default http response inspection does not inspect the various fields of a HTTP response. headers. number of headers not checked 3. In both cases the header name is also stored along with the cookie. it’s the ANSI code page. alert off directory normalization on. 4. For US servers. header length not checked max headers 0. When extended response inspection is turned off the server flow depth is applied to the entire HTTP response (including headers). alert off multiple slash on. This option can be used to balance the needs of IDS performance and level of inspection of HTTP server response data.6: Default HTTP Inspect Options Option Setting port 80 server flow depth 300 client flow depth 300 post depth -1 chunk encoding alert on chunks larger than 500000 bytes ASCII decoding on. 7. The XFF/True-Client-IP Original client IP address is logged only with unified2 output and is not logged with console (-A cmg) output. enable xff This option enables Snort to parse and log the original client IP present in the X-Forwarded-For or True-ClientIP HTTP request headers along with the generated events. unlimited decompress This option enables the user to decompress unlimited gzip data (across multiple packets). (i. alert off iis delimiter on.Table 2. the decompressed data from different packets are not combined while inspecting). it is applied to the HTTP response body (decompressed data when inspect gzip is turned on) and not the HTTP headers. Most of these rules target either the HTTP header. 8. Also the amount of decompressed data that will be inspected depends on the ’server flow depth’ configured.Decompression will stop when the compressed data ends or when a out of sequence packet is received. Snort should be configured with the –enable-zlib flag. it is suggested to set the ’compress depth’ and ’decompress depth’ to its maximum values. 9. number of headers not checked across multiple packets.e. or the content 61 . alert off apache whitespace on. alert off non strict URL parsing on max header length 0. alert off utf 8 encoding on. When extended response inspection is turned on. This tool is present in the tools/u2spewfoo directory of snort source tree. ! △NOTE The original client IP from XFF/True-Client-IP in unified2 logs can be viewed using the tool u2spewfoo. Snort rules are targeted at HTTP server response traffic and when used with a small flow depth value may cause false negatives. But the decompressed data are individually inspected. server flow depth <integer> This specifies the amount of server response payload to inspect. The decompression in a single packet is still limited by the ’compress depth’ and ’decompress depth’. alert on iis backslash on. To ensure unlimited decompression. alert off directory normalization on. header length not checked max headers 0. ! △NOTE To enable compression of HTTP server response. Unlike client flow depth this option is applied per TCP session. alert off webroot on. the state of the last decompressed packet is used to decompressed the data of the next packet. 11. Values above 0 tell Snort the number of bytes to inspect of the server response (excluding the HTTP headers when extended response inspection is turned on) in a given HTTP session. the entire payload will be inspected. make sure you have this option turned on. Inversely. a value of 0 causes Snort to inspect all the client post message. Inversely. The value can be set from -1 to 65495. Only packets payloads starting with ’HTTP’ will be considered as the first packet of a server response. Values above 0 tell Snort the number of bytes to inspect in the first packet of the client request. If more than flow depth bytes are in the payload of the HTTP response packet in a session only flow depth bytes of the payload will be inspected for that session.. A value of -1 causes Snort to ignore all the data in the post message. It is suggested to set the server flow depth to its maximum value. ! △NOTE server flow depth is the same as the old flow depth option.a %2f = /. Unlike server flow depth this value is applied to the first packet of the HTTP request. Headers are usually under 300 bytes long. value of -1 causes Snort to ignore the HTTP response body data and not the HTTP headers. a value of 0 causes Snort to inspect all HTTP server payloads defined in ”ports” (note that this will likely slow down IDS performance). When the extended response inspection is turned on. This value can be set from -1 to 65535. 62 . This increases the performance by inspecting only specified bytes in the post message. but your mileage may vary. A value of -1 causes Snort to ignore all server side traffic for ports defined in ports when extended response inspection is turned off. If less than flow depth bytes are in the payload of the HTTP response packets in a given session.that is likely to be in the first hundred or so bytes of non-header data. 14. so for any Apache servers. you may be interested in knowing when you have a UTF-8 encoded URI. etc. a value of 0 causes Snort to inspect all HTTP client side traffic defined in ”ports” (note that this will likely slow down IDS performance).k. This value can be set from -1 to 1460. which will be deprecated in a future release. Apache uses this standard. If more than flow depth bytes are in the payload of the first packet only flow depth bytes of the payload will be inspected. utf 8 <yes|no> The utf-8 decode option tells HTTP Inspect to decode standard UTF-8 Unicode sequences that are in the URI. When utf 8 is enabled. This option is turned off by default and is not supported with any of the profiles. If less than flow depth bytes are in the TCP payload (HTTP request) of the first packet. 12. post depth <integer> This specifies the amount of data to inspect in a client post message. It is suggested to set the server flow depth to its maximum value. It is suggested to set the client flow depth to its maximum value. This abides by the Unicode standard and only uses % encoding. It is normal to see ASCII encoding usage in URLs. Note that the 1460 byte maximum flow depth applies to stream reassembled packets as well. 13. It primarily eliminates Snort from inspecting larger HTTP Cookies that appear at the end of many client request Headers. Rules that are meant to inspect data in the payload of the HTTP response packets in a session beyond 65535 bytes will be ineffective unless flow depth is set to 0. extended ascii uri This option enables the support for extended ASCII codes in the HTTP request URI. client flow depth <integer> This specifies the amount of raw client request payload to inspect. The default value for server flow depth is 300. the entire payload will be inspected. so it is recommended that you disable HTTP Inspect alerting for this option. but this will be prone to false positives as legitimate web clients use this type of encoding. ascii <yes|no> The ascii decode option tells us whether to decode encoded ASCII chars. A value of -1 causes Snort to ignore all client side traffic for ports defined in ”ports. It is not a session based flow depth. It has a default value of 300. Rules that are meant to inspect data in the payload of the first packet of a client request beyond 1460 bytes will be ineffective unless flow depth is set to 0. ASCII decoding is also enabled to enforce correct functioning. %2e = .” Inversely. As for alerting. a. Note that the 65535 byte maximum flow depth applies to stream reassembled packets as well. The default value is -1. 10. 17. iis unicode <yes|no> The iis unicode option turns on the Unicode codepoint mapping. bare byte <yes|no> Bare byte encoding is an IIS trick that uses non-ASCII characters as valid values when decoding UTF-8 values. If no iis unicode map is specified before or after this option. ASCII. otherwise. An ASCII character is encoded like %u002f = /. When base36 is enabled. For instance. and %u. The iis unicode option handles the mapping of non-ASCII codepoints that the IIS server accepts and decodes normal UTF-8 requests. base36 <yes|no> This is an option to decode base36 encoded chars. alert on all ‘/’ or something like that. you must enable also enable utf 8 yes. 18. the default codemap is used. use no. The alert on this decoding should be enabled. 20. etc. ASCII encoding is also enabled to enforce correct behavior. How this works is that IIS does two passes through the request URI. like %uxxxx.jp/˜ shikap/patch/spp\_http\_decode. If there is no iis unicode map option specified with the server config. this is really complex and adds tons of different encodings for one character.rim. it seems that all types of iis encoding is done: utf-8 unicode. iis unicode uses the default codemap. You should alert on the iis unicode option. and %u. In the second pass. Please use this option with care. This is not in the HTTP standard.” 23. then configure with a yes. It’s flexible.or. Anyway. The xxxx is a hex-encoded value that correlates to an IIS Unicode codepoint. bare byte. and then UTF-8 is decoded in the second stage. If %u encoding is enabled. This value can most definitely be ASCII. So a request URI of “/foo\bar” gets normalized to “/foo/bar.15. because there are no legitimate clients that encode UTF-8 this way since it is non-standard. because you could configure it to say. because it is seen mainly in attacks and evasion attempts. u encode <yes|no> This option emulates the IIS %u encoding scheme. so something like: “foo/////////bar” get normalized to “foo/bar. Bare byte encoding allows the user to emulate an IIS server and interpret non-standard encodings correctly. To alert on UTF-8 decoding. because we are not aware of any legitimate clients that use this encoding. so ASCII is also enabled to enforce correct decoding. %u002e = . a user may not want to see null bytes in the request URI and we can alert on that. You have to use the base36 option with the utf 8 option. iis backslash <yes|no> Normalizes backslashes to slashes.patch. non rfc char {<byte> [<byte . In the first pass.yk. This is again an IIS emulation. the following encodings are done: ASCII.. doing decodes in each one. 22.>]} This option lets users receive an alert if certain non-RFC chars are used in a request URI. double decode <yes|no> The double decode option is once again IIS-specific and emulates IIS functionality. How the %u encoding scheme works is as follows: the encoding scheme is started by a %u followed by 4 characters. directory <yes|no> This option normalizes directory traversals and self-referential directories. bare byte. multi slash <yes|no> This option normalizes multiple slashes in a row. so be careful. as all non-ASCII values have to be encoded with a %. The directory: 63 . Don’t use the %u option. this option will not work. 21. 16. When double decode is enabled. This option is based on info from:.. We leave out utf-8 because I think how this works is that the % encoded utf-8 is decoded to the Unicode byte in the first pass.” If you want an alert when multiple slashes are seen. So it is most likely someone trying to be covert. You should alert on %u encodings. ASCII and UTF-8 decoding are also enabled to enforce correct decoding.. because base36 won’t work. When iis unicode is enabled. 19. Apache uses this.html alsjdfk alsj lj aj la jsj s\n”./bar gets normalized to: /foo/bar The directory: /foo/. like whisker -i 4. 30. 64 . then this option does nothing. and may also alert on HTTP tunneling that uses chunk encoding. no alerts This option turns off all alerts that are generated by the HTTP Inspect preprocessor module. Only use this option on servers that will accept URIs like this: ”get /index./bar gets normalized to: /foo/bar If you want to configure an alert. This should limit the alerts to IDS evasion type attacks. 24. we always take this as standard since the most popular web servers accept it./foo/fake\_dir/. This picks up the Apache chunk encoding exploits. so if the emulated web server is Apache. 26. 31. specify yes. specify no. By default. apache whitespace <yes|no> This option deals with the non-RFC standard of using tab for a space delimiter. A good argument value is 300 characters. This means that no alert will be generated if the proxy alert global keyword has been used. No argument is specified. Since this is common. The allow proxy use keyword is just a way to suppress unauthorized proxy use for an authorized server. 25. 28. no pipeline req This option turns HTTP pipeline decoding off.. The non strict option assumes the URI is between the first and second space even if there is no valid HTTP identifier after the second space. but may also be false positive prone. the user is allowing proxy use on this server. pipeline requests are inspected for attacks. Alerts on this option may be interesting. chunk length <non-zero positive integer> This option is an anomaly detector for abnormally large chunk sizes. enable this option. But you can still get an alert on this option. oversize dir length <non-zero positive integer> This option takes a non-zero positive integer as an argument. otherwise. This alert may give false positives. It is only inspected with the generic pattern matching. pipeline requests are not decoded and analyzed per HTTP protocol field. but Apache takes this non-standard delimiter was well. If the proxy alert keyword is not enabled. non strict This option turns on non-strict URI parsing for the broken way in which Apache servers will decode a URI. This has no effect on HTTP rules in the rule set. The argument specifies the max char directory length for URL directory. If a url directory is larger than this argument size. 27. and is a performance enhancement if needed. an alert is generated. since some web sites refer to files using directory traversals. allow proxy use By specifying this keyword. 29. iis delimiter <yes|no> This started out being IIS-specific. but when this option is enabled. The alert is off by default. or ”utf-32be”. No argument is specified. Requests that exceed this length will cause a ”Long Header” alert. enable this optimization. inspect uri only This is a performance optimization. 39. ) and the we inspect the following URI: get /foo. webroot <yes|no> This option generates an alert when a directory traversal traverses past the web server root directory. To enable. Apache accepts tab as a delimiter. The integer is the maximum number of HTTP client request header fields. specify an integer argument to max header length of 1 to 65535. multi-slash. normalize cookies This option turns on normalization for HTTP Cookie Fields (using the same configuration parameters as the URI normalization (ie. generating an alert if the extra bytes are non-zero. not including Cookies (using the same configuration parameters as the URI normalization (ie. The inspect uri only configuration turns off all forms of detection except uricontent inspection. To enable. 34. content: "foo". 65 . only the URI portion of HTTP requests will be inspected for attacks. ”utf-32le”. which is associated with certain web attacks. For IIS. As this field usually contains 90-95% of the web attacks. When enabled. For example. It is useful for normalizing data in HTTP Cookies that may be encoded. Requests that contain more HTTP Headers than this value will cause a ”Max Header” alert. 37. etc.). normalize headers This option turns on normalization for HTTP Header Fields. etc. The integer is the maximum length allowed for an HTTP client request header field. tab uri delimiter This option turns on the use of the tab character (0x09) as a delimiter for a URI. multi-slash. if we have the following rule set: alert tcp any any -> any 80 ( msg:"content". It’s important to note that if this option is used without any uricontent rules. because it doesn’t alert on directory traversals that stay within the web server directory structure. 33.0\r\n\r\n No alert will be generated when inspect uri only is enabled.). max headers <positive integer up to 1024> This option takes an integer as an argument. This alert is off by default. Specifying a value of 0 is treated as disabling the alert. then no inspection will take place. HTTP Inspect will attempt to normalize these back into 8-bit encoding. max header length <positive integer up to 65535> This option takes an integer as an argument. directory. So if you need extra performance. 36. 35. a tab is treated as whitespace if a space character (0x20) precedes it. ”utf-16be”.32. This is obvious since the URI is only inspected with uricontent rules.htm http/1. IIS does not. Whether this option is on or not. Specifying a value of 0 is treated as disabling the alert. It only alerts when the directory traversals go past the web server root directory. and if there are none available. directory. a tab in the URI should be treated as any other character. then there is nothing to inspect. This generates much fewer false positives than the directory option. you’ll catch most of the attacks. normalize utf This option turns on normalization of HTTP response bodies where the Content-Type header lists the character set as ”utf-16le”. It is useful for normalizing Referrer URIs that may appear in the HTTP Header. 38. specify an integer argument to max headers of 1 to 1024. The config option. line feed or carriage return. It will also mark the command.2. SMTP handles stateless and stateful processing. 66 . tabs. It saves state between individual packets. and TLS data. However maintaining correct state is dependent on the reassembly of the client side of the stream (ie.7 SMTP Preprocessor The SMTP preprocessor is an SMTP decoder for user applications.1.40. Given a data buffer. The list should be enclosed within braces and delimited by spaces. http_methods { PUT CONNECT } ! △NOTE Please note the maximum length for a method name is 7 Examples preprocessor http_inspect_server: \ server 10. a loss of coherent stream data results in a loss of state).1. data header data body sections. http methods {cmd[cmd]} This specifies additional HTTP Request Methods outside of those checked by default within the preprocessor (GET and POST). braces and methods also needs to be separated by braces. } This specifies on what ports to check for SMTP data. max response line len <int> Alert if an SMTP response line is longer than this value. max command line len <int> Alert if an SMTP command line is longer than this value. alt max command line len <int> { <cmd> [<cmd>] } Overrides max command line len for specific commands. invalid cmds { <Space-delimited list of commands> } Alert if this command is sent from client side. Space characters are defined as space (ASCII 0x20) or tab (ASCII 0x09). ignore data Ignore data section of mail (except for mail headers) when processing rules. Normalization checks for more than one space character after a command. ports { <port> [<port>] . Typically. TLS-encrypted traffic can be ignored. 5. 10. this is relatively safe to do and can improve the performance of data inspection. ignore tls data Ignore TLS-encrypted data when processing rules. all checks all commands none turns off normalization for all commands.. Since so few (none in the current snort rule set) exploits are against mail data. normalize <all | none | cmds> This turns on normalization. such as port and inspection type. 2. valid cmds { <Space-delimited list of commands> } List of valid commands. no alerts Turn off all alerts for this preprocessor. 8. 6.Configuration SMTP has the usual configuration items. RFC 2821 recommends 1024 as a maximum data header line length. . this will include 25 and possibly 465. RFC 2821 recommends 512 as a maximum response line length. In addition. 3. 4. cmds just checks commands listed with the normalize cmds parameter. Also. for encrypted SMTP.. Default is an empty list. which improves performance. 11. Absence of this option or a ”0” means never alert on response line length. Absence of this option or a ”0” means never alert on command line length. Absence of this option or a ”0” means never alert on data header line length. inspection type <stateful | stateless> Indicate whether to operate in stateful or stateless mode. 7. The configuration options are described below: 1. SMTP command lines can be normalized to remove extraneous spaces. Default is an empty list. max header line len <int> Alert if an SMTP DATA header line is longer than this value. 9. We do not alert on commands in this list. RFC 2821 recommends 512 as a maximum command line length. regular mail data can be ignored for an additional performance boost. 18.5. normalize cmds { <Space-delimited list of commands> } Normalize this list of commands Default is { RCPT VRFY EXPN }. The option take values ranging from 5 to 20480 bytes. Default is enable. alert unknown cmds Alert if we don’t recognize command. The decoding of base64 encoded attachments/data ends when either the max mime depth or maximum MIME sessions (calculated using max mime depth and max mime mem) is reached or when the encoded data ends. enable mime decoding Enables Base64 decoding of Mime attachments/data. This is useful when specifying the max mime depth and max mime mem in default policy without turning on the SMTP preprocessor. This not normally printed out with the configuration because it can print so much data. The default value for this option is 838860. When stateful inspection is turned on the base64 encoded MIME attachments/data across multiple packets are decoded too. Note: It is suggested to set this value such that the max mime session calculated as follows is atleast 1. print cmds List all commands understood by the preprocessor. 14. The decoded data is available for detection using the rule option file data:mime.13. Drop if alerted. xlink2state { enable | disable [drop] } Enable/disable xlink2state alert. 17. See 3. Multiple base64 encoded MIME attachments/data in one packet are pipelined. 20. 19. max mime depth <int> Specifies the maximum number of base64 encoded data to decode per SMTP session. This value can be set from 3276 bytes to 100MB. max mime session = max mime mem /(max mime depth + max decoded bytes) max decoded bytes = (max mime depth/4)*3 Also note that these values for max mime mem and max mime depth need to be same across all policy. Default is off. The default value for this in snort in 1460 bytes. max mime mem <int> This option determines (in bytes) the maximum amount of memory the SMTP preprocessor will use for decoding base64 encode MIME attachments/data. 16. . 15. disabled Disables the SMTP preprocessor in a policy.24 rule option for more details. This option along with max mime depth determines the base64 encoded MIME/SMTP sessions that will be decoded at any given instant. Users can configure individual FTP servers and clients with a variety of options. and FTP Server. meaning it only looks for information on a packet-bypacket basis. This argument specifies whether the user wants the configuration option to generate a ftptelnet alert or not. which should allow the user to emulate any type of FTP server or FTP Client. } \ } \ HELO ETRN } \ VRFY } 2.2. The following example gives the generic global configuration format: 69 . FTP Client. Within the code. FTP/Telnet has the capability to handle stateless processing. there are four areas of configuration: Global. FTP/Telnet has a very “rich” user configuration. Global Configuration The global configuration deals with configuration options that determine the global functioning of FTP/Telnet. FTP/Telnet works on both client requests and server responses. while the yes/no argument applies to the alerting functionality associated with that option. For the preprocessor configuration.2. respectively. meaning it looks for information and handles reassembled data correctly. the preprocessor actually maps RCPT and MAIL to the correct command name. identifying FTP commands and responses and Telnet escape sequences and normalize the fields.8 FTP/Telnet Preprocessor FTP/Telnet is an improvement to the Telnet decoder and provides stateful inspection capability for both FTP and Telnet data streams. The presence of the option indicates the option itself is on. similar to that of HTTP Inspect (See 2. The default is to run FTP/Telnet in stateful inspection mode. ! △NOTE Some configuration options have an argument of yes or no. they are referred to as RCPT and MAIL. Within FTP/Telnet.6). Telnet. FTP/Telnet will decode the stream. check encrypted Instructs the preprocessor to continue to check an encrypted session for a subsequent command to cease encryption.. you’ll get an error if you try otherwise. and subsequent instances will override previously set values. inspection type This indicates whether to operate in stateful or stateless. 2. checks for encrypted traffic will occur on every packet. 70 . Configuration 1. whereas in stateful mode. ! △NOTE When inspection type is in stateless mode. a particular session will be noted as encrypted and not inspected any further. The FTP/Telnet global configuration must appear before the other three areas of configuration. SSH tunnels cannot be decoded..1. Example Default FTP Server Configuration preprocessor ftp_telnet_protocol: \ ftp server default ports { 21 } Refer to 73 for the list of options set in default ftp server configuration. Typically port 23 will be included. The detect anomalies option enables alerting on Telnet SB without the corresponding SE. subnegotiation begins with SB (subnegotiation begin) and must end with an SE (subnegotiation end). This is anomalous behavior which could be an evasion case. certain implementations of Telnet servers will ignore the SB without a corresponding SE. Configuration by IP Address This format is very similar to “default”. so adding port 22 will only yield false positives.1 ports { 21 } ftp_cmds { XPWD XCWD } 71 . Being that FTP uses the Telnet protocol on the control connection. detect anomalies In order to support certain options. >]} This is how the user configures which ports to decode as telnet traffic. 4. Per the Telnet RFC. the telnet decode preprocessor. the only difference being that specific IPs can be configured. Default This configuration supplies the default server configuration for any FTP server that is not individually configured.Configuration 1. Telnet supports subnegotiation. normalize This option tells the preprocessor to normalize the telnet traffic by eliminating the telnet escape sequences.1. Most of your FTP servers will most likely end up using the default configuration. Rules written with ’raw’ content options will ignore the normalized buffer that is created when this option is in use. However. ayt attack thresh < number > This option causes the preprocessor to alert when the number of consecutive telnet Are You There (AYT) commands reaches the number specified. 2. 3. It is only applicable when the mode is stateful. Example IP specific FTP Server Configuration preprocessor _telnet_protocol: \ ftp server 10.. ports {<port> [<port>< . it is also susceptible to this behavior. It functions similarly to its predecessor.>< . 7. so the appropriate configuration would be: alt_max_param_len 16 { USER } 6. where: n Number C Character [] optional format enclosed | OR {} choice of options . per RFC 2428 One of choices enclosed within. For example: ftp_cmds { XPWD XCWD XCUP XMKD XRMD } 4. chk str fmt {cmd[cmd]} This option causes a check for string format attacks in the specified commands.FTP Server Configuration Options 1. 3. this option causes the preprocessor to print the configuration for each of the FTP commands for this server. as well as any additional commands as needed.. It can be used as a basic buffer overflow detection. + . 5. Typically port 21 will be included. one of <chars> Parameter follows format specified. fmt must be enclosed in <>’s and may contain the following: Value int number char <chars> date <datefmt> Description Parameter must be an integer Parameter must be an integer between 1 and 255 Parameter must be a single character. | {}. per RFC 959 Parameter must be a long host port specified. >]} This is how the user configures which ports to decode as FTP command channel traffic. This may be used to allow the use of the ’X’ commands identified in RFC 775. print cmds During initialization. per RFC 1639 Parameter must be an extended host port specified. This option specifies a list of additional commands allowed by this server. 2. For example the USER command – usernames may be no longer than 16 bytes. It can be used as a more specific buffer overflow detection.). separated by | One of the choices enclosed within {}. ftp cmds {cmd[cmd]} The preprocessor is configured to alert when it sees an FTP command that is not allowed by the server. def max param len <number> This specifies the default maximum allowed parameter length for an FTP command. cmd validity cmd < fmt > This option specifies the valid format for parameters of a given command.literal Parameter is a string (effectively unrestricted) Parameter must be a host/port specified.. optional value enclosed within [] 72 string host port long host port extended host port {}. txt) To check validity for a server that uses the TZ format.Examples of the cmd validity option are shown below. FTP Server Base Configuration Options The base FTP server configuration is as follows. telnet cmds <yes|no> This option turns on detection and alerting when telnet escape sequences are seen on the FTP command channel.. it is recommended that this option not be used. Use of the ”data chan” option is deprecated in favor of the ”ignore data chan” option. Injection of telnet escape sequences could be used as an evasion attempt on an FTP command channel. it is recommended that this option not be used. cmd_validity MDTM < [ date nnnnnnnnnnnnnn[. data chan This option causes the rest of snort (rules.uuu]. If your rule set includes virus-type rules. certain FTP servers accept MDTM commands that set the modification time on a file. Options specified in the configuration file will modify this set of options. accept a format using YYYYMMDDHHmmss[. ”data chan” will be removed in a future release. especially with large file transfers from a trusted source. While not part of an established standard. including mode Z which allows for # zip-style compression. 73 . These examples are the default checks. 10. cmd_validity MODE < char ASBCZ > # Allow for a date in the MDTM command. per RFC 959 and others performed by the preprocessor. 11. use the following: cmd_validity MDTM < [ date nnnnnnnnnnnnnn[{+|-}n[n]] ] string > 8.ietf.org/internetdrafts/draft-ietf-ftpext-mlst-16. Some others accept a format using YYYYMMDDHHmmss[+—]TZ format. It can be used to improve performance. other preprocessors) to ignore FTP data channel connections. 9. The other options will override those in the base configuration. # This allows additional modes. The most common among servers that do.n[n[n]]] ] string > MDTM is an off case that is worth discussing. FTP commands are added to the set of allowed commands. especially with large file transfers from a trusted source. It can be used to improve performance. Setting this option to ”yes” means that NO INSPECTION other than TCP state will be performed on FTP data transfers. ignore data chan <yes|no> This option causes the rest of Snort (rules. The example above is for the first case (time format as specified in. If your rule set includes virus-type rules. For instance. Obsolete Record Types. The available configuration options are described below. DNS looks at DNS Response traffic over UDP and TCP and it requires Stream preprocessor to be enabled for TCP decoding. enable rdata overflow Check for DNS Client RData TXT Overflow 77 .. if the presumed server generates client traffic.conf Looks for attacks on SSH server port 22. If Challenge-Response Overflow or CRC 32 false positive. the preprocessor will stop processing traffic for a given session. ports {<port> [<port>< . enable experimental types Alert on Experimental (per RFC 1035) Record Types 4. or if a client generates server traffic.. 2. all alerts are disabled and the preprocessor checks traffic on port 53. Example Configuration from snort. 1. Configuration By default. enable paysize Enables alerts for invalid payload sizes. try increasing the number of required client bytes with max client bytes. enable protomismatch Enables checking for the Protocol Mismatch exploit.9. The SSH preprocessor should work by default. preprocessor ssh: \ server_ports { 22 } \ max_client_bytes 19600 \ max_encrypted_packets 20 \ enable_respoverflow \ enable_ssh1crc32 2.2.10 DNS The DNS preprocessor decodes DNS Responses and can detect the following exploits: DNS Client RData Overflow. After max encrypted packets is reached. enable recognition Enable alerts for non-SSH traffic on SSH ports. enable obsolete types Alert on Obsolete (per RFC 1035) Record Types 3. 12. enable badmsgdir Enable alerts for traffic flowing the wrong direction. 11. >]} This option specifies the source ports that the DNS preprocessor should inspect traffic. and Experimental Record Types. 10. Alerts at 19600 unacknowledged bytes within 20 encrypted packets for the Challenge-Response Overflow/CRC32 exploits. especially when packets may be missed. noinspect encrypted Disable inspection on traffic that is encrypted. By enabling the SSLPP to inspect port 443 and enabling the noinspect encrypted option. SSL is used over port 443 as HTTPS. >]} This option specifies which ports SSLPP will inspect traffic on. trustservers Disables the requirement that application (encrypted) data must be observed on both sides of the session before a session is marked encrypted. the only observed response from one endpoint will be TCP ACKs. Configuration 1. This requires the noinspect encrypted option to be useful. and it will cease operation on a session if it loses state because of missing data (dropped packets). Examples/Default Configuration from snort. Verifying that faultless encrypted traffic is sent from both endpoints ensures two things: the last client-side handshake packet was not crafted to evade Snort. the session is not marked as encrypted.2. if a user knows that server-side encrypted data can be trusted to mark the session as encrypted. only the SSL handshake of each connection will be inspected.conf Looks for traffic on DNS server port 53. Do not alert on obsolete or experimental RData record types. The SSL Dynamic Preprocessor (SSLPP) decodes SSL and TLS traffic and optionally determines if and when Snort should stop inspection of it. Default is off.The DNS preprocessor does nothing if none of the 3 vulnerabilities it checks for are enabled.. If one side responds with an indication that something has failed. 78 . Check for the DNS Client RData overflow vulnerability. 3. such as the handshake. SSLPP looks for a handshake followed by encrypted traffic traveling to both sides. Use this option for slightly better performance if you trust that your servers are not compromised. no further inspection of the data on the connection is made. By default. preprocessor dns: \ ports { 53 } \ enable_rdata_overflow 2. It will not operate on TCP sessions picked up midstream. In some cases.. Default is off.11 SSL/TLS Encrypted traffic should be ignored by Snort for both performance reasons and to reduce false positives. ports {<port> [<port>< . Therefore. documented below.. By default. and that the traffic is legitimately encrypted. Typically. Once the traffic is determined to be encrypted. the user should use the ’trustservers’ option. tls1. The option will match if any one of the OR’ed versions are used in the SSL connection. and more than one identifier can be specified. To check for two or more SSL versions in use simultaneously.1. multiple ssl version rule options should be used. ssl state The ssl state rule option tracks the state of the SSL encryption during the process of hello and key exchange. multiple rules using the ssl state rule option should be used.2. Syntax ssl_state: <state-list> state-list = state | state . state-list state = ["!"] "client_hello" | "server_hello" | "client_keyx" | "server_keyx" | "unknown" Examples ssl_state:client_hello. Lists of identifiers are OR’ed together.2" Examples ssl_version:sslv3.1" | "tls1. ssl_version:!sslv2. via a comma separated list.server_keyx. ssl_state:!server_hello. ssl_version:tls1. via a comma separated list. The option will match if the connection is currently in any one of the OR’ed states. Syntax ssl_version: <version-list> version-list = version | version .tls1. The list of version identifiers are below.0. and are OR’ed together. To ensure the connection has reached each of a set of states. More than one state can be specified.0" | "tls1. ssl_state:client_keyx.Examples/Default Configuration from snort. version-list version = ["!"] "sslv2" | "sslv3" | "tls1. 79 . The list of states are below. the preprocessor inspects Ethernet addresses and the addresses in the ARP packets.168. preprocessor arpspoof preprocessor arpspoof_detect_host: 192. Write Block Raw. unicast ARP requests. Specify one host IP MAC combo per line. reduce false positives and reduce the count and complexity of DCE/RPC based rules. Transaction Secondary. preprocessor arpspoof: -unicast preprocessor arpspoof_detect_host: 192. The preprocessor merely looks for Ethernet address inconsistencies. Specify a pair of IP and hardware address as the argument to arpspoof detect host.40.2.40. The following transports are supported for DCE/RPC: SMB.2. Write and Close. Transaction.40. Read Block Raw and Read AndX. SMB desegmentation is performed for the following commands that can be used to transport DCE/RPC requests and responses: Write.1 and 192.168.2 f0:0f:00:f0:0f:01 2.1 f0:0f:00:f0:0f:00 preprocessor arpspoof_detect_host: 192.40. When no arguments are specified to arpspoof.168.13 DCE/RPC 2 Preprocessor The main purpose of the preprocessor is to perform SMB desegmentation and DCE/RPC defragmentation to avoid rule evasion using these techniques. UDP and RPC over HTTP v. 80 .12 ARP Spoof Preprocessor The ARP spoof preprocessor decodes ARP packets and detects ARP attacks. The Ethernet address corresponding to the preceding IP. Format preprocessor arpspoof[: -unicast] preprocessor arpspoof_detect_host: ip mac Option ip mac Description IP address.1 proxy and server. The host with the IP address should be on the same layer 2 segment as Snort is.1 f0:0f:00:f0:0f:00 preprocessor arpspoof_detect_host: 192.168. TCP. Write AndX. When inconsistency occurs. and inconsistent Ethernet to IP mapping.2 f0:0f:00:f0:0f:01 The third example configuration has unicast detection enabled. preprocessor arpspoof The next example configuration does not do unicast detection but monitors ARP mapping for hosts 192.168. When ”-unicast” is specified as the argument of arpspoof.40. Alert SID 4 is used in this case. the preprocessor checks for unicast ARP requests.2.2. an alert with GID 112 and SID 2 or 3 is generated. Read. The preprocessor will use this list when detecting ARP cache overwrite attacks. Example Configuration The first example configuration does neither unicast detection nor ARP mapping monitoring.40. An alert with GID 112 and SID 1 will be generated if a unicast ARP request is detected.168. New rule options have been implemented to improve performance. stream5. Windows 2003 Windows XP Windows Vista These Windows versions require strict binding between the UID. the frag3 preprocessor should be enabled and configured. the dcerpc2 preprocessor will enable stream reassembly for that session if necessary. If the TID used to create the FID is deleted (via a tree disconnect). Windows 2000 Windows 2000 is interesting in that the first request to a named pipe must use the same binding as that of the other Windows versions. However.0.0. along with a valid FID can be used to make a request. deleting either the UID or TID invalidates the FID.e. If it is decided that a session is SMB or DCE/RPC. no binding. Samba (all versions) Under an IPC$ tree. Therefore. if the TID used in creating the FID is deleted (via a tree disconnect). requests after that follow the same binding as Samba 3. However. servers or autodetecting. i.e.0.e.Dependency Requirements For proper functioning of the preprocessor: • Stream session tracking must be enabled. TID and FID used to make a request to a named pipe instance. the FID becomes invalid.e. Samba 3. If the UID used to create the named pipe instance is deleted (via a Logoff AndX). share handle or TID and file/named pipe handle or FID must be used to write data to a named pipe. along with a valid FID can be used to make a request. i. no more requests can be written to that named pipe instance. since it is necessary in making a request to the named pipe. • Stream reassembly must be performed for TCP sessions. Target Based There are enough important differences between Windows and Samba versions that a target based approach has been implemented. either through configured ports.22 in that deleting the UID or TID used to create the named pipe instance also invalidates it. • IP defragmentation should be enabled. Both the UID and TID used to open the named pipe instance must be used when writing data to the same named pipe instance.22 Any valid TID. the FID that was created using this TID becomes invalid. the FID that was created using this TID becomes invalid. i. Samba greater than 3. does not accept: 81 . no more requests can be written to that named pipe instance. however.0. i.22 and earlier. The preprocessor requires a session tracker to keep its data. Accepted SMB commands Samba in particular does not recognize certain commands under an IPC$ tree.22 and earlier Any valid UID and TID. Some important differences: Named pipe instance tracking A combination of valid login handle or UID. It also follows Samba greater than 3.e. only the UID used in opening the named pipe can be used to make a request using the FID handle to the named pipe instance. i. The binding between these is dependent on OS/software version. An evasion possibility would be accepting a fragment in a request that the server won’t accept that gets sandwiched between an exploit. Multiple Bind Requests A Bind request is the first request that must be made in a connection-oriented DCE/RPC session in order to specify the interface/interfaces that one wants to communicate with. DCE/RPC Fragmented requests . Transaction tracking The differences between a Transaction request and using one of the Write* commands to write data to a named pipe are that (1) a Transaction performs the operations of a write and a read from the named pipe.20 and earlier Any amount of Bind requests can be made. multiple logins and tree connects (only one place to return handles for these). we don’t want to keep track of data that the server won’t accept. Samba later than 3. whereas in using the Write* commands. on the other hand.e.20 Another Bind request can be made if the first failed and no interfaces were successfully bound to. i. data is written to the named pipe as it is received by the server. only one Bind can ever be made on a session whether or not it succeeds or fails. Ultimately. These requests can also be segmented with Transaction Secondary commands. having the same FID) are fields in the SMB header representing a process id (PID) and multiplex id (MID).Context ID 82 .0. The PID represents the process this request is a part of.Open Write And Close Read Read Block Raw Write Block Raw Windows (all versions) Accepts all of the above commands under an IPC$ tree. An MID represents different sub-processes within a process (or under a PID). It is necessary to track this so as not to munge these requests together (which would be a potential evasion opportunity). all previous interface bindings are invalidated. What distinguishes them (when the same named pipe is being written to.g. Any binding after that must use the Alter Context request. Windows (all versions) Uses a combination of PID and MID to define a ”thread”.0. Samba (all versions) Uses just the MID to define a ”thread”. Samba 3. Windows (all versions) For all of the Windows versions. e. If another Bind is made. Segments for each ”thread” are stored separately and written to the named pipe when all segments are received. login/logoff and tree connect/tree disconnect. all previous interface bindings are invalidated. If a Bind after a successful Bind is made. AndX command chaining Windows is very strict in what command combinations it allows to be chained. the client has to explicitly send one of the Read* requests to tell the server to send the response and (2) a Transaction request is not written to the named pipe until all of the data is received (via potential Transaction Secondary requests) whereas with the Write* commands. Multiple Transaction requests can be made simultaneously to the same named pipe. Samba. is very lax and allows some nonsensical combinations. Samba (all versions) The byte order of the stub data is that which is used in the request carrying the stub data.. Windows Vista The opnum that is ultimately used for the request is contained in the first fragment. The context id field in any other fragment can contain any value. Samba (all versions) Windows 2000 Windows 2003 Windows XP The opnum that is ultimately used for the request is contained in the last fragment.Each fragment in a fragmented request carries the context id of the bound interface it wants to make the request to. Samba (all versions) The context id that is ultimately used for the request is contained in the last fragment. The opnum .Operation number Each fragment in a fragmented request carries an operation number (opnum) which is more or less a handle to a function offered by the interface. Windows (all versions) The byte order of the stub data is that which was used in the Bind request. The context id field in any other fragment can contain any value. Global Configuration preprocessor dcerpc2 The global dcerpc2 configuration is required. DCE/RPC Fragmented requests . The global preprocessor configuration name is dcerpc2 and the server preprocessor configuration name is dcerpc2 server. DCE/RPC Stub data byte order The byte order of the stub data is determined differently for Windows and Samba. Windows (all versions) The context id that is ultimately used for the request is contained in the first fragment. The opnum field in any other fragment can contain any value. ) memcap Only one event. A value of 0 supplied as an argument to this option will. This option is useful in inline mode so as to potentially catch an exploit early before full defragmentation is done. smb. co] events [memcap. Default is set to -1. cl Stands for connectionless DCE/RPC. in effect. co. Default is 100 MB. Alert on events related to connection-oriented DCE/RPC processing. When the preprocessor is disabled only the memcap option is applied when specified with the configuration. By default this value is turned off. it is truncated before being added to the defragmentation module. events Specifies the classes of events to enable. If the memcap is reached or exceeded. disable defrag Tells the preprocessor not to do DCE/RPC defragmentation. Run-time memory includes any memory allocated after configuration. Option examples memcap 30000 max_frag_len 16840 events none events all events smb events co events [co] events [smb. If a fragment is greater than this size. co Stands for connection-oriented DCE/RPC. Default is disabled. disable this option. max frag len Specifies the maximum fragment size that will be added to the defragmention module. reassemble threshold Specifies a minimum number of bytes in the DCE/RPC desegmentation and defragmentation buffers before creating a reassembly packet to send to the detection engine. cl] reassemble_threshold 500 84 . The allowed range for this option is 1514 . alert.65535. smb Alert on events related to SMB processing. Alert on events related to connectionless DCE/RPC processing.’ event-list "memcap" | "smb" | "co" | "cl" 0-65535 Option explanations memcap Specifies the maximum amount of run-time memory that can be allocated.memcap max-frag-len events pseudo-event event-list event re-thresh = = = = = = = 1024-4194303 (kilobytes) 1514-65535 pseudo-event | event | ’[’ event-list ’]’ "none" | "all" event | event ’. Default is to do defragmentation. disabled Disables the preprocessor. (See Events section for an enumeration and explanation of events.. When processing DCE/RPC traffic.conf CANNOT be used. the defaults will be used. the default configuration is used if no net configurations match.0.’. A dcerpc2 server configuration must start with default or net options. Zero or more net configurations can be specified. At most one default configuration can be specified.20" "none" | detect-opt | ’[’ detect-list ’]’ detect-opt | detect-opt ’. rpc-over-http-server 593] autodetect [tcp 1025:. default values will be used for the default configuration. udp 135. tcp 135. smb. Option syntax Option default net policy detect Argument NONE <net> <policy> <detect> Required YES YES NO NO Default NONE NONE policy WinXP detect [smb [139.’ port-list port | port-range ’:’ port | port ’:’ | port ’:’ port 0-65535 85 .22" | "Samba-3. A net configuration matches if the packet’s server IP address matches an IP address or net specified in the net configuration. The default and net options are mutually exclusive. events [memcap.’ detect-list transport | transport port-item | transport ’[’ port-list ’]’ "smb" | "tcp" | "udp" | "rpc-over-http-proxy" | "rpc-over-http-server" port-item | port-item ’. it will override the default configuration. For any dcerpc2 server configuration. If a net configuration matches. events smb memcap 50000. If no default configuration is specified. The net option supports IPv6 addresses. co. if non-required options are not specified. Note that port and ip variables defined in snort. cl]. max_frag_len 14440 disable_defrag. udp 1025:.0. events [memcap.445]. memcap 300000. Default is ”WinXP”.. RPC over HTTP server. This option is useful if the RPC over HTTP proxy configured with the detect option is only used to proxy DCE/RPC traffic.TCP/UDP. Note that most dynamic DCE/RPC ports are above 1024 and ride directly over TCP or UDP. autodetect Specifies the DCE/RPC transport and server ports that the preprocessor should attempt to autodetect on for the transport. A value of 0 disables this option. The configuration will only apply to the IP addresses and nets supplied as an argument. smb max chain Specifies the maximum amount of AndX command chaining that is allowed before an alert is generated. Default is empty. This value can be set from 0 to 255. RPC over HTTP proxy and lastly SMB. detect Specifies the DCE/RPC transport and server ports that should be detected on for the transport.’ share-list word | ’"’ word ’"’ | ’"’ var-word ’"’ graphical ASCII characters except ’. Option examples 86 .’ ’"’ ’]’ ’[’ ’$’ graphical ASCII characters except ’. policy Specifies the target-based policy to use when processing. Option explanations default Specifies that this configuration is for the default server configuration. shares with ’$’ must be enclosed quotes. This is because the proxy is likely a web server and the preprocessor should not look at all web traffic.’ ’"’ ’]’ ’[’ 0-255 Because the Snort main parser treats ’$’ as the start of a variable and tries to expand it. The autodetect ports are only queried if no detect transport/ports match the packet. It would be very uncommon to see SMB on anything other than ports 139 and 445. Defaults are ports 139 and 445 for SMB. Default is to autodetect on RPC over HTTP proxy detect ports. 593 for RPC over HTTP server and 80 for RPC over HTTP proxy. 135 for TCP and UDP. no autodetect http proxy ports By default. the preprocessor will always attempt to autodetect for ports specified in the detect configuration for rpc-over-http-proxy. Defaults are 1025-65535 for TCP. Default maximum is 3 chained commands. The order in which the preprocessor will attempt to autodetect will be .shares share-list share word var-word max-chain = = = = = = share | ’[’ share-list ’]’ share | share ’. UDP and RPC over HTTP server. rpc-over-http-server [1025:6001. policy WinXP. udp] autodetect [tcp 2025:.6005:]] smb_invalid_shares private smb_invalid_shares "private" smb_invalid_shares "C$" smb_invalid_shares [private.10.10.0/24.4. policy WinVista. autodetect [tcp.4.445]. \ autodetect [tcp 1025:. \ smb_invalid_shares ["C$". policy Samba.10.168.0.168. smb_max_chain 1 preprocessor dcerpc2_server: \ net [10. "C$"] smb_max_chain 1 Configuration examples preprocessor dcerpc2_server: \ default preprocessor dcerpc2_server: \ default. rpc-over-http-server [593. policy Win2000 preprocessor dcerpc2_server: \ default. detect [smb.4.10. no_autodetect_http_proxy_ports preprocessor dcerpc2_server: \ net [10.net 192. udp 2025:] autodetect [tcp 2025:.feab:45b3::/126]. tcp [135.56.11.255.0. udp.6002:6004]] autodetect none autodetect tcp autodetect [tcp] autodetect tcp 2025: autodetect [tcp 2025:] autodetect tcp [2025:3001. smb_max_chain 3 Complete dcerpc2 default configuration 87 .0/24] net 192. feab:45b3:ab92:8ac4:d322:007f:e5aa:7845] policy Win2000 policy Samba-3.445]] detect [smb. tcp 135.0/255.445] detect [smb [139.2103]] detect [smb [139.0/24.0. udp 1025:.0. policy Win2000 preprocessor dcerpc2_server: \ net [10. tcp] detect [smb 139. "D$". "C$"] smb_invalid_shares ["private".4.0/24.11.22 detect none detect smb detect [smb] detect smb 445 detect [smb 445] detect smb [139.0/24. rpc-over-http-server 593].10.57].255. tcp 135. udp 135. policy WinVista.445]. rpc-over-http-proxy [1025:6001. "ADMIN$"] preprocessor dcerpc2_server: net 10. autodetect none Default server configuration preprocessor dcerpc2_server: default. "ADMIN$"].0. rpc-over-http-server 1025:]. autodetect tcp 1025:. \ smb_invalid_shares ["C$".10 net 192. \ detect [smb [139.3003:] autodetect [tcp [2025:3001. udp 135.feab:45b3::/126]. policy Win2000. tcp. rpc-over-http-proxy 8081].3003:]] autodetect [tcp. tcp].0.0.168.6005:]]. detect smb. feab:45b3::/32] net [192. \ detect [smb.4.168.168.0/24 net [192.168. tcp 135. If this field is zero. the preprocessor will alert. The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command data size specified in the command header. Some commands. Many SMB commands have a field containing an offset from the beginning of the SMB header to where the data the command is carrying starts. smb_max_chain 3 Events The preprocessor uses GID 133 to register events. \ detect [smb [139.preprocessor dcerpc2: memcap 102400 preprocessor dcerpc2_server: \ default. preprocessor will alert. The preprocessor will alert if the total data count specified in the SMB command header is less than the data size specified in the SMB command header. Negative Response (only from server). If a command requires this and the byte count is less than the minimum required byte count for that command. The word count of the command header is invalid. especially the commands from the SMB Core implementation require a data format field that specifies the kind of data that will be coming next. The SMB id does not equal \xffSMB. SMB events SID 2 Description An invalid NetBIOS Session Service type was specified in the header. Request (only from client). have a field containing the total amount of data to be transmitted. the preprocessor will alert. Note that since the preprocessor does not yet support SMB2. Positive Response (only from server). rpc-over-http-server 593]. (Total data count must always be greater than or equal to current data size. such as Transaction. If this offset puts us before data that has already been processed or after the end of payload. the preprocessor will alert. id of \xfeSMB is turned away before an eventable point is reached. Memcap events SID 1 Description If the memory cap is reached and the preprocessor is configured to alert. An SMB message type was specified in the header. Some SMB commands. udp 135. Valid types are: Message. policy WinXP.) 3 4 5 6 7 8 9 10 11 12 13 14 88 . The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command byte count specified in the command header. The preprocessor will alert if the format is not that which is expected for that command.445]. Some commands require a minimum number of bytes after the command header. \ autodetect [tcp 1025:. Either a request was made by the server or a response was given by the client. rpc-over-http-server 1025:]. The preprocessor will alert if the NetBIOS Session Service length field contains a value less than the size of an SMB header. Retarget Response (only from server) and Keep Alive. Some commands require a specific format for the data. udp 1025:. An Open AndX or Nt Create AndX command is used to open/create a file or named pipe. essentially connects to a share and disconnects from the same share in the same request and is anomalous behavior. A Tree Connect AndX command is used to connect to a share. so it need to be queued with the request and dequeued with the response. When a Session Setup AndX request is sent to the server. After a client is done writing data using the Write* commands.). The preprocessor will alert if the byte count minus a predetermined amount based on the SMB command is not equal to the data size. (The preprocessor is only interested in named pipes as this is where DCE/RPC requests are written to. With AndX command chaining it is possible to chain multiple Session Setup AndX commands within the same request. however Samba does. Windows does not allow this behavior. The Read* request contains the file id associated with a named pipe instance that the preprocessor will ultimately send the data to.) The Close command is used to close that file or named pipe. essentially opens and closes the named pipe in the same request and is anomalous behavior. however. With AndX command chaining it is possible to chain multiple Tree Connect AndX commands within the same request.) Some of the Core Protocol commands (from the initial SMB implementation) require that the byte count be some value greater than the data size exactly. A Logoff AndX request is sent by the client to indicate it wants to end the session and invalidate the login handle. This is anomalous behavior and the preprocessor will alert if it happens. This is anomalous behavior and the preprocessor will alert if it happens. There is. Unlike the Tree Connect AndX response. The preprocessor will alert if it sees this. the login handle returned by the server is used for the subsequent chained commands. essentially logins in and logs off in the same request and is anomalous behavior. This is used by the client in subsequent requests to indicate that it has authenticated. There should be under normal circumstances no more than a few pending tree connects at a time and the preprocessor will alert if this number is excessive. The combination of a Tree Connect AndX command with a chained Tree Disconnect command. Windows does not allow this behavior. however. The combination of a Session Setup AndX command with a chained Logoff AndX command. In this case the preprocessor is concerned with the server response. only one place in the SMB header to return a tree handle (or Tid). only one place in the SMB header to return a login handle (or Uid). there is no indication in the Tree Connect response as to whether the share is IPC or not. they are responded to in the order they were sent. (The byte count must always be greater than or equal to the data size. however. It looks for a Tree Connect or Tree Connect AndX to the share. it issues a Read* command to the server to tell it to send a response to the data it has written. however Samba does. the server responds (if the client successfully authenticates) which a user id or login handle. With commands that are chained after a Session Setup AndX request. The preprocessor will alert if it sees any of the invalid SMB shares configured. 89 . The combination of a Open AndX or Nt Create AndX command with a chained Close command. does not contain this file id. There is. The server response. The preprocessor will alert if it sees this. There should be under normal circumstances no more than a few pending Read* requests at a time and the preprocessor will alert if this number is excessive.15 16 17 18 19 20 21 22 23 24 25 26 The preprocessor will alert if the total amount of data sent in a transaction is greater than the total data count specified in the SMB command header. The Tree Disconnect command is used to disconnect from that share. The preprocessor will alert if it sees this. The preprocessor will alert if the number of chained commands in a single request is greater than or equal to the configured amount (default is 3). If multiple Read* requests are sent to the server. The preprocessor will alert if the byte count specified in the SMB command header is less than the data size specified in the SMB command. Rule Options New rule options are supported by enabling the dcerpc2 preprocessor: dce_iface dce_opnum 90 . The byte order of the request data is determined by the Bind in connection-oriented DCE/RPC for Windows. wrapping the sequence number space produces strange behavior from the server. this number should stay the same for all fragments. The preprocessor will alert if in a Bind or Alter Context request.. so this should be considered anomalous behavior. The call id for a set of fragments in a fragmented request should stay the same (it is incremented for each complete request). there are no transfer syntaxes to go with the requested interface. The preprocessor will alert if the connectionless DCE/RPC PDU type is not a valid PDU type. The preprocessor will alert if a non-last fragment is less than the size of the negotiated maximum fragment length. The preprocessor will alert if the remaining fragment length is less than the remaining packet size. The preprocessor will alert if it changes in a fragment mid-request. If a request is fragmented. The context id is a handle to a interface that was bound to. In testing. The preprocessor will alert if the opnum changes in a fragment mid-request. The preprocessor will alert if a fragment is larger than the maximum negotiated fragment length.. The preprocessor will alert if the context id changes in a fragment mid-request. If a request if fragmented. 35 36 37 38 39 Connectionless DCE/RPC events SID 40 41 42 43 Description The preprocessor will alert if the connectionless DCE/RPC major version is not equal to 4.Connection-oriented DCE/RPC events SID 27 28 29 30 31 32 33 34 Description The preprocessor will alert if the connection-oriented DCE/RPC major version contained in the header is not equal to 5. The preprocessor will alert if in a Bind or Alter Context request. there are no context items specified. The operation number specifies which function the request is calling on the bound interface. It is anomalous behavior to attempt to change the byte order mid-session. The preprocessor will alert if the packet data length is less than the size of the connectionless header. The preprocessor will alert if the fragment length defined in the header is less than the size of the header. this number should stay the same for all fragments. For each Bind and Alter Context request. Flags (and a field in the connectionless header) are set in the DCE/RPC header to indicate whether the fragment is the first. a DCE/RPC request can be broken up into 1 or more fragments. will be looking at the wrong data on a fragment other than the first. A rule which is looking for data. any_frag. This can eliminate false positives where more than one service is bound to successfully since the preprocessor can correlate the bind UUID to the context id used in the request. Many checks for data in the DCE/RPC request are only relevant if the DCE/RPC request is a first fragment (or full request). say 5 bytes into the request (maybe it’s a length field). since subsequent fragments will contain data deeper into the DCE/RPC request. it will specify the context id so the server knows what service the client is making a request to. An interface contains a version. Each interface is represented by a UUID. This option requires tracking client Bind and Alter Context requests as well as server Bind Ack and Alter Context responses for connection-oriented DCE/RPC in the preprocessor. Instead of using flow-bits. i. Optional arguments are an interface version and operator to specify that the version be less than (’<’). specify one or more service interfaces to bind to. The any frag argument says to evaluate for middle and last fragments as well. since the beginning of subsequent fragments are already offset some length from the beginning of the request. <operator><version>][. it can. equal to (’=’) or not equal to (’!’) the version specified. This option is used to specify an interface UUID. dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188. Also. dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188. Syntax dce_iface:<uuid>[. However. if the any frag option is used to specify evaluating on all fragments. The representation of the interface UUID is different depending on the endianness specified in the DCE/RPC previously requiring two rules .e.one for big endian and one for little endian. The preprocessor eliminates the need for two rules by normalizing the UUID. <2. When a client makes a request. When a client sends a bind request to the server. any_frag]. whether or not the client has bound to a specific interface UUID and whether or not this client request is making a request to it. any_frag. by default the rule will only be evaluated for a first fragment (or full request. dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188. It is necessary for a client to bind to a service before being able to make a call to it. This can be a source of false positives in fragmented DCE/RPC traffic. however. a middle or the last fragment. Some versions of an interface may not be vulnerable to a certain exploit. Also. By default it is reasonable to only evaluate if the request is a first fragment (or full request). The server will respond with the interface UUIDs it accepts as valid and will allow the client to make requests to those services.. greater than (’>’). not a fragment) since most rules are written to start at the beginning of a request. =1. the client specifies a list of interface UUIDs along with a handle 91 . a rule can simply ask the preprocessor. A DCE/RPC request can specify whether numbers are represented as big endian or little endian. using this rule option. Each interface UUID is paired with a unique index (or context id) that future requests can use to reference the service that the client is making a call to. Note that a defragmented DCE/RPC request will be considered a full request. The server response indicates which interfaces it will allow the client to make requests to . |05 00 02| will be inserted into the fast pattern matcher and (3) if the flow isn’t known. it will unequivocally be used over the above mentioned patterns.it either accepts or rejects the client’s wish to bind to a certain interface.’ opnum-list opnum | opnum-range opnum ’-’ opnum 0-65535 Examples 92 . If a content in the rule uses the fast pattern rule option. the interface UUID. . For TCP rules. the version operation is. For UDP rules. It is likely that an exploit lies in the particular DCE/RPC function call. dce opnum The opnum represents a specific function call to an interface. (2) if the rule option flow:from server|to client is used.(or context id) for each interface UUID that will be used during the DCE/RPC session to reference the interface.. Syntax dce_opnum:<opnum-list>.dce iface) usually we want to know what function call it is making to that service. |05 00 00| will be inserted into the fast pattern matcher. in both big and little endian format will be inserted into the fast pattern matcher. As an example. opnum-list opnum-item opnum-range opnum = = = = opnum-item | opnum-item ’. ! △NOTE Using this rule option will automatically insert fast pattern contents into the fast pattern matcher. This tracking is required so that when a request is processed. (1) if the rule option flow:to server|from client is used. Note that if the rule already has content rule options in it. the best (meaning longest) pattern will be used. -10. This option takes no arguments. byte test Syntax byte_test:<convert>. convert offset mult_value adjustment_value = = = = 1 | 2 | 4 (only with option "dce") -65535 to 65535 0 .-4.e. string. This option matches if there is DCE/RPC stub data. There are no arguments to this option. dce. hex. dce_opnum:15. relative][. >. regardless of preceding rule options.relative. hex. little. [!]<operator>. the following normal byte test arguments will not be allowed: big. but since the DCE/RPC preprocessor will know the endianness of the request. it will be able to do the correct conversion. little. byte test and byte jump with dce A DCE/RPC request can specify whether numbers are represented in big or little endian.dce_opnum:15. This option is used to specify an opnum (or operation number). The opnum of a DCE/RPC request will be matched against the opnums specified with this option. dce_opnum:15. byte jump Syntax byte_jump:<convert>. this option will alleviate this need and place the cursor at the beginning of the DCE/RPC stub data. 18-20. <offset>[. string.align.post_offset -4. !=. <offset> [. 17. This option matches if any one of the opnums specified match the opnum of the DCE/RPC request. 93 . dec and oct.65535 -65535 to 65535 Example byte_jump:4. opnum range or list containing either or both opnum and/or opnum-range. byte_test:2. align][. When using the dce argument to a byte test. 2280. When using the dce argument to a byte jump. relative.multiplier 2.4294967295 -65535 to 65535 Examples byte_test:4. These rule options will take as a new argument dce and will work basically the same as the normal byte test/byte jump. This option is used to place the cursor (used to walk the packet payload in rules processing) at the beginning of the DCE/RPC stub data. relative]. 20-22. dce. convert operator value offset = = = = 1 | 2 | 4 (only with option "dce") ’<’ | ’=’ | ’>’ | ’&’ | ’ˆ’ 0 . post_offet <adjustment_value>]. This reduces the number of rule option checks and the complexity of the rule. Example dce_stub_data.dce. 35000. the remote procedure call or function call data. 0. <value>. dec. dce. oct and from beginning. dce stub data Since most netbios rules were doing protocol decoding only to get to the DCE/RPC stub data. dce_opnum:15-18. i. multiplier <mult_value>] \ [. relative. the following normal byte jump arguments will not be allowed: big. dce. \ pcre:"/ˆ.dce.{12}(\x00\x00\x00\x00|. This information includes credit card numbers.S.relative.dce.4.align. reference:cve. U.139.2007-1748. \ classtype:attempted-admin. Preprocessor Configuration Sensitive Data configuration is split into two parts: the preprocessor config. \ dce_iface:50abc2a4-574d-40b3-9d66-ee4fd5fba076. dce_stub_data.{12})/sR". reference:bugtraq.23470.-4.Example of rule complexity reduction The following two rules using the new rule options replace 64 (set and isset flowbit) rules that are necessary if the new rule options are not used: alert tcp $EXTERNAL_NET any -> $HOME_NET [135.) alert udp $EXTERNAL_NET any -> $HOME_NET [135.{12})/sR". and the rule options.1024:] \ (msg:"dns R_Dnssrv funcs2 overflow attempt". Dependencies The Stream5 preprocessor must be enabled for the Sensitive Data preprocessor to work. flow:established.14 Sensitive Data Preprocessor The Sensitive Data preprocessor is a Snort module that performs detection and filtering of Personally Identifiable Information (PII).>. reference:cve. \ byte_test:4.1024:] \ (msg:"dns R_Dnssrv funcs2 overflow attempt". \ dce_iface:50abc2a4-574d-40b3-9d66-ee4fd5fba076. The preprocessor config starts with: preprocessor sensitive_data: Option syntax Option alert threshold mask output ssn file alert_threshold = Argument <number> NONE <filename> 1 .dce. A limited regular expression syntax is also included for defining your own PII.2007-1748.to_server.relative. sid:1000068.relative. mask output This option replaces all but the last 4 digits of a detected PII with ”X”s.>. sid:1000069. where an organization’s regulations may prevent them from seeing unencrypted numbers.{12}(\x00\x00\x00\x00|. This option specifies how many need to be detected before alerting.256.2. dce_stub_data. 94 .4.65535 Required NO NO NO Default alert threshold 25 OFF OFF Option explanations alert threshold The preprocessor will alert when any combination of PII are detected in a session. byte_jump:4. Social Security numbers.dce.relative.-4.445. and email addresses.256. byte_jump:4.align. \ byte_test:4.593.to_server. dce_opnum:0-11. flow:established. \ pcre:"/ˆ. \ classtype:attempted-admin. reference:bugtraq.23470. This should be set higher than the highest individual count in your ”sd pattern” rules. This is only done on credit card & Social Security numbers.) 2. dce_opnum:0-11. These numbers can be updated in Snort by supplying a CSV file with the new maximum Group numbers to use. Discover. Syntax sd_pattern:<count>. and Serial sections. There are a few built-in patterns to choose from: credit card The ”credit card” pattern matches 15. The SSNs are expected to have dashes between the Area. us social This pattern matches against 9-digit U.S. Example preprocessor config preprocessor sensitive_data: alert_threshold 25 \ mask_output \ ssn_file ssn_groups_Jan10. but the preprocessor will check matches against the list of currently allocated group numbers. Social Security numbers.ssn file A Social Security number is broken up into 3 sections: Area (3 digits). These numbers may have spaces. Credit card numbers matched this way have their check digits verified using the Luhn algorithm. us social nodashes This pattern matches U. pattern This is where the pattern of the PII gets specified.S. Snort recognizes Social Security numbers issued up through November 2009.255 pattern = any string Option Explanations count This dictates how many times a PII pattern must be matched for an alert to be generated. The count is tracked across all packets in a session. Group (2 digits).and 16-digit credit card numbers. A new rule option is provided by the preprocessor: sd_pattern This rule option specifies what type of PII a rule should detect. By default.csv Rule Options Snort rules are used to specify which PII the preprocessor should look for. or nothing in between groups. count = 1 . dashes. and Serial sections. On a monthly basis. Group. and American Express. and Serial (4 digits). Mastercard. Social Security numbers without dashes separating the Area. SSNs have no check digits. the Social Security Administration publishes a list of which Group numbers are in use for each Area. <pattern>. Group. email 95 . This covers Visa. 2.2. following the format (123)456-7890 Whole rule example: alert tcp $HOME_NET $HIGH_PORTS -> $EXTERNAL_NET $SMTP_PORTS \ (msg:"Credit Card numbers sent over email".This pattern matches against email addresses. There are also many new preprocessor and decoder rules to alert on or drop packets with ”abnormal” encodings. example: ”{3}” matches 3 digits. fields are cleared only if they are non-zero. ? makes the previous character or escape sequence optional. normalizations will only be enabled if the selected DAQ supports packet replacement and is operating in inline mode./configure --enable-normalizer The normalize preprocessor is activated via the conf as outlined below. then it is the definition of a custom PII pattern. Alerts on 5 U. Alerts when 2 social security numbers (with dashes) appear in a session. it is helpful to normalize packets to help minimize the chances of evasion. \\ matches a backslash \{. any normalization statements in the policy config are ignored. phone numbers. Note that in the following. Rules using sd pattern must use GID 138. use the following when configuring Snort: .S. sid:1000. \} matches { and } \? matches a question mark. To enable the normalizer.. Unlike PCRE. ! △NOTE\w in this rule option does NOT match underscores. If a policy is configured for inline test or passive mode. 96 . . metadata:service smtp. Custom PII types are defined using a limited regex-style syntax.credit_card. example: ” ?” matches an optional space.us_social. If the pattern specified is not one of the above built-in patterns. This behaves in a greedy manner. Trying to use other rule options with sd pattern will result in an error message.) Caveats sd pattern is not compatible with other rule options. sd_pattern: 5. \ sd_pattern:4. Examples sd_pattern: 2. rev:1.(\d{3})\d{3}-\d{4}.15 Normalizer When operating Snort in inline mode. gid:138. Other characters in the pattern will be matched literally. Also. Clear the differentiated services field (formerly TOS).IP4 Normalizations IP4 normalizations are enabled with: preprocessor normalize_ip4: [df]. TCP Normalizations TCP normalizations are enabled with: 97 . • TTL normalization if enabled (explained below). • rf reserved flag: clear this bit on incoming packets. • NOP all options octets. • NOP all options octets in hop-by-hop and destination options extension. and any explicitly allowed with the allow keyword. • Remove any data from RST packet. 12. • Clear the urgent pointer and the urgent flag if there is no payload. window scaling. • Clear the urgent pointer if the urgent flag is not set. 13 } <alt_checksum> ::= { 14. • Clear the reserved bits in the TCP header.preprocessor normalize_tcp: \ [ips] [urp] \ [ecn <ecn_type>]. Any segments that can’t be properly reassembled will be dropped. 10 } <conn_count> ::= { 11. 98 . • Clear the urgent flag if the urgent pointer is not set.255) Base normalizations enabled with ”preprocessor normalize tcp” include: • Remove data on SYN. • opts NOP all option bytes other than maximum segment size.. • ecn stream clear ECN flags if usage wasn’t negotiated. 15 } <md5> ::= { 19 } <num> ::= (3. 5 } <echo> ::= { 6. • Set the urgent pointer to the payload length if it is greater than the payload length. • ecn packet clear ECN flags on a per packet basis (regardless of negotiation). 7 } <partial_order> ::= { 9. You can allow options to pass by name or number. Optional normalizations include: • ips ensure consistency in retransmitted data (also forces reassembly policy to ”first”). • Trim data to window. • Clear any option padding bytes. timestamp. \ [opts [allow <allowed_opt>+]] <ecn_type> ::= stream | packet <allowed_opt> ::= \ sack | echo | partial_order | conn_count | alt_checksum | md5 | <num> <sack> ::= { 4. • Trim data to MSS. • urp urgent pointer: don’t adjust the urgent pointer if it is greater than payload length. Should also enable require 3whs. See doc/README. They also allow one to specify the rule type or action of a decoder or preprocessor event on a rule by rule basis. then if a packet is received with a TTL ¡ min ttl.• opts if timestamp is present but invalid. • opts clear TS ECR if ACK flag is not set.255) <new_ttl> ::= (<min_ttl>+1.3 Decoder and Preprocessor Rules Decoder and preprocessor rules allow one to enable and disable decoder and preprocessor events on a rule by rule basis. the drop cases only apply if Snort is running inline. Of course. config enable decode drops. For example. e. Note that this configuration item was deprecated in 2.decode for config options that control decoder events. block the packet. Decoder config options will still determine whether or not to generate decoder events.6: preprocessor stream5_tcp: min_ttl <#> By default min ttl = 1 (TTL normalization is disabled).conf or the decoder or preprocessor rule type is drop. the TTL will be set to new ttl. these options will take precedence over the event type of the rule. A packet will be dropped if either a decoder config drop option is in snort.g.. 99 . 2. Also note that if the decoder is configured to enable drops. or valid but not negotiated.255) If new ttl ¿ min ttl. NOP the timestamp octets. • opts trim payload length to MSS if longer. if config disable decode alerts is in snort.8. decoder events will not be generated regardless of whether or not there are corresponding rules for the event. • opts if timestamp was negotiated but not present. as follows: config min_ttl: <min_ttl> config new_ttl: <new_ttl> <min_ttl> ::= (1. • opts MSS and window scale options are NOP’d if SYN flag is not set. When TTL normalization is turned on the new ttl is set to 5 by default. 2. To change the rule type or action of a decoder/preprocessor rule.2 Reverting to original behavior If you have configured snort to use decoder and preprocessor rules. classtype:protocol-command-decode. and have the names decoder. just comment it with a # or remove the rule completely from the file (commenting is recommended). \ metadata: rule-type decode .map under etc directory is also updated with new decoder and preprocessor rules.3. rev: 1.gre and the various preprocessor READMEs for descriptions of the rules in decoder. gid: 116.conf. var PREPROC_RULE_PATH /path/to/preproc_rules . The gen-msg. The generator ids ( gid ) for different preprocessors and the decoder are as follows: 2.decode.rules To disable any rule. rev: 1. just replace alert with the desired rule type. README. These files are updated as new decoder and preprocessor events are added to Snort.1 Configuring The following options to configure will enable decoder and preprocessor rules: $ .) to drop ( msg: "DECODE_NOT_IPV4_DGRAM"../configure --enable-decoder-preprocessor-rules The decoder and preprocessor rules are located in the preproc rules/ directory in the top level source tree. gid: 116.. define the path to where the rules are located and uncomment the include lines in snort. classtype:protocol-command-decode. include $PREPROC_RULE_PATH/preprocessor.rules.conf that reference the rules files.rules respectively.rules include $PREPROC_RULE_PATH/decoder. sid: 1.conf will make Snort revert to the old behavior: config autogenerate_preprocessor_decoder_rules 100 . To enable these rules in snort. \ metadata: rule-type decode . Any one of the following rule types can be used: alert log pass drop sdrop reject For example one can change: alert ( msg: "DECODE_NOT_IPV4_DGRAM".rules and preprocessor. sid: 1.) to drop (as well as alert on) packets where the Ethernet protocol is IPv4 but version field in IPv4 header has a value other than 4.rules and preprocessor. See README. the following config option in snort.3. you also have to remove the decoder and preprocessor rules and any reference to them from snort.conf.1 Rate Filtering rate filter provides rate based attack prevention by allowing users to configure a new action to take for a specified time when a given rate is exceeded.4. This option applies to rules not specified and the default behavior is to alert. 2. • Event Filters You can use event filters to reduce the number of logged events for noisy rules. 101 . otherwise they will be loaded.4 Event Processing Snort provides a variety of mechanisms to tune event processing to suit your needs: • Detection Filters You can use detection filters to specify a threshold that must be exceeded before a rule generates an event. and the first applicable action is taken. 2. • Rate Filters You can use rate filters to change a rule action when the number or rate of events indicates a possible attack. in which case they are evaluated in the order they appear in the configuration file. This is covered in section 3. Multiple rate filters can be defined on the same rule.10.7. • Event Suppression You can completely suppress the logging of unintersting events. This can be tuned to significantly reduce false alarms. Option track by src | by dst | by rule Description rate is tracked either by source IP address. track by rule and apply to may not be used together. sig_id 1. \ count <c>. source and destination means client and server respectively.allow a maximum of 100 successful simultaneous connections from any one IP address. rate filter may be used to detect if the number of connections to a specific server exceed a specific count. for each unique destination IP address. \ track <by_src|by_dst|by_rule>. \ count 100. reject. revert to the original rule action after t seconds. track by rule and apply to may not be used together. 0 seconds means count is a total count instead of a specific rate.all are required except apply to. If t is 0. destination IP address. \ track by_src. and block further connection attempts from that IP address for 10 seconds: rate_filter \ gen_id 135. count c seconds s new action alert | drop | pass | log | sdrop | reject timeout t apply to <ip-list> Examples Example 1 . apply_to <ip-list>] The options are described in the table below . Note that events are generated during the timeout period. \ new_action drop. the maximum number of rule matches in s seconds before the rate filter limit to is exceeded. even if the rate falls below the configured limit. seconds <s>. c must be nonzero value. restrict the configuration to only to source or destination IP address (indicated by track parameter) determined by <ip-list>. drop. which is optional.Format Rate filters are used as standalone configurations (outside of a rule) and have the following format: rate_filter \ gen_id <gid>. \ new_action alert|drop|pass|log|sdrop|reject. and sdrop can be used only when snort is used in inline mode. then rule action is never reverted back. the time period over which count is accrued. sig_id <sid>. or they are aggregated at rule level. For example. timeout 10 Example 2 . or by rule. and block further connections from that IP address for 10 seconds: 102 . An event filter may be used to manage number of alerts after the rule action is enabled by rate filter.allow a maximum of 100 connection attempts per second from any one IP address. For rules related to Stream5 sessions. new action replaces rule action for t seconds. \ timeout <seconds> \ [. sdrop and reject are conditionally compiled with GIDS. This means the match statistics are maintained for each unique source IP address. seconds 1. 0 seconds only applies to internal rules (gen id 135) and other use will produce a fatal error by Snort. seconds <s> threshold is an alias for event filter. then the filter applies to all rules. event filters with sig id 0 are considered ”global” because they apply to all rules with the given gen id.rate_filter \ gen_id 135. (gen id 0. Snort will terminate with an error while reading the configuration information. \ track by_src. 103 . \ count <c>. Both formats are equivalent and support the options described below . then ignores any additional events during the time interval. \ new_action drop. Such events indicate a change of state that are significant to the user monitoring the network.4. If more than one event filter is Only one event applied to a specific gen id. \ type <limit|threshold|both>. sig_id <sid>. \ count <c>. the global filtering test is applied. If gen id is also 0. Standard filtering tests are applied first. Thresholds in a rule (deprecated) will override a global event filter. This can be tuned to significantly reduce false alarms. then ignores events for the rest of the time interval. There are 3 types of event filters: • limit Alerts on the 1st m events during the time interval. ! △NOTE can be used to suppress excessive rate filter alerts. \ track <by_src|by_dst>. timeout 10 2. however. seconds 0. • threshold Alerts every m times we see this event during the time interval. sig id != 0 is not allowed).all are required. Global event filters do not override what’s in a signature or a more specific stand-alone event filter. sig_id 2. threshold is deprecated and will not be supported in future releases. \ count 100. Format event_filter \ gen_id <gid>. ! △NOTE filter may be defined for a given gen id. if they do not block an event from being logged. \ track <by_src|by_dst>. the first new action event event filters of the timeout period is never suppressed. sig_id <sid>. seconds <s> threshold \ gen_id <gid>.2 Event Filtering Event filtering can be used to reduce the number of logged alerts for noisy rules by limiting the number of times a particular event is logged during a specified time interval. \ type <limit|threshold|both>. sig id pair. sig id. • both Alerts once per time interval after seeing m occurrences of the event. Type both alerts once per time interval after seeing m occurrences of the event. or destination IP address. seconds 60 104 \ . seconds 60 Limit logging to every 3rd event: event_filter \ gen_id 1. triggering each rule for each event generator: event_filter \ gen_id 0. time period over which count is accrued.Option gen id <gid> sig id <sid> type limit|threshold|both track by src|by dst count c seconds s Description Specify the generator ID of an associated rule. s must be nonzero value. gen id 0. type limit alerts on the 1st m events during the time interval. c must be nonzero value. number of rule matching in s seconds that will cause event filter limit to be exceeded. sig_id 1852. sig_id 1851. Type threshold alerts every m times we see this event during the time interval. track by_src. Examples Limit logging to 1 event per 60 seconds: event_filter \ gen_id 1. Ports or anything else are not tracked. Specify the signature ID of an associated rule. but only if we exceed 30 events in 60 seconds: event_filter \ gen_id 1. sig id 0 can be used to specify a ”global” threshold that applies to all rules. \ type threshold. sig_id 0. \ count 3. track by_src. seconds 60 Limit to logging 1 event per 60 seconds per IP. sig_id 1853. sig_id 0. track by_src. This means count is maintained for each unique source IP addresses. sig id 0 specifies a ”global” filter because it applies to all sig ids for the given gen id. \ count 1. rate is tracked either by source IP address. track by_src. then ignores events for the rest of the time interval. count 1. then ignores any additional events during the time interval. \ count 30. \ count 1. \ type both. or for each unique destination IP addresses. seconds 60 Limit to logging 1 event per 60 seconds per IP triggering each rule (rule gen id is 1): event_filter \ gen_id 1. track by_src. \ type limit. \ type limit. seconds 60 Limit logging to just 1 event per 60 seconds. \ type limit. sig_id 1852: Suppress this event from this IP: 105 . Users can also configure a memcap for threshold with a “config:” option: config event_filter: memcap <bytes> # this is deprecated: config threshold: memcap <bytes> 2. You may apply multiple suppressions to a non-zero SID. Specify the signature ID of an associated rule. sig_id <sid>. Suppression tests are performed prior to either standard or global thresholding tests. sig id 0 can be used to specify a ”global” threshold that applies to all rules.4. This allows a rule to be completely suppressed. This is optional. but if present. sig id 0 specifies a ”global” filter because it applies to all sig ids for the given gen id. \ track <by_src|by_dst>. If track is provided.3 Event Suppression Event suppression stops specified events from firing without removing the rule from the rule base. Examples Suppress this event completely: suppress gen_id 1. SIDs. and IP addresses via an IP list . gen id 0. or suppressed when the causative traffic is going to or coming from a specific IP or group of IP addresses. sig_id <sid>.map for details on gen ids. ip must be provided as well. Suppress by source IP address or destination IP address.Events in Snort are generated in the usual way. Restrict the suppression to only source or destination IP addresses (indicated by track parameter) determined by ¡list¿. Suppression uses an IP list to select specific networks and users for suppression. ip must be provided as well. You may also combine one event filter and several suppressions to the same non-zero SID. Read genmsg. \ suppress \ gen_id <gid>. event filters are handled as part of the output system. ip <ip-list> Option gen id <gid> sig id <sid> track by src|by dst ip <list> Description Specify the generator ID of an associated rule. Suppression are standalone configurations that reference generators. Format The suppress configuration has two forms: suppress \ gen_id <gid>. 1. 3. 2. such as max content length or event ordering using the event queue. and rules that have a longer content are ordered before rules with shorter contents.54 Suppress this event to this CIDR block: suppress gen_id 1.The highest priority (1 being the highest) events are ordered first. We currently have two different methods: • priority . content length. track by_src. For example. etc. ip 10. 1.Rules are ordered before decode or preprocessor alerts.0/24 2. track by_dst.suppress gen_id 1. sig_id 1852. only 8 events will be stored for a single packet or stream. if the event queue has a max size of 8. The method in which events are ordered does not affect rule types such as pass. order events This argument determines the way that the incoming events are ordered. The default value is 3.1. log This determines the number of events to log for a given packet or stream. but change event order: config event_queue: order_events priority Use the default event queue values but change the number of logged events: config event_queue: log 2 106 . • content length . max queue This determines the maximum size of the event queue. log.4.4 Event Logging Snort supports logging multiple events per packet/stream that are prioritized with different insertion methods. sig_id 1852. ip 10. The default value is 8. alert. You can’t log more than the max event number that was specified.1. txt 107 . sort by avg ticks. filename perf. save results to performance. based on total time config profile rules: print 100. sort total ticks • Print with default options. These files will be found in the logging directory. sort avg ticks • Print all rules. sorted by number of checks config profile rules: print all.5.txt with timestamp in filename config profile rules: print 20.txt append • Print top 20 rules.txt append • Print the top 10 rules. sort by avg ticks (default configuration if option is turned on) config profile rules • Print all rules.1 Rule Profiling Format config profile_rules: \ print [all | <num>].5 Performance Profiling Snort can provide statistics on rule and preprocessor performance. \ sort <sort_option> \ [.txt config profile rules: filename rules stats. and append to file rules stats.2. 2. When a file name is provided in profile rules or profile preprocs. To use this feature. a new file will be created each time Snort is run. If append is not specified. based on highest average time config profile rules: print 10. Each require only a simple config option to snort.conf and Snort will print statistics on the worst (or all) performers on exit. The filenames will have timestamps appended to them. save results to perf. the statistics will be saved in these files. sort checks • Print top 100 rules. you must build snort with the --enable-perfprofiling option to the configure script. By default. \ sort <sort_option> \ [. that most likely contains PCRE.5. sort total ticks 2.0 90054 45027. A high Avg/Check is a poor performing rule. it makes sense to leave it alone. this information will be printed to the console when Snort exits.0 92458 46229.0 Figure 2. The filenames will have timestamps appended to them. filename <filename> [append]] • <num> is the number of preprocessors to print 108 .. especially those with low SIDs. But.0 Avg/Match Avg/Nonmatch ========= ============ 385698.0 107822 53911.0 0.0 0. print 4.0 53911. the few options may or may not match. High Checks and low Avg/Check is usually an any->any rule with few rule options and no content. a new file will be created each time Snort is run.0 45027.1: Rule Profiling Example Output Output Snort will print a table much like the following at exit.2 Preprocessor Profiling Format config profile_preprocs: \ print [all | <num>]. We are looking at moving some of these into code. Quick to check. The Microsecs (or Ticks) column is important because that is the total time spent evaluating a given rule.0 46229.0 0. These files will be found in the logging directory. if that rule is causing alerts. You can use the ”filename” option in snort.conf to specify a file where this will be written. sort total_ticks The columns represent: • Number (rank) . Configuration line used to print the above table: config profile_rules: \ print 3.txt config profile preprocs: filename preprocs stats.. If ”append” is not specified. app layer header was correct.e. You can use the ”filename” option in snort. should ALWAYS match Checks.conf to specify a file where this will be written. subroutines within preprocessors. sort by avg ticks.txt append • Print the top 10 preprocessors. Because of task swapping. sort avg ticks • Print all preprocessors. sorted by number of checks config profile preprocs: Output Snort will print a table much like the following at exit.The number is indented for each layer. sort checks . this identifies the percent of the caller’s ticks that is spent for this subtask.For non layer 0 preprocessors. based on highest average time config profile preprocs: print 10. The filenames will have timestamps appended to them. this information will be printed to the console when Snort exits. a new file will be created each time Snort is run.When printing a specific number of preprocessors all subtasks info for a particular preprocessor is printed for each layer 0 preprocessor stat. Layer 1 preprocessors are listed under their respective caller (and sorted similarly). It does give a reasonable indication of how much relative time is spent within each subtask. i. and append to file preprocs stats. These files will be found in the logging directory. non-instrumented code. sort by avg ticks (default configuration if option is turned on) config profile preprocs • Print all preprocessors. By default. etc) • Exits (number of corresponding exits – just to verify code is instrumented correctly. and other factors. ports matched. 109 print all. unless an exception was trapped) • CPU Ticks • Avg Ticks per Check • Percent of caller . the Pct of Caller field will not add up to 100% of the caller’s time. • Preprocessor Name • Layer . 24 0.07 17.01 0.06 0.86 4.78 2.85 84.87 71.14 0.08 0.10 1.12 0.22 15657.01 0.00 65.40 28.00 0.59 0.00 0.00 0.29 2.00 0.07 6.34 0.16 1.32 0.92 3.84 0.23 21.73 1.00 0.62 17.02 0.02 47.73 1.00 4.12 12.00 0.01 19.57 1.51 2.03 8.30 0.06 0.83 0.06 0.70 0.58 0.59 19.00 0.10 0.00 0.00 0.70 0.00.62 3.77 0.16 0.94 3.81 39.11 0.16 0.00 0.33 8.41 0.01 0.80 0.15 0.00 0.17 18.53 21.37 0.00 0.2: Preprocessor Profiling Example Output 110 .00 0.02 0.17 21.77 39.06 0.89 2.00 0.07 0.02 11.04 0.14 25.66 0.20 0.06 3.00 0.70 0.46 99.03 0.79 0.39 13.01 0.27 0.00 0.68 38.84 0.16 0.78 1.17 21.72 1.00 Figure 2.81 93.56 0.53 21.20 19.88 44.00 0.01 0.34 1.04 0.81 6.00 0.56 39.51 6.01 0.25 77.09 0.94 99.08 9.20 34.21 1.04 0.02 0.43 39.14 307.32 0.89 0.77 0.00 0.37 0.87 0.06 0.65 1.91 15.12 0.34 0.20 47.00 0. \ fastpath-expensive-packets.2..3 Packet Performance Monitoring (PPM) PPM provides thresholding mechanisms that can be used to provide a basic level of latency control for snort. sample output. or together in just one config ppm statement.5. \ suspend-timeout <seconds>. \ threshold count. \ rule-log [log] [alert] Packets and rules can be configured separately. The following sections describe configuration. you must build with the –enable-ppm or the –enable-sourcefire option to configure. and some implementation details worth noting. so one or both or neither may be enabled. PPM is configured as follows: # Packet configuration: config ppm: max-pkt-time <micro-secs>. \ suspend-expensive-rules. \ debug-pkts # Rule configuration: config ppm: max-rule-time <micro-secs>. as above. . Both rules and packets can be checked for latency. To use PPM. \ pkt-log. Packet and rule monitoring is independent. Example 2: The following suspends rules and aborts packet inspection. then no action is taken other than to increment the count of the number of packets that should be fastpath’d or the rules that should be suspended. These rules were used to generate the sample output that follows. 112 . A summary of this information is printed out when snort exits. 0438 usecs. fastpath-expensive-packets..633125 usecs Rule Performance Summary: 113 .394 usecs Process-EndPkt[63] PPM: Process-BeginPkt[64] caplen=60 PPM: Pkt[64] Used= 8.. \ pkt-log.. debug-pkt config ppm: \ max-rule-time 50. 0 rules.config ppm: \ max-pkt-time 50... Process-BeginPkt[63] caplen=60 Pkt[63] Used= 8. Sample Snort Exit Output Packet Performance Summary: max packet time : 50 usecs packet events : 1 avg pkt time : 0. 1 nc-rules tested. \ suspend-timeout 300.21764 usecs PPM: Process-EndPkt[64] .3659 usecs PPM: Process-EndPkt[62] PPM: PPM: PPM: PPM: Pkt-Event Pkt[63] used=56. packet fastpathed.15385 usecs PPM: Process-EndPkt[61] PPM: Process-BeginPkt[62] caplen=342 PPM: Pkt[62] Used= 65. suspend-expensive-rules. The output modules are run when the alert or logging subsystems of Snort are called. As with the standard logging and alerting systems.6 Output Modules Output modules are new as of version 1. it is recommended that you tune your thresholding to operate optimally when your system is under load. Latency control is not enforced after each preprocessor. not just the processor time the Snort application receives. Hence the reason this is considered a best effort approach.2675 usecs • Enforcement of packet and rule processing times is done after processing each rule. giving users greater flexibility in logging alerts. They allow Snort to be much more flexible in the formatting and presentation of output to its users. This was a conscious design decision because when a system is loaded. the latency for a packet is based on the total system time. alert) are specified.1 alert syslog This module sends alerts to the syslog facility (much like the -s command line switch). Therefore. Therefore this implementation cannot implement a precise latency guarantee with strict timing guarantees. Output modules are loaded at runtime by specifying the output keyword in the config file: output <name>: <options> output alert_syslog: log_auth log_alert 2. Due to the granularity of the timing measurements any individual packet may exceed the user specified packet or rule processing time limit.6. Multiple output plugins may be specified in the Snort configuration file.6. they are stacked and called in sequence when an event occurs. • This implementation is software based and does not use an interrupt driven timing mechanism and is therefore subject to the granularity of the software based timing tests. after the preprocessors and detection engine. This module also allows the user to specify the logging facility and priority within the Snort config file. not processor usage by Snort. • Time checks are made based on the total system time. The format of the directives in the config file is very similar to that of the preprocessors. Available Keywords Facilities • log auth • log authpriv • log daemon 114 . output plugins send their data to /var/log/snort by default or to a user directed directory (using the -l command line switch).max rule time rule events avg nc-rule time Implementation Details : 50 usecs : 0 : 0. 2. • Since this implementation depends on hardware based high performance frequency counters. latency thresholding is presently only available on Intel and PPC platforms. When multiple plugins of the same type (log. The default host is 127.] \ <facility> <priority> <options> 115 . output alert_syslog: \ [host=<hostname[:<port>].0.0. Example output alert_fast: alert. This output method is discouraged for all but the lightest traffic situations. Inside the logging directory.1.13 for more information.full 116 . By default. The default name is ¡logdir¿/alert.6. • limit: an optional limit on file size which defaults to 128 MB. The creation of these files slows Snort down considerably. The name may include an absolute or relative path. These files will be decoded packet dumps of the packets that triggered the alerts.2 alert fast This will print Snort alerts in a quick one-line format to a specified output file. The default name is ¡logdir¿/alert.1:514.6. The minimum is 1 KB. It is a faster alerting method than full alerts because it doesn’t need to print all of the packet headers to the output file and because it logs to only 1 file.Example output alert_syslog: host=10. Format output alert_full: [<filename> [<limit>]] <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file. <facility> <priority> <options> 2. See 2.1. You may specify ”stdout” for terminal output. Format output alert_fast: [<filename> ["packet"] [<limit>]] <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file. • limit: an optional limit on file size which defaults to 128 MB. See 2. The alerts will be written in the default logging directory (/var/log/snort) or in the logging directory specified at the command line. • packet: this option will cause multiline entries with full packet headers to be logged.6.3 alert full This will print Snort alert messages with full packet headers. only brief single-line entries are logged. The name may include an absolute or relative path.fast 2.6. You may specify ”stdout” for terminal output.13 for more information. The minimum is 1 KB. a directory will be created per IP. Example output alert_full: alert. Parameters are specified with the format parameter = argument. This is currently an experimental interface. see Figure 2. More information on installing and configuring this module can be found on the [91]incident.org web page. The default name is ¡logdir¿/snort.6. port .Host to connect to.Database name 117 .6. the aggregate size is used to test the rollover condition.4 alert unixsock Sets up a UNIX domain socket and sends alert reports to it. If a non-zero-length string is specified. Without a host name. The arguments to this plugin are the name of the database to be logged to and a parameter list. External programs/processes can listen in on this socket and receive Snort alert and packet data in real time.log. A UNIX timestamp is appended to the filename.6. TCP/IP communication is used.3 for example usage. <database type>. <parameter list> The following parameters are available: host . or socket filename extension for UNIX-domain connections.6.13 for more information.2. The name may include an absolute or relative path.log 2. • limit: an optional limit on file size which defaults to 128 MB. dbname .Port number to connect to at the server host.6 database This module from Jed Pickel sends Snort data to a variety of SQL databases. Example output log_tcpdump: snort. it will connect using a local UNIX domain socket. Format database: <log | alert>. When a sequence of packets is to be logged.5 log tcpdump The log tcpdump module logs packets to a tcpdump-formatted file. See 2. Format alert_unixsock Example output alert_unixsock. See section 3. The following fields are logged: timestamp. 118 .not readable requires post processing ascii . but this is still the best choice for some applications. one will be generated automatically encoding .’. So i leave the encoding option to you. If you set the type to log. dbname=snort user=snort host=localhost password=xyz Figure 2. destination port. This is the only option where you will actually lose data.Password used if the database demands password authentication sensor name .impossible without post processing Human readability . and protocol) Furthermore. there is no one simple and portable way to store it in a database. source ip. postgresql.3x the size of the binary Searchability . Setting the type to log attaches the database logging functionality to the log facility within the program. mysql.not readable unless you are a true geek. If you do not specify a name.Represent binary data as an ASCII string. You severely limit the potential of some analysis applications if you choose this option. Non-ASCII Data is represented as a ‘.Specify your own name for this Snort sensor. There are two logging types available.>) Searchability .2x the size of the binary Searchability .Represent binary data as a hex string.<. then data for IP and TCP options will still be represented as hex because it does not make any sense for that data to be ASCII. signature.How much detailed data do you want to store? The options are: full (default) . There are five database types available in the current version of the plugin.output database: \ log. Setting the type to alert attaches the plugin to the alert output chain within the program.very good detail . requires post processing base64 . the plugin will be called on the log output chain.Database username for authentication password . Each has its own advantages and disadvantages: hex (default) .5 for more details. Storage requirements . there is a logging method and database type that must be defined.slightly larger than the binary because some characters are escaped (&. Blobs are not used because they are not portable across databases.7.Represent binary data as a base64 string. Storage requirements . and odbc.3: Database Output Plugin Configuration user . These are mssql. destination ip. You can choose from the following options.∼1.Log only a minimum amount of data. log and alert.Because the packet payload and option data is binary. mysql. If you choose this option. Storage requirements .Log all details of a packet that caused an alert (including IP/TCP options and the payload) fast . source port. oracle.very good Human readability . tcp flags.very good for searching for a text string impossible if you want to search for binary human readability . Set the type to match the database you are using. ! △NOTE The database output plugin does not have the ability to handle alerts that are generated by using the tag keyword. The output fields and their order may be customized. You may specify ”stdout” for terminal output. The name may include an absolute or relative path. <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file.6.csv. Format output alert_csv: [<filename> [<format> [<limit>]]] <format> ::= "default"|<list> <list> ::= <field>(.13 for more information.. The minimum is 1 KB.6. The default name is ¡logdir¿/alert. See 2.2.<field>)* <field> ::= "dst"|"src"|"ttl" . If the formatting option is ”default”. • format: The list of formatting options is below. –. 119 . the output is in the order of the formatting options listed.7 csv The csv output plugin allows alert data to be written in a format easily importable to a database.. 8 unified The unified output plugin is designed to be the fastest possible method of logging Snort events.6. Both file types are written in a binary format described in spo unified. The log file contains the detailed packet information (a packet dump with the associated event ID). Packet logging includes a capture of the entire packet and is specified with log unified2.log. packet logging. simply specify unified2. <limit <file size limit in MB>] output log_unified: <base file name> [. alert logging. The unified output plugin logs events in binary format. It has the same performance characteristics. message id).csv default output alert_csv: /var/log/alert.alert. allowing another programs to handle complex logging mechanisms that would otherwise diminish the performance of Snort. but a slightly different logging format. To include both logging styles in a single. nostamp] [. The alert file contains the high-level details of an event (eg: IPs. protocol. <limit <size in MB>] [.9 unified 2 The unified2 output plugin is a replacement for the unified output plugin. as the unified output plugin creates two different files. or true unified logging. Use option mpls event types to enable this. msg 2. See section 2. limit 128 output log_unified: snort. Likewise. ! △NOTE Files have the file creation time (in Unix Epoch format) appended to each file when it is created. ! △NOTE By default. unified file..csv timestamp.8 on unified logging for more information. If option mpls event types is not used. then MPLS labels will be not be included in unified2 events.6. and a log file. <limit <file size limit in MB>] Example output alert_unified: snort. an alert file. Format output alert_unified2: \ filename <base filename> [. port. Format output alert_unified: <base file name> [. When MPLS support is turned on. limit 128 2. The name unified is a misnomer. MPLS labels can be included in unified2 events. mpls_event_types] 120 .6.Example output alert_csv: /var/log/alert.h. Unified2 can work in one of three modes.. 2. Snort will inspect packets for a connection to 192.3 Attribute Table Example In the example above. FTP. etc). TCP port 22 is ssh (running Open SSH). for a given host entry. Below is a list of the common services used by Snort’s application layer preprocessors and Snort rules (see below).2). and any client attributes are ignored.168.6 is described. On that host. Conversely. 2. The application and version for a given service attribute. and TCP port 2300 is telnet.7.234. and protocol (http.2. That field is not currently used by Snort. if. HTTP Inspect is configured to inspect traffic on port 2300.168.168.234 port 2300 because it is identified as telnet.. The IP stack fragmentation and stream reassembly is mimicked by the ”linux” configuration (see sections 2.1.1 and 2.<VERSION> <ATTRIBUTE_VALUE>6. etc) make use of the SERVICE information for connections destined to that host on that port. HTTP Inspect will NOT process the packets on a connection to 192.8.234 port 2300 as telnet. ssh. etc) are used. a host running Red Hat 2. udp. the stream and IP frag information are both used. Of the service attributes. The confidence metric may be used to indicate the validity of a given service or client application and its respective elements. They will be used in a future release. SMTP. but may be in future releases. This host has an IP address of 192. Telnet. for example. • Application Layer Preprocessors The application layer preprocessors (HTTP.1.1. For example. port. .0</ATTRIBUTE_VALUE> <CONFIDENCE>89</CONFIDENCE> </VERSION> </APPLICATION> </CLIENT> </CLIENTS> </HOST> </ATTRIBUTE_TABLE> </SNORT_ATTRIBUTES> ! △NOTE With Snort 2.1. even if the telnet portion of the FTP/Telnet preprocessor is only configured to inspect port 23. A DTD for verification of the Host Attribute Table XML file is provided with the snort packages. ) • No Alert: Rule Has No Service Metadata. alert tcp any any -> any 23 (msg:"Port 23 traffic".234 port 2300 because the port matches. sid:10000001. flow:to_server.) • No Alert: Rule Has Service Metadata.234 port 2300 because it is identified as telnet. Port Matches The following rule will NOT be inspected and NOT alert on traffic to host 192.234 port 2300 because the port does not match. • Alert: Rule Has Service Metadata. Port Does Not Match The following rule will NOT be inspected and NOT alert on traffic to host 192.168. alert tcp any any -> any 2300 (msg:"Port 2300 traffic". Connection Service Matches One of them The following rule will be inspected and alert on traffic to host 192.established.168. but the service is ssh. flow:to_server. flow:to_server. If there are rules that use the service and other rules that do not but the port matches. sid:10000005. metadata: service telnet. Snort uses the service rather than the port. alert tcp any any -> any 23 (msg:"Telnet traffic".established.established. alert tcp any any -> any 2300 (msg:"SSH traffic".Attribute Table Affect on rules Similar to the application layer preprocessors. flow:to_server. metadata: service telnet. flow:to_server.234 port 2300 because it is identified as telnet.1. metadata: service telnet.168. alert tcp any any -> any 2300 (msg:"Port 2300 traffic". Port Matches The following rule will be inspected and alert on traffic to host 192. sid:10000002. metadata: service ssh.1.1. sid:10000003.1.) alert tcp any any -> any 2300 (msg:"Port 2300 traffic".) • Alert: Rule Has No Service Metadata.168. flow:to_server. Connection Service Does Not Match. The following few scenarios identify whether a rule will be inspected or not.) 126 .established.established.234 port 2300 because that traffic is identified as telnet.1. sid:10000004. rules configured for specific ports that have a service metadata will be processed based on the service identified by the attribute table.168. Snort will ONLY inspect the rules that have the service that matches the connection.established.established. alert tcp any any -> any 23 (msg:"Telnet traffic". Connection Service Matches The following rule will be inspected and alert on traffic to host 192. sid:10000007.) • Alert: Rule Has Multiple Service Metadata.1. Packet has service + other rules with service The first rule will NOT be inspected and NOT alert on traffic to host 192. When both service metadata is present in the rule and in the connection.168. service smtp. sid:10000006. flow:to_server.234 port 2300 because the service is identified as telnet and there are other rules with that service.) • Alert: Rule Has No Service Metadata. (Same effect as --dynamic-detection-lib or --dynamic-detection-lib-dir options). ! △NOTE To disable use of dynamic modules). Or.8.1 Format <directive> <parameters> 2.6. followed by the full or relative path to a directory of preprocessor shared libraries. Tells snort to load the dynamic engine shared library (if file is used) or all dynamic engine shared libraries (if directory is used). Snort must be configured with the --disable-dynamicplugin flag. Specify file. (Same effect as --dynamic-engine-lib or --dynamic-preprocessor-lib-dir options). followed by the full or relative path to the shared library. 2. 127 . Specify file. however. the main Snort packet processing thread will swap in the new configuration to use and will continue processing under the new configuration. (Same effect as --dynamic-preprocessor-lib or --dynamic-preprocessor-lib-dir options). specify directory. followed by the full or relative path to the shared library.conf or via command-line options. Specify file. specify directory.8. existing session data will continue to use the configuration under which they were created in order to continue with proper state for that session. specify directory. See chapter 4 for more information on dynamic preprocessor libraries. use the new configuration. Or.8 Dynamic Modules Dynamically loadable modules were introduced with Snort 2. When a swappable configuration object is ready for use. Or.2. See chapter 4 for more information on dynamic detection rules libraries. followed by the full or relative path to a directory of detection rules shared libraries. followed by the full or relative path to a directory of preprocessor shared libraries. followed by the full or relative path to the shared library. Note that for some preprocessors. They can be loaded via directives in snort. See chapter 4 for more information on dynamic engine libraries. dynamicengine [ file <shared library path> | directory <directory of shared libraries> ] dynamicdetection [ file <shared library path> | directory <directory of shared libraries> ] 2. All newly created sessions will. A separate thread will parse and create a swappable configuration object while the main Snort packet processing thread continues inspecting traffic under the current configuration. To disable this behavior and have Snort exit instead of restart.e. ! △NOTE is not currently supported in Windows. etc.conf (the file passed to the -c option on the command line). so you should test your new configuration An invalid before issuing a reload.g. If reload support ! △NOTEconfiguration will still result in Snort fatal erroring. $ snort -c snort. This functionality 2.9. Changes to the following options are not reloadable: attribute_table config alertfile config asn1 config chroot 128 .). e. i. Then. add --enable-reload to configure when compiling. • Any changes to output will require a restart.2 Reloading a configuration First modify your snort. send Snort a SIGHUP signal. This option is enabled by default and the behavior is for Snort to restart if any nonreloadable options are added/modified/removed.1 Enabling support To enable support for reloading a configuration. Non-reloadable configuration options of note: • Adding/modifying/removing shared objects via dynamicdetection.2. Reloadable configuration options of note: • Adding/modifying/removing text rules and variables are reloadable. to initiate a reload.9. startup memory allocations. There is also an ancillary option that determines how Snort should behave if any non-reloadable options are changed (see section 2. any new/modified/removed shared objects will require a restart.3 below).g. • Adding/modifying/removing preprocessor configurations are reloadable (except as noted below). e. $ kill -SIGHUP <snort pid> ! △NOTE is not enabled.9.3 Non-reloadable configuration options There are a number of option changes that are currently non-reloadable because they require changes to output.9. dynamicengine and dynamicpreprocessor are not reloadable. Snort will restart (as it always has) upon receipt of a SIGHUP.conf -T 2. . Each configuration can have different preprocessor settings and detection rules.10.Refers to the comma seperated list of vlandIds and vlanId ranges. they are included as valid in terms of configuring Snort. the default values of the option (not the default configuration values) take effect. Negative vland Ids and alphanumeric are not supported. Subnets can be CIDR blocks for IPV6 or IPv4. policy_id policy_mode policy_version The following config options are specific to each configuration.conf .1 Creating Multiple Configurations Default configuration for snort is specified using the existing -c option.e.Refers to the absolute or relative path to the snort. vlanIdList . Each unique snort configuration file will create a new configuration instance within snort. Even though 2.Refers to ip subnets. ipList . ! △NOTE Vlan Ids 0 and 4095 are reserved. 2.10. VLANs/Subnets not bound to any specific configuration will use the default configuration. ! △NOTE can not be used in the same line. their value applies to all other configurations. Valid vlanId is any number in 0-4095 range. config config config config config checksum_drop disable_decode_alerts disable_decode_drops disable_ipopt_alerts disable_ipopt_drops 130 . Configurations can be applied based on either Vlans or Vlan and Subnets Subnets not both. using the following configuration line: config binding: <path_to_snort. The format for ranges is two vlanId separated by> vlan <vlanIdList> config binding: <path_to_snort.conf for specific configuration. A maximum of 512 individual IPv4 or IPv6 addresses or CIDRs can be specified.2.conf> net <ipList> path to snort.. A default configuration binds multiple vlans or networks to non-default configurations. Spaces are allowed within ranges. If the rules in a configuration use variables. This is required as some mandatory preprocessor configuration options are processed only in default configuration. If a rule is not specified in a configuration then the rule will never raise an event for the configuration. The options control total memory usage for a preprocessor across all policies. 131 .Refers to a 16-bit unsigned value. Preprocessors Preprocessors configurations can be defined within each vlan or subnet specific configuration.. payload detection options. Events and Output An unique policy id can be assigned by user.. to each configuration using the following config line: config policy_id: <id> id . Parts of the rule header can be specified differently across configurations. ”portvar” and ”ipvar” are specific to configurations. and post-detection options. ! △NOTE If no policy id is specified. those variables must be defined in that configuration. non-payload detection options. Variables Variables defined using ”var”. A rule shares all parts of the rule options. are processed only in default policy. through specific limit on memory usage or number of instances. These options are ignored in non-default policies without raising an error. including the general options. snort assigns 0 (zero) value to the configuration. A preprocessor must be configured in default configuration before it can be configured in non-default configuration. vlan event types .1 Enabling Active Response This enables active responses (snort will send TCP RST or ICMP unreachable/port) when dropping a session. including: • a single mechanism for all responses • fully encoded reset or icmp unreachable packets • updated flexible response rule option • updated react rule option • added block and sblock rule actions These changes are outlined below.Refers to the absolute or relative filename. 132 . vlan_event_types (true unified logging) filename . In the end. then the innermost VLANID is used to find bound configuration. . snort will use unified2 event type 104 and 105 for IPv4 and IPv6 respectively. If VLANID is present.3 How Configuration is applied? Snort assigns every incoming packet to a unique configuration based on the following criteria. If the bound configuration is the default configuration.output alert_unified2: vlan_event_types (alert logging only) output unified2: filename <filename>. snort will use the first configuration in the order of definition. \ min_response_seconds <min_sec> <max_rsp> ::= (0.25) <min_sec> ::= (1.. then destination IP address is searched to the most specific subnet that is bound to a non-default configuration.9 includes a number of changes to better handle inline operation. this can lead to conflicts between configurations if source address is bound to one configuration and destination address is bound to another.11 Active Response Snort 2../configure --enable-active-response / -DACTIVE_RESPONSE preprocessor stream5_global: \ max_active_responses <max_rsp>. default configuration is used if no other matching configuration is found.300) Active responses will be encoded based on the triggering packet. For addressed based configuration binding. Each event 2. 2. that can be applied to the packet. ! △NOTElogged will have the vlanId from the packet if vlan headers are present otherwise 0 will be used. TTL will be set to the value captured at session pickup. 2.11. In this case.When this option is set.10. The packet is assigned non-default configuration if found otherwise the check is repeated using source IP address. /configure --enable-flexresp3 / -DENABLE_RESPOND -DENABLE_RESPONSE3 alert tcp any any -> any 80 (content:"a".11.. and will be deleted in a future release: .) * resp t includes all flexresp and flexresp2 options: <resp_t> ::= \ rst_snd | rst_rcv | rst_all | \ reset_source | reset_dest | reset_both | icmp_net | \ icmp_host | icmp_port | icmp_all 2. * Flexresp is deleted.3 Flexresp Flexresp and flexresp2 are replaced with flexresp3. In inline mode the reset is put straight into the stream in lieu of the triggering packet so strafing is not necessary. At most 1 ICMP unreachable is sent. sid:1. non-functional.20) 2. . ./configure --enable-flexresp / -DENABLE_RESPOND -DENABLE_RESPONSE config flexresp: attempts 1 * Flexresp2 is deleted..11. these features are no longer avaliable: ./configure --enable-react / -DENABLE_REACT The page to be sent can be read from a file: 133 . This is built with: .. if and only if attempts ¿ 0. This sequence ”strafing” is really only useful in passive mode. resp:<resp_t>. Each attempt (sent in rapid succession) has a different sequence number. TCP data (sent for react) is multiplied similarly.2.4 React react is a rule option keyword that enables sending an HTML page on a session and then resetting it.11./configure --enable-active-response config response: attempts <att> <att> ::= (1. You could craft a binary payload of arbitrary content.config react: <block. charset=utf-8\r\n" "\r\n" "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.5 Rule Actions The block and sblock actions have been introduced as synonyms for drop and sdrop to help avoid confusion between packets dropped due to load (eg lack of available buffers for incoming packets) and packets blocked due to Snort’s analysis.dtd\">\r\n" \ "<html xmlns=\". 2.html> or else the default is used: <default_page> ::= \ "HTTP/1.w3.1 403 Forbidden\r\n" "Connection: close\r\n" "Content-Type: text/html.".1//EN\"\r\n" \ " \". When the rule is configured.) <react_opts> ::= [msg] [. msg:"Unauthorized Access Prohibited!". The deprecated options are ignored. charset=UTF-8\" />\r\n" \ "<title>Access Denied</title>\r\n" \ "</head>\r\n" \ "<body>\r\n" \ "<h1>Access Denied</h1>\r\n" \ "<p>%s</p>\r\n" \ "</body>\r\n" \ "</html>\r\n". a resp option can be used instead. If no page should be sent. [proxy <port#>] The original version sent the web page to one end of the session only if the other end of the session was port 80 or the optional proxy port. The new version always sends the page to the client. the page is loaded and the selected message.w3.org/TR/xhtml11/DTD/xhtml11. including any HTTP headers. the response isn’t strictly limited to HTTP. which defaults to: <default_msg> ::= \ "You are attempting to access a forbidden site.org/1999/xhtml\" xml:lang=\"en\">\r\n" \ "<head>\r\n" \ "<meta http-equiv=\"Content-Type\" content=\"text/html. <dep_opts>] These options are deprecated: <dep_opts> ::= [block|warn]. In fact. This is an example rule: drop tcp any any -> any $HTTP_PORTS ( \ content: "d". 134 . sid:4.11. Note that the file must contain the entire response.<br />" \ "Consult your system administrator for details. \ react: <react_opts>. The rule header contains the rule’s action. When taken together.168. for that matter). This was required in versions prior to 1.1 Rule Actions The rule header contains the information that defines the who. At the same time. 3. The rule option section contains alert messages and information on which parts of the packet should be inspected to determine if the rule action should be taken. Figure 3. source and destination IP addresses and netmasks. as well as what to do in the event that a packet with all the attributes indicated in the rule should show up.1: Sample Snort Rule 135 . rules may span multiple lines by adding a backslash \ to the end of the line. In current versions of Snort.1. Most Snort rules are written in a single line. msg:"mountd access". Snort rules are divided into two logical sections.2. ! △NOTE Note that the rule options section is not specifically required by any rule. The text up to the first parenthesis is the rule header and the section enclosed in parenthesis contains the rule options.) Figure 3. and the source and destination ports information. the rule header and the rule options. the elements can be considered to form a logical AND statement.1 illustrates a sample Snort rule. There are a number of simple guidelines to remember when developing Snort rules that will help safeguard your sanity.8.Chapter 3 Writing Snort Rules 3. and what of a packet. All of the elements in that make up a rule must be true for the indicated rule action to be taken. The words before the colons in the rule options section are called option keywords. they are just used for the sake of making tighter definitions of packets to collect or alert on (or drop. protocol. lightweight rules description language that is flexible and quite powerful. the various rules in a Snort rules library file can be considered to form a large logical OR statement.0/24 111 \ (content:"|00 01 86 a5|".1 The Basics Snort uses a simple. where.2 Rules Headers 3. The first item in a rule is the rule alert tcp any any -> 192. remain idle until activated by an activate rule . You can also define your own rule types and associate one or more output plugins with them. In the future there may be more. OSPF. /16 a Class B network. and /32 indicates a specific machine address. the address/CIDR combination 192. then act as a log rule 6. alert. RIP. The rule action tells Snort what to do when it finds a packet that matches the rule criteria. For example. pass. Any rule that used this designation for. You can then use the rule types as actions in Snort rules. In addition. 8.ignore the packet 4. There are 5 available default actions in Snort. and IP. reject. drop .168.168. This example will create a type that will log to just tcpdump: ruletype suspicious { type log output log_tcpdump: suspicious. 3. the destination address would match on any address in that range.2. say. 136 . pass .generate an alert using the selected alert method. user=snort dbname=snort host=localhost } 3.action.3 IP Addresses The next portion of the rule header deals with the IP address and port information for a given rule. mysql.1 to 192. log. alert . ICMP. etc. The keyword any may be used to define any address.168.block and log the packet 7. and then log the packet 2. reject .1. 1. log it. UDP. dynamic . such as ARP. and dynamic. sdrop .0/24 would signify the block of addresses from 192. A CIDR block mask of /24 indicates a Class C network. Snort does not have a mechanism to provide host name lookup for the IP address fields in the config file. There are four protocols that Snort currently analyzes for suspicious behavior – TCP. The CIDR designations give us a nice short-hand way to designate large address spaces with just a few characters. you have additional options which include drop. activate.2. The CIDR block indicates the netmask that should be applied to the rule’s address and any incoming packets that are tested against the rule. activate .alert and then turn on another dynamic rule 5. IPX. The addresses are formed by a straight numeric IP address and a CIDR[3] block.1.block the packet.log } This example will create a rule type that will log to syslog and a MySQL database: ruletype redalert { type alert output alert_syslog: LOG_AUTH LOG_ALERT output database: log. GRE.log the packet 3.block the packet but do not log it. if you are running Snort in inline mode.255. IGRP.1. and then send a TCP reset if the protocol is TCP or an ICMP port unreachable message if the protocol is UDP. and sdrop.2 Protocols The next field in a rule is the protocol. log . \ msg:"external mountd access". An IP list is specified by enclosing a comma separated list of IP addresses and CIDR blocks within square brackets. etc.1. This operator tells Snort to match any IP address except the one indicated by the listed IP address. which would translate to none. and the destination address was set to match on the 192. if for some twisted reason you wanted to log everything except the X Windows ports. the IP list may not include spaces between the addresses.4 Port Numbers Port numbers may be specified in a number of ways. meaning literally any port.0/24] any -> \ [192.1.0/24 :6000 log tcp traffic from any port going to ports less than or equal to 6000 log tcp any :1024 -> 192.0/24.10. an easy modification to the initial example is to make it alert on any traffic that originates outside of the local net with the negation operator as shown in Figure 3.1.168. static port definitions.. how Zen. Static ports are indicated by a single port number. There is an operator that can be applied to IP addresses. the source IP address was set to match for any computer talking. For example. Port negation is indicated by using the negation operator !. the negation operator. For example. such as in Figure 3. This rule’s IP addresses indicate any tcp packet with a source IP address not originating from the internal network and a destination address on the internal network.1.1. or direction. or 80 for http.3 for an example of an IP list in action. See Figure 3.168.3: IP Address Lists In Figure 3.168.0/24 111 \ (content:"|00 01 86 a5|".2: Example IP Address Negation Rule alert tcp ![192. The negation operator is indicated with a !. 3.0/24 500: log tcp traffic from privileged ports less than or equal to 1024 going to ports greater than or equal to 500 Figure 3.alert tcp !192.). ranges. and by negation. Port ranges are indicated with the range operator :. For the time being.4.2.5 The Direction Operator The direction operator -> indicates the orientation.1.2.168.1. including any ports.0/24 1:1024 log udp traffic coming from any port and destination ports ranging from 1 to 1024 log tcp any any -> 192.168.) Figure 3.0/24] 111 (content:"|00 01 86 a5|".0 Class C network. 3. msg:"external mountd access".1.0/24.1. of the traffic that the rule applies to.168.10.1. The negation operator may be applied against any of the other rule types (except any.1. The IP address and port numbers on the left side of the direction operator is considered to be the traffic coming from the source log udp any any -> 192.1. you could do something like the rule in Figure 3.168. such as 111 for portmapper. The range operator may be applied in a number of ways to take on different meanings. Any ports are a wildcard value..168.4: Port Range Examples 137 .2.0/24 any -> 192.) Figure 3. 23 for telnet.1.5. You may also specify lists of IP addresses. Activate rules act just like alert rules.1.6. All Snort rule options are separated from each other using the semicolon (. This tells Snort to consider the address/port pairs in either the source or destination orientation. count.2.8. Also.168. There is also a bidirectional operator. the direction operator did not have proper error checking and many people used an invalid token. 3. You can now have one rule activate another when it’s action is performed for a set number of packets.0/24 any <> 192. but they have a different option field: activated by.) Figure 3. combining ease of use with power and flexibility.1.) dynamic tcp !$HOME_NET any -> $HOME_NET 143 (activated_by:1. activates:1. \ content:"|E8C0FFFFFF|/bin". activate tcp !$HOME_NET any -> $HOME_NET 143 (flags:PA.log tcp any any -> 192. In Snort versions before 1. \ msg:"IMAP buffer overflow!". there’s a very good possibility that useful data will be contained within the next 50 (or whatever) packets going to that same service port on the network. note that there is no <.5) and flowbits (3. Dynamic rules are just like log rules except are dynamically enabled when the activate rule id goes off.0/24 !6000:6010 Figure 3. Put ’em together and they look like Figure 3.does not exist is so that rules always read consistently. Rule option keywords are separated from their arguments with a colon (:) character.3 Rule Options Rule options form the heart of Snort’s intrusion detection engine. This is handy for recording/analyzing both sides of a conversation.10). which is indicated with a <> symbol. 3.operator.7: Activate/Dynamic Rule Example 138 .6. An example of the bidirectional operator being used to record both sides of a telnet session is shown in Figure 3. The reason the <.7.7. If the buffer overflow happened and was successful. so there’s value in collecting those packets for later analysis. and the address and port information on the right side of the operator is the destination host. These rules tell Snort to alert when it detects an IMAP buffer overflow and collect the next 50 packets headed for port 143 coming from outside $HOME NET headed to $HOME NET.1. such as telnet or POP3 sessions. Activate rules are just like alerts but also tell Snort to add a rule when a specific network event occurs.7. except they have a *required* option field: activates. This is very useful if you want to set Snort up to perform follow on recording when a specific rule goes off.5: Example of Port Negation log tcp !192.0/24 23 Figure 3. count:50.6: Snort rules using the Bidirectional Operator host.) character. Activate/dynamic rule pairs give Snort a powerful capability.168. Dynamic rules act just like log rules.168. Dynamic rules have a second required field as well.6 Activate/Dynamic Rules ! △NOTE Activate and Dynamic rules are being phased out in favor of a combination of tagging (3. nessus.com/bid/. content:"|fff4 fffd 06|".There are four major categories of rule options.4). This plugin is to be used by output plugins to provide a link to additional information about the alert produced.IDS411. reference:arachnids.cgi/ for a system that is indexing descriptions of alerts based on of the sid (See Section 3.4.php3?id= (currently down). <id>. Make sure to also take a look at General Rule Options 3.” 3. character).snort.mitre. It is a simple text string that utilizes the \ as an escape character to indicate a discrete character that might otherwise confuse Snort’s rules parser (such as the semi-colon .com/vil/content/v: Supported Systems URL Prefix.. Format msg:"<message text>".2 reference The reference keyword allows rules to include references to external attack identification systems. \ 139 .com/info/IDS msg The msg rule option tells the logging and alerting engine the message to print along with a packet dump or to an alert.) alert tcp any any -> any 21 (msg:"IDS287/ftp-wuftp260-venglin-linux".] Examples alert tcp any any -> any 7070 (msg:"IDS411/dos-realaudio". Table 3. \ flags:AP. <id>.cgi?name=. The plugin currently supports several specific systems as well as unique URLs. 3.whitehats.org/show/osvdb/ http:// System bugtraq cve nessus arachnids mcafee osvdb url Format reference:<id system>. [reference:<id system>. This option should be used with the rev keyword.map contains a mapping of alert messages to Snort rule IDs. Example This example is a rule with a generator id of 1000001.4. Example This example is a rule with the Snort Rule ID of 1000983.4. alert tcp any any -> any 80 (content:"BOB".) 3. \ reference:cve. it is recommended that values starting at 1. it will default to 1 and the rule will be part of the general rule subsystem.) 140 . This information allows output plugins to identify rules easily. To avoid potential conflict with gids defined in Snort (that for some reason aren’t noted it etc/generators). For example gid 1 is associated with the rules subsystem and various gids over 100 are designated for specific preprocessors and the decoder.map contains contains more information on preprocessor and decoder gids. Format sid:<snort rules id>.000. This information is useful when postprocessing alert to map an ID to an alert message. alert tcp any any -> any 80 (content:"BOB".IDS287.) 3. Format gid:<generator id>. rev:1. reference:bugtraq.5) • <100 Reserved for future use • 100-999. rev:1. sid:1.4. (See section 3.000 Used for local rules The file sid-msg.4 sid The sid keyword is used to uniquely identify Snort rules. See etc/generators in the source tree for the current generator ids in use.flags:AP. \ reference:arachnids.000 be used. For general rule writing. sid:1000983. (See section 3. gid:1000001.1387. Note that the gid keyword is optional and if it is not specified in a rule.4) The file etc/gen-msg. This option should be used with the sid keyword. it is not recommended that the gid keyword be used.000.4.3 gid The gid keyword (generator id) is used to identify what part of Snort generates the event when a particular rule fires. content:"|31c031db 31c9b046 cd80 31c031db|".CAN-2000-1574.999 Rules included with the Snort distribution • >=1. This option should be used with the sid keyword. sid:1000983.) 3. noc . (See section 3. \ content:"expn root".6 classtype The classtype keyword is used to categorize a rule as detecting an attack that is part of a more general type of attack class. Defining classifications for rules provides a way to better organize the event data Snort produces. along with Snort rule id’s. flags:A+. Example alert tcp any any -> any 25 (msg:"SMTP expn root".4.4) Format rev:<revision integer>.3.<default priority> These attack classifications are listed in Table 3. A priority of 1 (high) is the most severe and 4 (very low) is the least severe. Snort provides a default set of attack classes that are used by the default set of rules it provides.2.5 rev The rev keyword is used to uniquely identify revisions of Snort rules. Format classtype:<class name>. Table 3.4.) Attack classifications defined by Snort reside in the classification. Example This example is a rule with the Snort Rule Revision of 1. The file uses the following syntax: config classification: <class name>.4.config file. Revisions. rev:1.<class description>. classtype:attempted-recon. alert tcp any any -> any 80 (content:"BOB". They are currently ordered with 4 default priorities. allow signatures and descriptions to be refined and replaced with updated information. 4. flags:A+.). \ dsize:>128. Snort provides a default set of classifications in classification. Examples alert tcp any any -> any 80 (msg:"WEB-MISC phf attempt". priority:10 ). priority:10. Format priority:<priority integer>. 142 . Examples of each case are given below.conf by using the config classification option. 3. classtype:attempted-admin. \ content:"/cgi-bin/phf". Examples alert tcp any any -> any 80 (msg:"Shared Library Rule Example". See Section 2. with keys separated by commas.. with a key and a value.) alert tcp any any -> any 80 (msg:"Shared Library Rule Example".4. Keys other than those listed in the table are effectively ignored by Snort and can be free-form.3. Format The examples below show an stub rule from a shared library rule. \ metadata:engine shared. otherwise.) alert tcp any any -> any 80 (msg:"HTTP Service Rule Example". Multiple keys are separated by a comma. metadata:key1 value1. 143 . The reference keyword allows rules to include references to external attack identification systems. \ metadata:service http.9 General Rule Quick Reference Table 3. key2 value2. metadata:key1 value1. Certain metadata keys and values have meaning to Snort and are listed in Table 3.4: General rule option keywords Keyword msg reference gid Description The msg keyword tells the logging and alerting engine the message to print with the packet dump or alert.7 for details on the Host Attribute Table.3. soid 3|12345. the second a single metadata keyword. Table 3. The gid keyword (generator id) is used to identify what part of Snort generates the event when a particular rule fires. the rule is not applied (even if the ports specified in the rule match). The first uses multiple metadata keywords.4. while keys and values are separated by a space. metadata:soid 3|12345. \ metadata:engine shared.) 3.8 metadata The metadata tag allows a rule writer to embed additional information about the rule. When the value exactly matches the service ID as specified in the table. the rule is applied to that packet. . sid rev classtype priority metadata The sid keyword is used to uniquely identify Snort rules. Note that multiple content rules can be specified in one rule. \ " Format content:[!]"<content string>". the result will return a match. If there must be 50 bytes for a valid match. If the rule is preceded by a !.) ! △NOTE A ! modifier negates the results of the entire content search. For example. Whenever a content option pattern match is performed. it can contain mixed text and binary data. the Boyer-Moore pattern match function is called and the (rather computationally expensive) test is performed against the packet contents. This allows rules to be tailored for less false positives. 3. The classtype keyword is used to categorize a rule as detecting an attack that is part of a more general type of attack class. the test is successful and the remainder of the rule option tests are performed.5. and there are only 5 bytes of payload and there is no ”A” in those 5 bytes. the alert will be triggered on packets that do not contain this content. The option data for the content keyword is somewhat complex. typically in a key-value format. The metadata keyword allows a rule writer to embed additional information about the rule. 144 .1 content The content keyword is one of the more important features of Snort. Be aware that this test is case sensitive. The priority keyword assigns a severity level to rules.) alert tcp any any -> any 80 (content:!"GET". This is useful when writing rules that want to alert on packets that do not match a certain pattern ! △NOTE Also note that the following characters must be escaped inside a content rule: . Examples alert tcp any any -> any 139 (content:"|5c 00|P|00|I|00|P|00|E|00 5c|". The binary data is generally enclosed within the pipe (|) character and represented as bytecode. If data exactly matching the argument data string is contained anywhere within the packet’s payload. use isdataat as a pre-cursor to the content. if using content:!"A". modifiers included. It allows the user to set rules that search for specific content in the packet payload and trigger response based on that data. within:50. The example below shows use of mixed text and binary data in a Snort rule.5 Payload Detection Rule Options 3. Bytecode represents binary data as hexadecimal numbers and is a good shorthand method for describing complex binary data. The rev keyword is used to uniquely identify revisions of Snort rules. 9 http raw cookie 3.2 rawbytes 3.14 http raw uri 3.13 http uri 3. 145 .5.2 nocase The nocase keyword allows the rule writer to specify that the Snort should look for the specific pattern. content:"USER root".5.5. format rawbytes.5.5. ignoring any decoding that was done by preprocessors.1 option.5. nocase.19 3.5.5.5.5.8 http cookie 3.11 http raw header 3.5.6 within 3.16 http stat msg 3. Example alert tcp any any -> any 21 (msg:"FTP ROOT".5.5.12 http method 3.5.Changing content behavior The content keyword has a number of modifier keywords. The modifier keywords change how the previously specified content works.5.5.15 http stat code 3.4 offset 3.7 http client body 3.5: Content Modifiers Modifier Section nocase 3.5 distance 3.5.5. Format nocase. This acts as a modifier to the previous content 3.) 3.17 fast pattern 3. nocase modifies the previous content keyword in the rule.5.10 http header 3.3 rawbytes The rawbytes keyword allows rules to look at the raw packet data.3 depth 3. These modifier keywords are: Table 3.5. ignoring case. offset:4. A depth of 5 would tell Snort to only look for the specified pattern within the first 5 bytes of the payload.5. distance. The offset and depth keywords may be used together. there must be a content in the rule before offset is specified. offset. As this keyword is a modifier to the previous content keyword. offset modifies the previous ’content’ keyword in the rule. or within (to modify the same content).5. depth:20. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. Example The following example shows use of a combined content. You can not use offset with itself. instead of the decoded traffic provided by the Telnet decoder. distance. alert tcp any any -> any 80 (content:"cgi-bin/phf". Format depth:[<number>|<var_name>]. rawbytes. there must be a content in the rule before depth is specified. and depth search rule. 3. The minimum allowed value is 1.5 offset The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet.) 146 . or within (to modify the same content). As the depth keyword is a modifier to the previous content keyword. This keyword allows values from -65535 to 65535.Example This example tells the content pattern matcher to look at the raw traffic. Format offset:[<number>|<var_name>]. You can not use depth with itself. content:"|FF F1|". The maximum allowed value for this keyword is 65535. alert tcp any any -> any 21 (msg:"Telnet NOP".) 3. This keyword allows values greater than or equal to the pattern length being searched.4 depth The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern. An offset of 5 would tell Snort to start looking for the specified pattern after the first 5 bytes of the payload. The offset and depth keywords may be used together. depth modifies the previous ‘content’ keyword in the rule. within:10.5.) 147 .5. The distance and within keywords may be used together. offset.1 ). The maximum allowed value for this keyword is 65535. distance:1.5.3. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule.7 within The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword ( See Section 3. alert tcp any any -> any any (content:"ABC".5). Format distance:[<byte_count>|<var_name>]. content:"EFG". This keyword allows values greater than or equal to pattern length being searched.).6) rule option. This keyword allows values from -65535 to 65535. Example The rule below maps to a regular expression of /ABC.5. You can not use within with itself. It’s designed to be used in conjunction with the distance (Section 3. Examples This rule constrains the search of EFG to not go past 10 bytes past the ABC match. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. or depth (to modify the same content). content:"DEF". The distance and within keywords may be used together. alert tcp any any -> any any (content:"ABC". offset.5. or depth (to modify the same content). except it is relative to the end of the last pattern match instead of the beginning of the packet. Format within:[<byte_count>|<var_name>].{1}DEF/. You can not use distance with itself. 6). content:"EFG". 3.) ! △NOTE The http cookie modifier is not allowed to be used with the rawbytes or fast pattern modifiers for the same content. If enable cookie is not specified. the cookie still ends up in HTTP header.) ! △NOTE The http client body modifier is not allowed to be used with the rawbytes modifier for the same content.2. Examples This rule constrains the search for the pattern ”EFG” to the extracted Cookie Header field of a HTTP client request.8 http client body The http client body keyword is a content modifier that restricts the search to the body of an HTTP client request.2. alert tcp any any -> any 80 (content:"ABC". per the configuration of HttpInspect (see 2. http_cookie. using http cookie is the same as using http header. As this keyword is a modifier to the previous content keyword. 148 . Examples This rule constrains the search for the pattern ”EFG” to the raw body of an HTTP client request. The cookie buffer also includes the header name (Cookie for HTTP requests or Set-Cookie for HTTP responses). The amount of data that is inspected with this option depends on the post depth config option of HttpInspect. content:"EFG".5.6). http_client_body. As this keyword is a modifier to the previous content keyword. This keyword is dependent on the enable cookie config option. The Cookie Header field will be extracted only when this option is configured.3. alert tcp any any -> any 80 (content:"ABC".5. The extracted Cookie Header field may be NORMALIZED. Format http_client_body. there must be a content in the rule before http cookie is specified. there must be a content in the rule before ’http client body’ is specified. Format http_cookie. Pattern matches with this keyword wont work when post depth is set to . When enable cookie is not specified. 5.) ! △NOTE The http header modifier is not allowed to be used with the rawbytes modifier for the same content. there must be a content in the rule before http header is specified. The Cookie Header field will be extracted only when this option is configured. As this keyword is a modifier to the previous content keyword.2. content:"EFG".6). Format http_raw_cookie.6). http_raw_cookie. Format http_header. http cookie or fast pattern modifiers for the same content.2. Examples This rule constrains the search for the pattern ”EFG” to the extracted Unnormalized Cookie Header field of a HTTP client request. http_header. there must be a content in the rule before http raw cookie is specified.2. 3. alert tcp any any -> any 80 (content:"ABC".5. The extracted Header fields may be NORMALIZED.) ! △NOTE The http raw cookie modifier is not allowed to be used with the rawbytes. 149 . per the configuration of HttpInspect (see 2.3.. As this keyword is a modifier to the previous content keyword.6). alert tcp any any -> any 80 (content:"ABC". ) ! △NOTE header modifier is not allowed to be used with the rawbytes. Examples This rule constrains the search for the pattern ”GET” to the extracted Method from a HTTP client request. http_method.2. As this keyword is a modifier to the previous content keyword. http header or fast pattern The http raw modifiers for the same content. Format http_method.5. 3.3. there must be a content in the rule before http raw header . http_raw_header. Format http_raw_header. alert tcp any any -> any 80 (content:"ABC". 150 . there must be a content in the rule before http method is specified.20). content:"GET".6). Examples This rule constrains the search for the pattern ”EFG” to the extracted Header fields of a HTTP client request or a HTTP server response. The http method 3. As this keyword is a modifier to the previous content keyword.5. alert tcp any any -> any 80 (content:"ABC". content:"EFG".5. Using a content rule option followed by a http uri modifier is the same as using a uricontent by itself (see: 3. As this keyword is a modifier to the previous content keyword.) ! △NOTE modifier is not allowed to be used with the rawbytes modifier for the same content.5. there must be a content in the rule before http uri is specified. http_uri. http_raw_uri.2.15 http raw uri The http raw uri keyword is a content modifier that restricts the search to the UNNORMALIZED request URI field . Examples This rule constrains the search for the pattern ”EFG” to the NORMALIZED URI. alert tcp any any -> any 80 (content:"ABC". Format http_stat_code. http uri or fast pattern modifiers for the same content. 151 .. there must be a content in the rule before http raw uri is specified.) ! △NOTE The http uri modifier is not allowed to be used with the rawbytes modifier for the same content. Examples This rule constrains the search for the pattern ”EFG” to the UNNORMALIZED URI. As this keyword is a modifier to the previous content keyword. Format http_raw_uri. alert tcp any any -> any 80 (content:"ABC". 3. 3.5.Format http_uri.5. content:"EFG".) ! △NOTE The http raw uri modifier is not allowed to be used with the rawbytes.6). As this keyword is a modifier to the previous content keyword. content:"EFG".16 http stat code The http stat code keyword is a content modifier that restricts the search to the extracted Status code field from a HTTP server response. ’non ascii’. There are eleven keywords associated with http encode. content:"Not Found". This rule option will not be able to detect encodings if the specified HTTP fields are not NORMALIZED. The keywords ’utf8’. there must be a content in the rule before http stat msg is specified. Examples This rule constrains the search for the pattern ”Not Found” to the extracted Status Message field of a HTTP server response.18 http encode The http encode keyword will enable alerting based on encoding type present in a HTTP client request or a HTTP server response (per the configuration of HttpInspect 2.2. The keyword ’cookie’ is dependent on config options ’enable cookie’ and ’normalize cookies’ (see 2. 152 . ’base36’. The config option ’normalize headers’ needs to be turned on for rules to work with the keyword ’header’. As this keyword is a modifier to the previous content keyword. http_stat_msg.6). ’header’ and ’cookie’ determine the HTTP fields used to search for a particular encoding type. The Status Message field will be extracted only if the extended reponse inspection is configured for the HttpInspect (see 2. These keywords can be combined using a OR operation. 3. ’ascii’. ’uencode’.2.6). alert tcp any any -> any 80 (content:"ABC". Format http_stat_msg. The keywords ’uri’.) ! △NOTE The http stat code modifier is not allowed to be used with the rawbytes or fast pattern modifiers for the same content. 3. Negation is allowed on these keywords. alert tcp any any -> any 80 (content:"ABC".5.2.Examples This rule constrains the search for the pattern ”200” to the extracted Status Code field of a HTTP server response. content:"200".6).17 http stat msg The http stat msg keyword is a content modifier that restricts the search to the extracted Status Message field from a HTTP server response.5.) ! △NOTE The http stat msg modifier is not allowed to be used with the rawbytes or fast pattern modifiers for the same content. ’double encode’. ’iis encode’ and ’bare byte’ determine the encoding type which would trigger the alert. http_stat_code. there must be a content rule option in the rule before fast pattern is specified.utf8|uencode. Since the default behavior of fast pattern determination is to use the longest content in the rule. As this keyword is a modifier to the previous content keyword. 153 . it is useful if a shorter content is more ”unique” than the longer content. http stat msg. Note. it can significantly reduce the number of rules that need to be evaluated and thus increases performance.19 fast pattern The fast pattern keyword is a content modifier that sets the content within a rule to be used with the fast pattern matcher. The fast pattern option may be specified only once per rule. that it is okay to use the fast pattern modifier if another http content modifier not mentioned above is used in combination with one of the above to modify the same content. Though this may seem to be overhead. http_encode:uri. ! △NOTE The fast pattern modifier cannot be used with the following http content modifiers: http cookie. [!][<utf8|double_encode|non_ascii|base36|uencode|bare_byte|ascii|iis_e Examples alert tcp any any -> any any (msg:"UTF8/UEncode Encoding present". 3. The better the content used for the fast pattern matcher. http_encode:uri.) alert tcp any any -> any any (msg:"No UTF8". however. http stat code. http raw cookie. The OR and negation operations work only on the encoding type field and not on http buffer type field.5.!utf8. [!]<encoding type> http_encode:[uri|header|cookie].) ! △NOTE Negation(!) and OR(|) operations cannot be used in conjunction with each other for the http encode keyword. meaning the shorter content is less likely to be found in a packet than the longer content. http raw uri. http raw header. the less likely the rule will needlessly be evaluated.>. The fast pattern matcher is used to select only those rules that have a chance of matching by using a content in the rule for selection and only evaluating that rule if the content is found in the payload. As such if you are writing rules that include things that are normalized. When used alone.5. the URI: 154 . This is useful if the pattern is very long and only a portion of the pattern is necessary to satisfy ”uniqueness” thus reducing the memory required to store the entire pattern in the fast pattern matcher. The reason is that the things you are looking for are normalized out of the URI buffer. ! △NOTE arguments only and <offset>. if a known content must be located in the payload independent of location in the payload. The optional argument <offset>.! △NOTE modifier can be used with negated contents only if those contents are not modified with The fast pattern offset. Format The fast pattern option can be used alone or optionally take arguments. fast_pattern:only. content:"IJKLMNO". distance or within. (2) negated contents cannot be used and (3) contents cannot have any positional modifiers such as offset. nocase.) 3. alert tcp any any -> any 80 (content:"ABCDEFGH". alert tcp any any -> any 80 (content:"ABCDEFGH". as it saves the time necessary to evaluate the rule option. fast_pattern:only. fast_pattern:<offset>. This is equivalent to using the http uri modifier to a content keyword. fast_pattern.20 uricontent The uricontent keyword in the Snort rule language searches the NORMALIZED request URI field. but still evaluate the content rule option as ”IJKLMNO”. these rules will not alert.5. The optional argument only can be used to specify that the content should only be used for the fast pattern matcher and should not be evaluated as a rule option. fast_pattern. fast_pattern:1. The optional Examples This rule causes the pattern ”IJKLMNO” to be used with the fast pattern matcher. Note that (1) the modified content must be case insensitive since patterns are inserted into the pattern matcher in a case insensitive manner. even though it is shorter than the earlier pattern ”ABCDEFGH”.<length> are mutually exclusive. for example.) This rule says to use the content ”IJKLMNO” for the fast pattern matcher and that the content should only be used for the fast pattern matcher and not evaluated as a content rule option. such as %2f or directory traversals. content:"IJKLMNO". the meaning is simply to use the specified content as the fast pattern content for the rule. depth. distance or within. This is useful.) This rule says to use ”JKLMN” as the fast pattern content. For example. alert tcp any any -> any 80 (content:"ABCDEFGH".<length> can be used to specify that only a portion of the content should be used for the fast pattern matcher. content:"IJKLMNO". depth.<length>. .. 3.exe?/c+ver Another example.exe?/c+ver will get normalized into: /winnt/system32/cmd.5. urilen:[<|>]<number>./scripts/. Format uricontent:[!]"<content string>".5.2. the maximum length.2 depth 3.6: Uricontent Modifiers Modifier Section nocase 3. write the content that you want to find in the context that the URI will be normalized. if Snort normalizes directory traversals./winnt/system32/cmd. You can write rules that look for the non-normalized content by using the content option. ! △NOTE cannot be modified by a rawbytes modifier or any of the other HTTP modifiers.%252fp%68f? will get normalized into: /cgi-bin/phf? When writing a uricontent rule.1) uricontent can be used with several of the modifiers available to the content keyword. (See Section 3. do not include directory traversals.5.5.19 This option works in conjunction with the HTTP Inspect preprocessor specified in Section 2.6.5. or range of URI lengths to match. For example.7 fast pattern 3. These include: Table 3.4 offset 3.5.5.%c0%af. the minimum length.5 distance 3. the URI: /cgi-bin/aaaaaaaaaaaaaaaaaaaaaaaaaa/.6 within 3. If you wish to uricontent search the UNNORMALIZED request URI field.5. Format urilen:min<>max.21 urilen The urilen keyword in the Snort rule language specifies the exact length. use the http raw uri modifier with a content option. The following example will match URIs that are 5 bytes long: 155 .. ignoring any decoding that was done by the preprocessors. When the rawbytes modifier is specified with isdataat.org Format pcre:[!]"(/<regex>/|m<delim><regex><delim>)[ismxAEGRUBPHMCOIDKYS]". The following example will match URIs that are greater than 5 bytes and less than 10 bytes: urilen:5<>10. 156 .relative. isdataat:50. would alert if there were not 10 bytes after ”foo” before the payload ended. A ! modifier negates the results of the isdataat test. For more detail on what can be done via a pcre regular expression. within:50. M.7. optionally looking for data relative to the end of the previous content match. I.9 for descriptions of each modifier. Format isdataat:[!]<int>[. ! △NOTE R (relative) and B (rawbytes) are not allowed with any of the HTTP modifiers such as U. The modifiers H. This option works in conjunction with the HTTP Inspect preprocessor specified in Section 2. The post-re modifiers set compile time flags for the regular expression. \ content:!"|0a|".urilen:5.8. 3. isdataat:!10. C. P. Example alert tcp any any -> any 111 (content:"PASS". 3. relative|rawbytes].23 pcre The pcre keyword allows rules to be written using perl compatible regular expressions. D. then verifies there is at least 50 bytes after the end of the string PASS. check out the PCRE web site. S and Y.5.2. The following example will match URIs that are shorter than 5 bytes: urilen:<5. and 3. K.22 isdataat Verify that the payload has data at a specified location. For example.pcre. 3.) This rule looks for the string PASS exists in the packet.relative.5. the rule with modifiers content:"foo". This modifier will work with the relative modifier as long as the previous content match was in the raw packet data. it looks at the raw packet data. then verifies that there is not a newline character within 50 bytes of the end of the PASS string. It will alert if a certain amount of data is not present within the payload.6. See tables 3. in that it simply sets a reference for other relative rule options ( byte test. but become greedy if followed by ”?”. ˆ and $ match immediately following or immediately before any newline in the buffer. the string is treated as one big line of characters.8: PCRE compatible modifiers for pcre the pattern must match only at the start of the buffer (same as ˆ ) Set $ to match only at the end of the subject string.7: Perl compatible modifiers for pcre case insensitive include newlines in the dot metacharacter By default. 157 . 3. See 2. Format file_data. PCRE when used without a uricontent only evaluates the first URI. When m is set. file_data:mime. This file data can point to either a file or a block of data.5. whitespace data characters in the pattern are ignored except when escaped or inside a character class A E G Table 3.6 for more details.i s m x Table 3. Example This example performs a case-insensitive search for the string BLAH in the payload.7 for more details. When used with argument mime it places the cursor at the beginning of the base64 decoded MIME attachment or base64 decoded MIME body. See 2. ˆ and $ match at the beginning and ending of the string. Inverts the ”greediness” of the quantifiers so that they are not greedy by default. This option matches if there is HTTP response body or SMTP body or SMTP MIME base64 decoded data.) ! △NOTE Snort’s handling of multiple URIs with PCRE does not work as expected. byte jump. For this option to work with HTTP response. Without E. alert ip any any -> any any (pcre:"/BLAH/i". This is dependent on the SMTP config option enable mime decoding. pcre) to use. In order to use pcre to inspect all URIs.2. you must use either a content or a uricontent.24 file data This option is used to place the cursor (used to walk the packet payload in rules processing) at the beginning of either the entity body of a HTTP response or the SMTP body data. This option will operate similarly to the dce stub data option added with DCE/RPC2. certain HTTP Inspect options such as extended response inspection and inspect gzip (for decompressed gzip data) needs to be turned on. as well as the very start and very end of the buffer. $ also matches immediately before the final character if it is a newline (but not before any other newlines).2. Match unnormalized HTTP request body (Similar to http client body) Match normalized HTTP request or HTTP response header (Similar to http header).\ file_data:mime. Example alert tcp any 80 -> any any(msg:"foo at the start of http response body".) Match the decoded URI buffers (Similar to uricontent and http uri). This modifier is not allowed with the unnormalized HTTP request uri buffer modifier(I) for the same content.) 3. content:"foo". \ file_data. within:10.) alert tcp any any -> any any(msg:"MIME BASE64 Encoded Data". content:"foo". Match unnormalized HTTP request or HTTP response cookie (Similar to http raw cookie).1.R U I P H D M C K S Y B O Table 3. ][offset <offset>[.25 base64 decode This option is used to decode the base64 encoded data. ! △NOTE Multiple base64 encoded attachments in one packet are pipelined. It completely ignores the limits while evaluating the pcre pattern specified. (Similar to distance:0.. nocase. This option unfolds the data before decoding it. Format base64_decode[:[bytes <bytes_to_decode>][. Match normalized HTTP request method (Similar to http method) Match normalized HTTP request or HTTP response cookie (Similar to http cookie). This option is particularly useful in case of HTTP headers such as HTTP authorization headers.3). within:3..5. relative]]]. This modifier is not allowed with the normalized HTTP request or HTTP response cookie modifier(C) for the same content. This modifier is not allowed with the unnormalized HTTP request or HTTP response header modifier(D) for the same content. 158 . Match unnormalized HTTP request or HTTP response header (Similar to http raw header). Match the unnormalized HTTP request uri buffer (Similar to http raw uri). ! △NOTE This option can be extended to protocols with folding similar to HTTP. ! △NOTE Any non-relative rule options in the rule will reset the cursor(doe ptr) from base64 decode buffer. This option needs to be used in conjunction with base64 data for any other relative rule options to work on base64 decoded buffer. Determines the offset relative to the doe ptr when the option relative is specified or relative to the start of the packet payload to begin inspection of base64 encoded data. byte jump. content:"NTLMSSP". pcre) to use. \ base64_decode:bytes 12. Format base64_data. This argument takes positive and non-zero values only.) alert tcp $EXTERNAL_NET any -> $HOME_NET any \ (msg:"Authorization NTLM". This option matches if there is base64 decoded buffer. Fast pattern content matches are not allowed with this buffer. \ content:"NTLMSSP". Specifies the inspection for base64 encoded data is relative to the doe ptr.Option bytes offset relative Description Number of base64 encoded bytes to decode. If folding is not present the search for base64 encoded data will end when we see a carriage return or line feed or both without a following space or tab. base64_data. content:"Authorization: NTLM". relative. This option will operate similarly to the file data option. 159 . base64_decode:relative. \ within:20. offset 6. The rule option base64 decode needs to be specified before the base64 data option. Examples alert tcp $EXTERNAL_NET any -> $HOME_NET any \ (msg:"Base64 Encoded Data". \ content:"foo bar".5. http_header. This argument takes positive and non-zero values only. \ content:"Authorization:". base64_data.) 3. base64_decode.) alert tcp any any -> any any (msg:"Authorization NTLM". This option does not take any arguments. within:20. base64_data. in that it simply sets a reference for other relative rule options ( byte test. within:8. The above arguments to base64 decode are optional. 2. then the operator is set to =. If ! is specified without an operator. please read Section 3.greater than • = . Capable of testing binary values or converting representative byte strings to their binary equivalent and testing them. <number type>][. [!]<operator>. If the & operator is used. See section 2. Format byte_test:<bytes to convert>.4294967295 -65535 to 65535 Description Number of bytes to pick up from the packet.13 for a description and examples (2.5.Converted string data is represented in octal dce Let the DCE/RPC 2 preprocessor determine the byte order of the value to be converted. dce]. For a more detailed explanation.27 byte test Test a byte field against a specific value (with operator). \ content:"NTLMSSP".bitwise AND • ˆ .) 3. \ base64_decode:bytes 12. bytes operator value offset = = = = 1 .9.less than • > . 2 and 4.10 ’<’ | ’=’ | ’>’ | ’&’ | ’ˆ’ 0 .5.equal • & . ! △NOTE Snort uses the C operators for each of these operators. within:8.2 .13 for quick reference). string. Operation to perform to test the value: • < .Converted string data is represented in hexadecimal • dec . If used with dce allowed values are 1. http_header.} Examples alert udp $EXTERNAL_NET any -> $HOME_NET any \ 160 . The allowed values are 1 to 10 when used without dce. relative][. base64_data.Example alert tcp any any -> any any (msg:"Authorization NTLM". relative. then it would be the same as using if (data & value) { do something(). <offset> \ [. Option bytes to convert operator Any of the operators can also include ! to check if the operator is not true. \ content:"Authorization:".Process data as big endian (default) • little . <value>. <endian>][.Process data as little endian string number type Data is stored in string format in packet Type of number being read: • hex . offset 6. distance:4.) alert udp any any -> any 1238 \ (byte_test:8. dec. 20. 12.9. dec. Format byte_jump:<bytes_to_convert>. 0. =. \ msg:"got 1234567890!".5. dec. relative][. By having an option that reads the length of a portion of data. relative. <number_type>]\ [. 0. please read Section 3. bytes offset mult_value post_offset = = = = 1 .) alert udp any any -> any 1236 \ (byte_test:2. string. >. post_offset <adjustment value>][. string. 20.5.) alert udp any any -> any 1234 \ (byte_test:4. from_beginning][. For a more detailed explanation. dce]. string. align][.10 -65535 to 65535 0 . <offset> \ [. 0. 1234. \ msg:"got 1234!".) alert udp any any -> any 1235 \ (byte_test:3. \ byte_test:4. 0xdeadbeef. hex. =. The byte jump option does this by reading some number of bytes. rules can be written that skip over specific portions of length-encoded protocols and perform detection in very specific locations. 0. =.) 3. =. dec.) alert udp any any -> any 1237 \ (byte_test:10. \ msg:"got 123!". =. \ msg:"got DEADBEEF!". <endian>][. multiplier <mult_value>][.65535 -65535 to 65535 161 . move that many bytes forward and set a pointer for later detection. string. within:4. convert them to their numeric representation. then skips that far forward in the packet. 1000. 0.) alert tcp $EXTERNAL_NET any -> $HOME_NET any \ (msg:"AMD procedure 7 plog overflow". string. 1000. This pointer is known as the detect offset end pointer. \ content:"|00 04 93 F3|".28 byte jump The byte jump keyword allows rules to be written for length encoded protocols trivially. \ content:"|00 00 00 07|". within:4. distance:4. \ content:"|00 00 00 07|". string. 123. 1234567890. \ byte_test:4. relative. >. \ content:"|00 04 93 F3|". \ msg:"got 12!".(msg:"AMD procedure 7 plog overflow". or doe ptr. Data is stored in string format in packet Converted string data is represented in hexadecimal Converted string data is represented in decimal Converted string data is represented in octal Round the number of converted bytes up to the next <value>-byte boundary. instead of using hard-coded values. <endian>]\ [. The DCE/RPC 2 preprocessor must be enabled for this option to work. Process data as big endian (default) Process data as little endian Use the DCE/RPC 2 preprocessor to determine the byte-ordering.13 for a description and examples (2. They can be re-used in the same rule any number of times. align. distance:4.Option bytes to convert offset relative multiplier <value> big little string hex dec oct align from beginning post offset <value> dce Description Number of bytes to pick up from the packet. This will be used to reference the variable in other rule options. within:4. Format byte_extract:<bytes_to_extract>.29 byte extract The byte extract keyword is another useful option for writing rules against length-encoded protocols.2. relative][. Other options which use byte extract variables A byte extract rule option detects nothing by itself. <value> may be 2 or 4.2. <number_type>][. Example alert udp any any -> any 32770:34000 (content:"|00 01 86 B8|". See section 2. It reads in some number of bytes from the packet payload and saves it to a variable. relative. \ content:"|00 00 00 01|". relative. If used with dce allowed values are 1. >. align <align value>][. \ msg:"statd format string buffer overflow". 20.) 3. Skip forward or backwards (positive of negative value) by <value> number of bytes after the other jump options have been applied. These variables can be referenced later in the rule. string. \ byte_jump:4. The allowed values are 1 to 10 when used without dce. \ byte_test:4.5. Here is a list of places where byte extract variables can be used: 162 . 900. Let the DCE/RPC 2 preprocessor determine the byte order of the value to be converted. 12. <name> \ [. ! △NOTE Only two byte extract variables may be created per rule. dce] Option bytes to convert offset name relative multiplier <value> big little dce string hex dec oct align <value> Description Number of bytes to pick up from the packet Number of bytes into the payload to start processing Name of the variable. Number of bytes into the payload to start processing Use an offset relative to last pattern match Multiply the number of calculated bytes by <value> and skip forward that number of bytes. 2 and 4. Use an offset relative to last pattern match Multiply the bytes read from the packet by <value> and save that number into the variable. Its use is in extracting packet data for use in other rule options. <offset>. multiplier <multiplier value>][.13 for quick reference).. Compares ASN. Format ftpbounce. \ byte_extract:1.1 sequence right after the content “foo”. The ASN. offset:str_offset. you would say “absolute offset 0”. 0. If an option has an argument. relative_offset 0’. depth:str_depth. relative offset has one argument. content:"PORT". absolute offset has one argument.) 3. absolute_offset <value>|relative_offset <value>].established.1 type is greater than 500. \ msg:"Bad Stuff detected within field". Example alert tcp $EXTERNAL_NET any -> $HOME_NET 21 (msg:"FTP PORT bounce attempt".5. and looks for various malicious encodings. the offset number. This means that if an ASN.31 asn1 The ASN. \ flow:to_server. str_depth. sid:3441. • Use these values to constrain a pattern match to a smaller area. So if any of the arguments evaluate as true.5. you would specify ’content:"foo". \ content:"bad stuff". then this keyword is evaluated as true. but it is unknown at this time which services may be exploitable.1 type lengths with the supplied argument. The syntax looks like. value offset offset Examples This example uses two variables to: • Read the offset of a string from a byte at offset 0. • Read the depth of a string from a byte at offset 1.30 ftpbounce The ftpbounce keyword detects FTP bounce attacks.1 detection plugin decodes a packet or a portion of a packet. Option bitstring overflow double overflow oversize length <value> Description Detects invalid bitstring encodings that are known to be remotely exploitable. distance. 1. the offset value. Detects a double ASCII encoding that is larger than a standard buffer. This keyword must have one argument which specifies the length to compare against. For example. the option and the argument are separated by a space or a comma.\ classtype:misc-attack. Offset values may be positive or negative. ftpbounce. depth. This is the relative offset from the last content match or byte test/jump.) 3. double_overflow][. pcre:"/ˆPORT/smi". alert tcp any any -> any any (byte_extract:1. the whole option evaluates as true. absolute offset <value> relative offset <value> 163 . The preferred usage is to use a space between option and argument. str_offset. Offset may be positive or negative. asn1:bitstring_overflow. Format asn1:[bitstring_overflow][. rev:1. So if you wanted to start decoding and ASN. This is the absolute offset from the beginning of the packet.1 options provide programmatic detection capabilities as well as some more dynamic type detection. “oversize length 500”. Multiple options can be used in an ’asn1’ option and the implied logic is boolean OR. oversize_length <value>][. This is known to be an exploitable function in Microsoft. if you wanted to decode snmp packets. within offset.Rule Option content/uricontent byte test byte jump isdataat Arguments that Take Variables offset. nocase. 13 for a description and examples of using this rule option.34 dce opnum See the DCE/RPC 2 Preprocessor section 2. 3.) 3.5.2. e.13 for a description and examples of using this rule option.2.10: Payload detection rule option keywords Keyword content Description The content keyword allows the user to set rules that search for specific content in the packet payload and trigger response based on that data.13 for a description and examples of using this rule option. \ asn1:bitstring_overflow.5.38 Payload Detection Quick Reference Table 3.5. 3. cvs:invalid-entry.Examples alert udp any any -> any 161 (msg:"Oversize SNMP Length". \ asn1:oversize_length 10000.11. SSH (usually port 22).) alert tcp any any -> any 80 (msg:"ASN1 Relative Foo".5.g.33 dce iface See the DCE/RPC 2 Preprocessor section 2.37 ssl state See the SSL/TLS Preprocessor section 2.) 3.2. 3. absolute_offset 0. ! △NOTE This plugin cannot do detection over encrypted sessions.11 for a description and examples of using this rule option.2.32 cvs The CVS detection plugin aids in the detection of: Bugtraq-10384. Option invalid-entry Description Looks for an invalid Entry string. 3. \ flow:to_server.5.35 dce stub data See the DCE/RPC 2 Preprocessor section 2. 164 . Examples alert tcp any any -> any 2401 (msg:"CVS Invalid-entry".5. CVE-2004-0396: ”Malformed Entry Modified and Unchanged flag insertion”.2.11 for a description and examples of using this rule option. relative_offset 0. content:"foo". Format cvs:<option>.36 ssl version See the SSL/TLS Preprocessor section 2. 3.established.5. Default CVS server ports are 2401 and 514 and are included in the default ports for stream reassembly. which is a way of causing a heap overflow (see CVE-2004-0396) and bad pointer derefenece in versions of CVS 1.15 and before. 13.6. The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet. See the DCE/RPC 2 Preprocessor section 2. The isdataat keyword verifies that the payload has data at a specified location.2. =. and looks for various malicious encodings. The uricontent keyword in the Snort rule language searches the normalized request URI field. Example This example checks for a time-to-live value that is less than 3. you could use the fragbits keyword and look for the More fragments option in conjunction with a fragoffset of 0.13. The distance keyword allows the rule writer to specify how far into a packet Snort should ignore before starting to search for the specified pattern relative to the end of the previous pattern match. The pcre keyword allows rules to be written using perl compatible regular expressions. >=]<number>. >.13. This keyword takes numbers from 0 to 255. The asn1 detection plugin decodes a packet or a portion of a packet. 165 . To catch all the first fragments of an IP session.6.6 Non-Payload Detection Rule Options 3. This option keyword was intended for use in the detection of traceroute attempts. See the DCE/RPC 2 Preprocessor section 2. Example alert ip any any -> any any \ (msg:"First Fragment".) 3. fragbits:M. ttl:<3. See the DCE/RPC 2 Preprocessor section 2. The ftpbounce keyword detects FTP bounce attacks. Format ttl:[<.1 fragoffset The fragoffset keyword allows one to compare the IP fragment offset field against a decimal value. Format fragoffset:[!|<|>]<number>. The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword.2 ttl The ttl keyword is used to check the IP time-to-live value. The cvs keyword detects invalid entry strings. The byte jump keyword allows rules to read the length of a portion of data. This example checks for a time-to-live value that between 3 and 5. ignoring any decoding that was done by preprocessors. then skip that far forward in the packet. <=. 3. The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern.2. The byte test keyword tests a byte field against a specific value (with operator). ttl:[<number>]-[<number>]. fragoffset:0.2. 4 id The id keyword is used to check the IP ID field for a specific value. ttl:5-. 3. Some tools (exploits.6. the value 31337 is very popular with some hackers. Example This example looks for a tos value that is not 4 tos:!4. This example checks for a time-to-live value that between 5 and 255. ttl:=5.6. This example checks for a time-to-live value that between 0 and 5. The following examples are NOT allowed by ttl keyword: ttl:=>5. for example.3 tos The tos keyword is used to check the IP TOS field for a specific value. ttl:5-3. 166 . id:31337. ttl:-5. scanners and other odd programs) set this field specifically for various purposes.ttl:3-5. ttl:>=5. Example This example looks for the IP ID of 31337. 3. Format tos:[!]<number>. Few other examples are as follows: ttl:<=5. Format id:<number>. ttl:= seq:0. Format seq:<number>. 3.6. <GROUP_NAME>].11 seq The seq keyword is used to check for a specific TCP sequence number. window:55808.12 ack The ack keyword is used to check for a specific TCP acknowledge number.) alert tcp any any -> any 143 (msg:"IMAP LIST". Format window:[!]<number>. 3.logged_in.13 window The window keyword is used to check for a specific TCP window size. ack:0. Example This example looks for a TCP window size of 55808.6. <STATE_NAME>][. 170 . flowbits:noalert. Format ack:<number>. flowbits:set.logged_in. content:"OK LOGIN". Examples alert tcp any 143 -> any any (msg:"IMAP login".Format flowbits:[set|unset|toggle|isset|isnotset|noalert|reset][. flowbits:isset. Example This example looks for a TCP acknowledge number of 0.6.) 3. Example This example looks for a TCP sequence number of 0. content:"LIST". 171 . Format icode:min<>max.16 icmp id The icmp id keyword is used to check for a specific ICMP ID value. This is useful because some covert channel programs use static ICMP fields when they communicate. 3. icmp_id:0. Example This example looks for an ICMP ID of 0.6. Format itype:min<>max. This particular plugin was developed to detect the stacheldraht DDoS agent. icode:[<|>]<number>. Format icmp_id:<number>. icode:>30. Example This example looks for an ICMP code greater than 30.17 icmp seq The icmp seq keyword is used to check for a specific ICMP sequence value.3. Example This example looks for an ICMP type greater than 30.15 icode The icode keyword is used to check for a specific ICMP code value. 3. itype:[<|>]<number>. Format icmp_seq:<number>.6. This particular plugin was developed to detect the stacheldraht DDoS agent.14 itype The itype keyword is used to check for a specific ICMP type value.6. 3.6. This is useful because some covert channel programs use static ICMP fields when they communicate. itype:>30. and procedure numbers in SUNRPC CALL requests. Format ip_proto:[!|>|<] <name or number>. icmp_seq:0.). 3.18 rpc The rpc keyword is used to check for a RPC application.) 172 .19 ip proto The ip proto keyword allows checks against the IP protocol header. For a list of protocols that may be specified by name. [<procedure number>|*]>.Example This example looks for an ICMP Sequence of 0. Example This example looks for IGMP traffic. 3.) 3.6. Warning Because of the fast pattern matching engine. Format rpc:<application number>. alert ip any any -> any any (sameip. the RPC keyword is slower than looking for the RPC values by using normal content matching. see /etc/protocols. Format sameip. Example This example looks for any traffic where the Source IP and the Destination IP is the same.6. alert tcp any any -> any 111 (rpc:100000. Wildcards are valid for both version and procedure numbers by using ’*’. Example The following example looks for an RPC portmap GETPORT request. [<version number>|*].6. *. version. 3. alert ip any any -> any any (ip_proto:igmp.20 sameip The sameip keyword allows rules to check if the source ip is the same as the destination IP. equal • != .) 3. use: alert tcp any 80 -> any any (flow:to_client. noalert][.greater than • = . • The optional noalert parameter causes the rule to not generate an alert when it matches.6.less than • > . ! △NOTE The stream size option is only available when the Stream5 preprocessor is enabled. ! △NOTE The stream reassemble option is only available when the Stream5 preprocessor is enabled.greater than or equal Example For example. The tos keyword is used to check the IP TOS field for a specific value.client. to look for a session that is less that 6 bytes from the client side.6. The ttl keyword is used to check the IP time-to-live value.3. <operator>.11: Non-payload detection rule option keywords Keyword fragoffset ttl tos Description The fragoffset keyword allows one to compare the IP fragment offset field against a decimal value. <number>. Format stream_reassemble:<enable|disable>. <server|client|both>[. to disable TCP reassembly for client traffic when we see a HTTP 200 Ok Response message. Example For example. established.21 stream reassemble The stream reassemble keyword allows a rule to enable or disable TCP stream reassembly on matching traffic.noalert. as determined by the TCP sequence numbers. use: alert tcp any any -> any any (stream_size:client. Format stream_size:<server|client|both|either>. fastpath].) 3.6.23 Non-Payload Detection Quick Reference Table 3. stream_reassemble:disable. content:"200 OK". • The optional fastpath parameter causes Snort to ignore the rest of the connection.6.less than or equal • >= .<.22 stream size The stream size keyword allows a rule to match traffic according to the number of bytes observed.not equal • <= . 173 . Where the operator is one of the following: • < . etc.7. The dsize keyword is used to test the packet payload size. The window keyword is used to check for a specific TCP window size. Format logto:"filename". Example The following example logs all printable strings in a telnet packet. The ip proto keyword allows checks against the IP protocol header. or all. HTTP CGI scans. The rpc keyword is used to check for a RPC application. The fragbits keyword is used to check if fragmentation and reserved bits are set in the IP header.7. This is especially handy for combining data from things like NMAP activity. The binary keyword prints out data in a binary format. The ipopts keyword is used to check if a specific IP option is present. The flags keyword is used to check if specific TCP flag bits are present. The printable keyword only prints out data that the user would normally see or be able to type. There are three available argument keywords for the session rule option: printable. The flow keyword allows rules to only apply to certain directions of the traffic flow. The seq keyword is used to check for a specific TCP sequence number.) Given an FTP data session on port 12345. The flowbits keyword allows rules to track states during a transport protocol session. Format session:[printable|binary|all]. It should be noted that this option does not work when Snort is in binary logging mode. The icode keyword is used to check for a specific ICMP code value. The all keyword substitutes non-printable characters with their hexadecimal equivalents. ftp. log tcp any any <> any 12345 (metadata:service ftp-data. this example logs the payload bytes in binary form. The sameip keyword allows rules to check if the source ip is the same as the destination IP. The ack keyword is used to check for a specific TCP acknowledge number. or even web sessions is very useful. 3.) 174 .7 Post-Detection Rule Options 3.2 session The session keyword is built to extract user data from TCP Sessions. and procedure numbers in SUNRPC CALL requests. The itype keyword is used to check for a specific ICMP type value.1 logto The logto keyword tells Snort to log all packets that trigger this rule to a special output log file. The icmp seq keyword is used to check for a specific ICMP sequence value. version. session:binary. binary. There are many cases where seeing what users are typing in telnet. 3. log tcp any any <> any 23 (session:printable. rlogin. The icmp id keyword is used to check for a specific ICMP ID value. only relevant if host type is used.1. type • session .7. tagged alerts will be sent to the same output plugins as the original alert.) 175 .src.Warnings Using the session keyword can slow Snort down considerably. tag:host. a tagged packet limit will be used to limit the number of tagged packets regardless of whether the seconds or bytes count has been reached. Note that neither subsequent alerts nor event filters will prevent a tagged packet from being logged.conf file (see Section 2. <count>. (Note that the tagged packet limit was introduced to avoid DoS situations on high bandwidth sensors for tag rules with a high seconds or bytes counts.1.Log packets from the host that caused the tag to activate (uses [direction] modifier) count • <integer> .4 any -> 10. does not properly handle tagged alerts. 3. flowbits:set.4 for details. The session keyword is best suited for post-processing binary (pcap) log files.seconds. Resp can be used in both passive or inline modes.6. metric • packets .1 any (flowbits:isnotset.11. Subsequent tagged alerts will cause the limit to reset.Tag packets containing the destination IP address of the packet that generated the initial event.5 tag The tag keyword allow rules to log more than just the single packet that triggered the rule.3 resp The resp keyword enables an active response that kills the offending session. additional traffic involving the source and/or destination host is tagged. Doing this will ensure that packets are tagged for the full amount of seconds or bytes and will not be cut off by the tagged packet limit.1. • dst . Once a rule is triggered.4 react The react keyword enables an active response that includes sending a web page or other content to the client and then closing the connection.1. and Stream reassembly will cause duplicate data when the reassembled packets are logged.1.tagged.7. described in Section 2. Units are specified in the <metric> field.Tag packets containing the source IP address of the packet that generated the initial event. the database output plugin.seconds.Tag the host/session for <count> seconds • bytes .1 any \ (content:"TAGMYPACKETS". Format tag:<type>.conf to 0). Tagged traffic is logged to allow analysis of response codes and post-attack traffic.0.1. See 2.) alert tcp 10.3 on how to use the tagged packet limit config option). 3.600.7.tagged.Log packets in the session that set off the rule • host . React can be used in both passive and inline modes. The binary keyword does not log any protocol headers below the application layer. <metric>[.packets.src.3 for details.Tag the host/session for <count> packets • seconds .1. alert tcp any any <> 10. so it should not be used in heavy load situations. Currently. See 2.Tag the host/session for <count> bytes direction .600. The default tagged packet limit value is 256 and can be modified by using a config option in your snort.11.) Also note that if you have a tag option in a rule that uses a metric other than packets. tag:host. • src . but it is the responsibility of the output plugin to properly handle these special alerts. 3.6. direction]. seconds. seconds <s>.10 detection filter detection filter defines a rate which must be exceeded by a source or destination host before a rule can generate an event. 3.6 activates The activates keyword allows the rule writer to specify a rule to add when a specific network event occurs.1.1.100 22 ( \ msg:"SSH Brute Force Attempt". \ detection_filter:track by_src. 3.) 3. count 30. 3.1. Format activated_by:1.7.7. \ count <c>.2. See Section 3. seconds 60. detection filter has the following format: detection_filter: \ track <by_src|by_dst>. rev:1. Both the new string and the content it is to replace must have the same length.Example This example logs the first 10 seconds or the tagged packet limit (whichever comes first) of any telnet session.9 replace The replace keyword is a feature available in inline mode which will cause Snort to replace the prior matching content with the given string. alert tcp any any -> any 23 (flags:s.7. count:50. after evaluating all other rule options (regardless of the position of the filter within the rule source).this rule will fire on every failed login attempt from 10.6 for more information. tag:session.100 any > 10. Format activates:1.1.to_server. It allows the rule writer to specify how many packets to leave the rule enabled for after it is activated.6 for more information.10.7 activated by The activated by keyword allows the rule writer to dynamically enable a rule when a specific activate rule is triggered. Format activated_by:1. depth:4.2. \ sid:1000001. See Section 3. \ content:"SSH". See Section 3. replace:"<string>". Snort evaluates a detection filter as the last step of the detection phase.12.100 during one sampling period of 60 seconds. Example .2.7. one per content. offset:0. after the first 30 failed login attempts: drop tcp 10.7.8 count The count keyword must be used in combination with the activated by keyword.6 for more information.2. flow:established. 3. At most one detection filter is permitted per rule.2. nocase.) 176 . You can have multiple replacements within a rule. Use detection filters (3. It makes sense that the threshold feature is an integral part of this rule.7.8 Rule Thresholds ! △NOTE Rule thresholds are deprecated and will not be supported in a future release. \ track <by_src|by_dst>. This keyword implements an ability for users to react to traffic that matches a Snort rule by closing connection and sending a notice. Available in inline mode only.. established. The value must be nonzero. or event filters (2. flow:to_server. This keyword must be used in combination with the activated by keyword. Replace the prior matching content with the given string of the same length. This keyword allows the rule writer to specify a rule to add when a specific network event occurs.2) as standalone configurations instead. 3. There is a logical difference.Option track by src|by dst count c seconds s Description Rate is tracked either by source IP address or destination IP address. threshold:type limit. \ uricontent:"/robots.txt access". a rule for detecting a too many login password attempts may require more than 5 attempts.7. threshold can be included as part of a rule.10) within rules. alert tcp $external_net any -> $http_servers $http_ports \ (msg:"web-misc robots. Since potentially many events will be generated. The maximum number of rule matches in s seconds allowed before the detection filter limit to be exceeded. For instance. The resp keyword is used attempt to close sessions when an alert is triggered. This keyword allows the rule writer to dynamically enable a rule when a specific activate rule is triggered.11 Post-Detection Quick Reference Table 3. Time period over which count is accrued. or using a standalone threshold applied to the same rule.4. a detection filter would normally be used in conjunction with an event filter to reduce the number of logged events. The session keyword is built to extract user data from TCP Sessions. The tag keyword allow rules to log more than just the single packet that triggered the rule. Examples This rule logs the first event of this SID every 60 seconds. Some rules may only make sense with a threshold. track \ 177 . Format threshold: \ type <limit|threshold|both>. or you can use standalone thresholds that reference the generator and SID they are applied to. nocase.txt". This can be done using the ‘limit’ type of threshold. reference:nessus. 3. \ classtype:web-application-activity.10302. seconds <s>. \ count <c>. C must be nonzero. It allows the rule writer to specify how many packets to leave the rule enabled for after it is activated. This means count is maintained for each unique source IP address or each unique destination IP address. These should incorporate the threshold into the rule. Track by source or destination IP address and if the rule otherwise matches more than the configured rate it will fire. number of rule matching in s seconds that will cause event filter limit to be exceeded. Not the Exploit Try to write rules that target the vulnerability.9 Writing Good Rules There are some general concepts to keep in mind when developing Snort rules to maximize efficiency and speed. Ports or anything else are not tracked. udp.9. flow:to_server. 3.10302. \ uricontent:"/robots.3 Catch the Oddities of the Protocol in the Rule Many services typically send the commands in upper case letters. try and have at least one content (or uricontent) rule option in your rule.2 Catch the Vulnerability. instead of shellcode that binds a shell. rev:1.txt access". established. threshold:type both. the less likely that rule and all of it’s rule options will be evaluated unnecessarily . or destination IP address. seconds 60. the rule is less vulnerable to evasion when an attacker changes the exploit slightly. alert tcp $external_net any -> $http_servers $http_ports \ (msg:"web-misc robots. 3. threshold:type threshold.txt access". By writing rules for the vulnerability. c must be nonzero value. potentially putting a drag on performance.) This rule logs at most one event every 60 seconds if at least 10 events on this SID are fired. look for a the vulnerable command with an argument that is too large. reference:nessus. the client sends: user username_here A simple rule to look for FTP root login attempts could be: 178 . then by those with content and those without.Option type limit|threshold|both track by src|by dst count c seconds s Description type limit alerts on the 1st m events during the time interval. sid:1000852. count 10 . \ classtype:web-application-activity. 3. rate is tracked either by source IP address. tcp. \ uricontent:"/robots. nocase. perform detection in the payload section of the packet.txt". Rules without content are always evaluated (relative to the protocol and port group in which they reside). For example. icmp). rev:1. track \ by_dst. especially when applied to large rule groups like HTTP. While some detection options.9. alert tcp $external_net any -> $http_servers $http_ports \ (msg:"web-misc robots.10302. then ignores any additional events during the time interval. such as pcre and byte test.1 Content Matching Snort groups rules by protocol (ip. flow:to_server. \ classtype:web-application-activity. a multi-pattern matcher is used to select rules that have a chance at matching based on a single content. nocase. instead of a specific exploit. count 10. seconds 60 . Type threshold alerts every m times we see this event during the time interval. then by ports (ip and icmp use slightly differnet logic). reference:nessus. or for each unique destination IP addresses. then ignores events for the rest of the time interval. established. If at all possible. Selecting rules for evaluation via this ”fast” pattern matcher was found to increase performance. they are not used by the fast pattern matching engine. FTP is a good example. This means count is maintained for each unique source IP addresses.it’s safe to say there is generally more ”good” traffic than ”bad”. In FTP. time period over which count is accrued. The longer and more unique a content is. For rules with content.txt". to send the username. \ track by_dst.9. sid:1000852. Type both alerts once per time interval after seeing m occurrences of the event.) 3. s must be nonzero value. most unique string in the attack. ignoring case. Why? The content 0x13 would be found in the first byte. the content 0x13 would be found again starting after where the previous 0x13 was found. the recursion implementation is not very smart. 3. A packet of 1024 bytes of 0x13 would fail immediately. and because of recursion. \ content:"root". which is the longest. Rules that are not properly written can cause Snort to waste time duplicating checks. For example. By looking at this rule snippit. • The rule has a pcre option. Without recursion. because of recursion. content:"b".established. verifying this is traffic going to the server on an established session. dsize:1. While recursion is important for detection. and if any of the detection options after that pattern fail. as the dsize check is the first option checked and dsize is a discrete check without recursion.) There are a few important things to note in this rule: • The rule has a flow option. a packet with 1024 bytes of 0x13 could cause 1023 too many pattern match attempts and 1023 too many dsize checks.alert tcp any any -> any any 21 (content:"user root". the following rule options are not optimized: content:"|13|". even though it is obvious that the payload “aab” has “a” immediately followed by “b”. pcre:"/user\s+root/i". repeating until 0x13 is not found in the payload again. take the following rule: alert ip any any -> any any (content:"a". it is obvious the rule looks for a packet with a single byte of 0x13. The way the recursion works now is if a pattern matches. This option is added to allow the fast pattern matcher to select this rule for evaluation only if the content root is found in the payload. a good rule will handle all of the odd things that the protocol might handle when accepting the user command. the payload “aab” would fail. then look for the pattern again after where it was found the previous time. within:1. that may not sound like a smart idea. For example. Reordering the rule options so that discrete checks (such as dsize) are moved to the beginning of the rule speed up Snort. but it is needed.9. The optimized rule snipping would be: dsize:1. once it is found. For example. On first read. then the dsize option would fail. The following rule options are discrete and should generally be placed at the beginning of any rule: • dsize • flags • flow 179 . However. because the first ”a” is not immediately followed by “b”. • The rule has a content option. Repeat until the pattern is not found again or the opt functions all succeed.) This rule would look for “a”. the rule needs more smarts than a simple string match. then check the dsize again. content:"|13|". immediately followed by “b”. looking for user.) While it may seem trivial to write a rule that looks for the username root. each of the following are accepted by most FTP servers: user root user root user root user root user<tab>root To handle all of the cases that the FTP server might handle.4 Optimizing Rules The content matching portion of the detection engine has recursion to handle a few evasion cases. looking for root. followed by root. followed at least one space character (which includes tab). A good rule that looks for root login on ftp would be: alert tcp any any -> any 21 (flow:to_server. ... describe each of the fields......... a random uint32...9..... as RPC uses simple length based encoding for passing data.... unique to each request rpc type (call = 0.. . ... ../.....e./bin/sh.. .....system... and figure out how to write a rule to catch this exploit.......@(:..... There are a few things to note with RPC: • Numbers are written as uint32s..../ ............ ............... ..... . ... Let’s break this up... ........ • Strings are written as a uint32 specifying the length of the string................... 89 00 00 00 00 00 00 00 09 00 00 01 00 00 00 00 9c 00 00 87 00 00 00 00 e2 00 02 88 0a 01 01 20 the request id.metasplo it..../............................................... the string... taking four bytes...... ...metasplo it... @(:.. .. The number 26 would show up as 0x0000001a............ RPC was the protocol that spawned the requirement for these two rule options............. and then null bytes to pad the length of the string to end on a 4 byte boundary. The string “bob” would show up as 0x00000003626f6200.....5 Testing Numerical Values The rule options byte test and byte jump were written to support writing rules for protocols that have length encoded data. In order to understand why byte test and byte jump are useful.. ...../. ........ .... ........ let’s go through an exploit attempt against the sadmind service..• fragbits • icmp id • icmp seq • icode • id • ipopts • ip proto • itype • seq • session • tos • ttl • ack • window • resp • sameip 3. we need to make sure that our packet is an RPC call. depth:4. depth:4. the value we want to check. content:"|00 00 00 01|". let’s put them all together. we use: byte_jump:4. the procedure 1.align. 36 bytes from the beginning of the packet.length of verifier (0. depth:4. content:"|00 00 00 00|". making sure to account for the padding that RPC requires on strings. 181 . depth:4. we have decoded enough of the request to write our rule. content:"|00 01 87 88|". and turn those 4 bytes into an integer and jump that many bytes forward. offset:12. within:4. In english.align. offset:16. content:"|00 00 00 00|". but we want to skip over it and check a number value after the hostname. content:"|00 00 00 00|".extra group ids (0) 00 00 00 00 00 00 00 00 .unix timestamp (0x40283a10 = 1076378128 = feb 10 01:55:28 2004 gmt) 00 00 00 0a . the vulnerable procedure. offset:4. offset:20. However. we are now at: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 which happens to be the exact location of the uid. we want to read 4 bytes. Now that we have all the detection capabilities for our rule. depth:4. content:"|00 00 00 01|". This is where byte test is useful. To do that in a Snort rule. We don’t care about the hostname. content:"|00 00 00 01|". content:"|00 01 87 88|". As such.36. First. then we want to look for the uid of 0. Then. content:"|00 00 00 00|". aligning on the 4 byte boundary. we need to make sure that our packet has auth unix credentials. depth:4.gid of requesting user (0) 00 00 00 00 . turn it into a number. aka none) . offset:12. we need to make sure that our packet is a call to sadmind. offset:16. sadmind runs any request where the client’s uid is 0 as root.uid of requesting user (0) 00 00 00 00 . and jump that many bytes forward. aka none) The rest of the packet is the request that gets passed to procedure 1 of sadmind.length of the client machine name (0x0a = 10) 4d 45 54 41 53 50 4c 4f 49 54 00 00 . Then.metasploit 00 00 00 00 . we know the vulnerability is that sadmind trusts the uid coming from the client. content:"|00 00 00 01|". byte_jump:4. depth:4. offset:20. Starting at the length of the hostname. Then. within:4. depth:4.verifier flavor (0 = auth\_null. offset:4.40 28 3a 10 .36. If the sadmind service was vulnerable to a buffer overflow when reading the client’s hostname. offset:12. byte_test:4. depth:8. offset:12. 00 00 01|".>. starting 36 bytes into the packet. content:"|00 00 00 01 00 00 00 01|".200. offset:16.align.The 3rd and fourth string match are right next to each other. content:"|00 00 00 01 00 byte_jump:4.36. Our full rule would be: content:"|00 00 00 00|". 182 . offset:4.>. content:"|00 01 87 88|". and then make sure it is not too large (let’s say bigger than 200 bytes). depth:4. To do that. content:"|00 00 00 00|". depth:4. We end up with: content:"|00 00 00 00|". we would check the length of the hostname to make sure it is not too large. we would read 4 bytes. offset:4. so we should combine those patterns. we do: byte_test:4. offset:16. within:4.36. instead of reading the length of the hostname and jumping that many bytes forward. depth:4. depth:8. depth:4.200. turn it into a number.36. In Snort. content:"|00 01 87 88|". The definition of each is defined in the following sections. int build.1 DynamicPluginMeta The DynamicPluginMeta structure defines the type of dynamic module (preprocessor. and processing functions. check the appropriate header files for the current definitions. char *libraryPath. char uniqueName[MAX_NAME_LEN].h as: #define MAX_NAME_LEN 1024 #define TYPE_ENGINE 0x01 #define TYPE_DETECTION 0x02 #define TYPE_PREPROCESSOR 0x04 typedef struct _DynamicPluginMeta { int type. A shared library can implement all three types. int major. restart. the dynamic API presents a means for loading dynamic libraries and allowing the module to utilize certain functions within the main snort code. 4. fatal errors.h.2 DynamicPreprocessorData The DynamicPreprocessorData structure defines the interface the preprocessor uses to interact with snort itself. access to the StreamAPI. and path to the shared library.Chapter 4 Dynamic Modules Preprocessors. errors. It is defined in sf dynamic preprocessor. detection capabilities. 4. int minor.1 Data Structures A number of data structures are central to the API. handling Inline drops. 4. It also includes information for setting alerts. but typically is limited to a single functionality such as a preprocessor. Check the header file for the current definition. Beware: the definitions herein may be out of date. This data structure should be initialized when the preprocessor shared library is loaded. rules. } DynamicPluginMeta. and it provides access to the normalized http and alternate data buffers. and debugging info. the version information.1. and rules as a dynamic plugin to snort. This includes functions to register the preprocessor’s configuration parsing. It is defined in sf dynamic meta. or detection engine). and rules can now be developed as dynamically loadable module to snort. It includes function to log messages.1. exit. The remainder of this chapter will highlight the data structures and API functions used in developing preprocessors. 183 . detection engines. When enabled via the –enabledynamicplugin configure option. } DynamicEngineData. Check the header file for the current definitions. address and port information and rule information (classification. and a list of references). RuleOption **options. Rule The Rule structure defines the basic outline of a rule and contains the same set of information that is seen in a text rule. #endif char **debugMsgFile.1. LogMsgFunc logMsg. Additional data structures may be defined to reference other protocol fields. #ifdef HAVE_WCHAR_H DebugWideMsgFunc debugWideMsg. /* NULL terminated array of RuleOption union */ ruleEvalFunc evalFunc.h. The following structures are defined in sf snort plugin api. This includes functions for logging messages. 184 .3 DynamicEngineData The DynamicEngineData structure defines the interface a detection engine uses to interact with snort itself. #define RULE_MATCH 1 #define RULE_NOMATCH 0 typedef struct _Rule { IPInfo ip. DebugMsgFunc debugMsg. generator and signature IDs. int *debugMsgLine. It is defined in sf dynamic engine. It also includes a list of rule options and an optional evaluation function. PCRECompileFunc pcreCompile. fatal errors. classification.4.1. char *dataDumpDirectory. 4.h as: typedef struct _DynamicEngineData { int version. RuleInformation info. GetPreprocRuleOptFuncs getPreprocOptFuncs. CheckFlowbit flowbitCheck. LogMsgFunc errMsg. revision.4 SFSnortPacket The SFSnortPacket structure mirrors the snort Packet structure and provides access to all of the data contained in a given packet. PCREExecFunc pcreExec. 4. RegisterBit flowbitRegister. LogMsgFunc fatalMsg. and debugging info as well as a means to register and check flowbits. u_int8_t *altBuffer.1. It and the data structures it incorporates are defined in sf snort packet. GetRuleData getRuleData. PCREStudyFunc pcreStudy. priority. DetectAsn1 asn1Detect. It also includes a location to store rule-stubs for dynamic rules that are loaded. UriInfo *uriBuffers[MAX_URIINFOS]. and it provides access to the normalized http and alternate data buffers. SetRuleData setRuleData.5 Dynamic Rules A dynamic rule should use any of the following data structures. RegisterRule ruleRegister. That includes protocol. errors.h. src address and port.any. RuleReference The RuleReference structure defines a single rule reference. void *ruleData. /* 0 for non TCP/UDP */ } IPInfo. char * src_port. /* 0 for non TCP/UDP */ char direction. /* NULL terminated array of references */ RuleMetaData **meta. typedef struct _RuleReference { char *systemName. revision. /* String format of classification name */ u_int32_t priority. char * dst_port. char * src_addr. HTTP PORTS. typedef struct _RuleInformation { u_int32_t genID. typedef struct _IPInfo { u_int8_t protocol. char *classification. and a list of references. and direction.char initialized. u_int32_t numOptions. HOME NET. including the system name and rereference identifier. signature ID. Some of the standard strings and variables are predefined . /* NULL terminated array of references */ } RuleInformation. char *message. used internally */ /* Flag with no alert. /* non-zero is bi-directional */ char * dst_addr. /* Rule Initialized. where the parameter is a pointer to the SFSnortPacket structure. HTTP SERVERS. char noAlert. used internally */ /* Rule option count. RuleInformation The RuleInformation structure defines the meta data for a rule and includes generator ID. classification. char *refIdentifier. message text. priority. #define #define #define #define #define #define #define ANY_NET HOME_NET EXTERNAL_NET ANY_PORT HTTP_SERVERS HTTP_PORTS SMTP_SERVERS "any" "$HOME_NET" "$EXTERNAL_NET" "any" "$HTTP_SERVERS" "$HTTP_PORTS" "$SMTP_SERVERS" 185 . } RuleReference. /* } Rule. u_int32_t revision. used internally */ Hash table for dynamic data pointers */ The rule evaluation function is defined as typedef int (*ruleEvalFunc)(void *). IPInfo The IPInfo structure defines the initial matching criteria for a rule and includes the protocol. destination address and port. RuleReference **references. u_int32_t sigID. etc. u_int32_t depth. OPTION_TYPE_BYTE_JUMP. should be marked for fast pattern evaluation. OPTION_TYPE_LOOP. The most unique content. ByteExtract *byteExtract. #define NOT_FLAG 0x10000000 Some options also contain information that is initialized at run time. /* must include a CONTENT_BUF_X */ void *boyer_ptr. typedef struct _RuleOption { int optionType. OPTION_TYPE_CURSOR. and flags (one of which must specify the buffer – raw.RuleOption The RuleOption structure defines a single rule option as an option type and a reference to the data specific to that option. PreprocessorOption *preprocOpt. union { void *ptr. #define CONTENT_NOCASE #define CONTENT_RELATIVE #define CONTENT_UNICODE2BYTE 0x01 0x02 0x04 186 . } RuleOption. that which distinguishes this rule as a possible match to a packet. u_int32_t flags. It includes the pattern. OPTION_TYPE_PCRE. } option_u. u_int32_t incrementLength. FlowBitsInfo *flowBit. OPTION_TYPE_FLOWBIT. The option types and related structures are listed below. Each option has a flags field that contains specific flags for that option as well as a ”Not” flag. OPTION_TYPE_MAX }. • OptionType: Content & Structure: ContentInfo The ContentInfo structure defines an option for a content search. PCREInfo *pcre. if no ContentInfo structure in a given rules uses that flag. OPTION_TYPE_FLOWFLAGS. OPTION_TYPE_BYTE_EXTRACT. OPTION_TYPE_CONTENT. OPTION_TYPE_BYTE_TEST. OPTION_TYPE_ASN1. OPTION_TYPE_SET_CURSOR. and a designation that this content is to be used for snorts fast pattern evaluation. u_int32_t patternByteFormLength. OPTION_TYPE_HDR_CHECK. Additional flags include nocase. Asn1Context *asn1. typedef enum DynamicOptionType { OPTION_TYPE_PREPROCESSOR. relative. unicode. depth and offset. the one with the longest content length will be used. u_int8_t *patternByteForm. typedef struct _ContentInfo { u_int8_t *pattern. the integer ID for a flowbit. FlowFlags *flowFlags. } ContentInfo. URI or normalized – to search). ContentInfo *content. int32_t offset. etc. The ”Not” flag is used to negate the results of evaluating that option. such as the compiled PCRE information. CursorInfo *cursor. ByteData *byte. In the dynamic detection engine provided with Snort. Boyer-Moore content information. HdrOptCheck *hdrData. LoopInfo *loop. isnotset). which specify the direction (from server. void *compiled_expr. etc. to server). u_int32_t flags. 8_t operation. unset. u_int32_t flags.h provides flags: PCRE_CASELESS PCRE_MULTILINE PCRE_DOTALL PCRE_EXTENDED PCRE_ANCHORED PCRE_DOLLAR_ENDONLY PCRE_UNGREEDY */ typedef struct _PCREInfo { char *expr. . established session. } FlowFlags. #define ASN1_ABS_OFFSET 1 187 . and flags to specify the buffer. It includes the name of the flowbit and the operation (set.: ASN. isset. It mirrors the ASN1 rule option and also includes a flags field. • OptionType: Flow Flags & Structure: FlowFlags The FlowFlags structure defines a flow option. It includes the PCRE expression. /* must include a CONTENT_BUF_X */ } PCREInfo.1 & Structure: Asn1Context The Asn1Context structure defines the information for an ASN1 option. as defined in PCRE. void *compiled_extra. u_int32_t id. It includes the flags. pcre flags such as caseless. } FlowBitsInfo. /* pcre. • OptionType: Flowbit & Structure: FlowBitsInfo The FlowBitsInfo structure defines a flowbits option. toggle.h. u_int32_t compile_flags. and flags. similar to the isdataat rule option. a mask to ignore that part of the header field.¿. This can be used to verify there is sufficient data to continue evaluation. an offset. • OptionType: Protocol Header & Structure: HdrOptCheck The HdrOptCheck structure defines an option to check a protocol header for a specific value. a value. ¡. It includes an offset and flags that specify the buffer. The flags must specify the buffer. a value. /* u_int32_t op. int length. -.=. typedef struct _CursorInfo { int32_t offset. as well as byte tests and byte jumps. . It includes the number of bytes.etc). the operation (¡. and flags. int double_overflow. /* u_int32_t mask_value -. /* u_int32_t value. an operation (for ByteTest. int offset_type. int print.#define ASN1_REL_OFFSET 2 typedef struct _Asn1Context { int bs_overflow. The cursor is the current position within the evaluation buffer. multiplier. It includes the header field. int offset.etc). } Asn1Context. /* u_int32_t flags. } HdrOptCheck.=. as related to content and PCRE searches. /* specify one of CONTENT_BUF_X */ } CursorInfo. unsigned int max_length. u_int32_t flags. u_int32_t flags. flags. 189 .32bits is MORE than enough */ must include a CONTENT_BUF_X */ • OptionType: Byte Jump & Structure: ByteData See Byte Test above.static or reference */ char *refId. DynamicElement *increment. or extracted value */ Offset from cursor */ Used for byte jump -. /* Value of static */ int32_t *dynamicInt. For a dynamic element. /* } ByteExtract. The loop option acts like a FOR loop and includes start. end. CursorInfo *cursorAdjust. */ The ByteExtract structure defines the information to use when extracting bytes for a DynamicElement used a in Loop evaltion. } DynamicElement. specifies * relative. u_int8_t initialized. typedef struct _LoopInfo { DynamicElement *start.2 Required Functions Each dynamic module must define a set of functions and data objects to work within this framework. /* u_int32_t multiplier. u_int32_t multiplier. u_int32_t op. /* Pointer to value of dynamic */ } data. typedef struct _ByteExtract { u_int32_t bytes. u_int32_t op. /* reference ID (NULL if static) */ union { void *voidPtr. /* Holder */ int32_t staticInt. It includes whether the element is static (an integer) or dynamic (extracted from a buffer in the packet) and the value.DynamicElement The LoopInfo structure defines the information for a set of options that are to be evaluated repeatedly. for checkValue */ Value to compare value against. an offset.ByteExtract.. } LoopInfo. It includes a cursor adjust that happens through each iteration of the loop.#define CHECK_ATLEASTONE #define CHECK_NONE typedef struct _ByteData { u_int32_t bytes. /* type of this field . a reference to a RuleInfo structure that defines the RuleOptions are to be evaluated through each iteration. u_int32_t value. and a reference to the DynamicElement. for checkValue. multiplier. #define DYNAMIC_TYPE_INT_STATIC 1 #define DYNAMIC_TYPE_INT_REF 2 typedef struct _DynamicElement { char dynamicType. the value is filled by a related ByteExtract option that is part of the loop. • OptionType: Set Cursor & Structure: CursorInfo See Cursor Check above. 4. } ByteData. /* char *refId.. • OptionType: Loop & Structures: LoopInfo. and increment values as well as the comparison operation for termination. struct _Rule *subRule. /* void *memoryLocation. DynamicElement *end. u_int32_t flags. One of those options may be a ByteExtract. 9 10 /* /* /* /* /* /* Number of bytes to extract */ Type of byte comparison. /* int32_t offset. int32_t offset. flags specifying the buffer. It includes the number of bytes. It handles bounds checking for the specified buffer and returns RULE NOMATCH if the cursor is moved out of bounds. etc). – int byteJump(void *p.4. – int setCursor(void *p. byteJump. and pcreMatch to adjust the cursor position after a successful match.2. as delimited by Asn1Context and cursor. – int byteTest(void *p. u int8 t **cursor) This function adjusts the cursor as delimited by CursorInfo. This uses the individual functions outlined below for each of the rule options and handles repetitive content issues. ContentInfo* content. • int RegisterRules(Rule **) This is the function to iterate through each rule in the list. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library. Value extracted is stored in ByteExtract memoryLocation parameter. u int8 t **cursor) This is a wrapper for extractValue() followed by setCursor(). and the distance option corresponds to offset. ByteData *byteData. – int checkCursor(void *p. • int ruleMatch(void *p.1 Preprocessors Each dynamic preprocessor library must define the following functions. – int checkValue(void *p. Cursor position is updated and returned in *cursor. It is also used by contentMatch. u int8 t *cursor) This function evaluates an ASN. initialize it to setup content searches. checking for the existence of the expression as delimited by PCREInfo and cursor. New cursor position is returned in *cursor. Asn1Context *asn1. drop. – int checkFlow(void *p. With a text rule. ByteData *byteData. Each of the functions below returns RULE MATCH if the option matches based on the current criteria (cursor position. CursorInfo *cursorInfo. PCREInfo *pcre. Rule *rule) This is the function to evaluate a rule if the rule does not have its own Rule Evaluation Function. and register flowbits. ByteExtract *byteExtract. as specified by FlowBitsInfo. u int8 t *cursor) This is a wrapper for extractValue() followed by checkValue().h. u int8 t **cursor) This function evaluates a single pcre for a given packet. – int processFlowbits(void *p. u int8 t *cursor) This function extracts the bytes from a given packet. ByteData *byteData. the with option corresponds to depth. Cursor position is updated and returned in *cursor.c. as specified by ByteExtract and delimited by cursor. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library. 4. – int contentMatch(void *p. CursorInfo *cursorInfo. checking for the existence of that content as delimited by ContentInfo and cursor. u int32 t value.Rule **) This is the function to iterate through each rule in the list and write a rule-stop to be used by snort to control the action of the rule (alert.1 check for a given packet. • int InitializePreprocessor(DynamicPreprocessorData *) This function initializes the data structure for use by the preprocessor into a library global variable. – int extractValue(void *p. etc). FlowFlags *flowflags) This function evaluates the flow for a given packet. 190 . log. – int pcreMatch(void *p. PCRE evalution data. The metadata and setup function for the preprocessor should be defined sf preproc info. These are defined in the file sf dynamic preproc lib. u int8 t *cursor) This function validates that the cursor is within bounds of the specified buffer. u int8 t *cursor) This function compares the value to the value stored in ByteData. The sample code provided with Snort predefines those functions and defines the following APIs to be used by a dynamic rules library. – int detectAsn1(void *p. dpd and invokes the setup function. • int InitializeEngineLib(DynamicEngineData *) This function initializes the data structure for use by the engine.2. FlowBitsInfo *flowbits) This function evaluates the flowbits for a given packet. • int DumpRules(char *. u int8 t **cursor) This function evaluates a single content for a given packet.2 Detection Engine Each dynamic detection engine library must define the following functions. It will interact with flowbits used by text-based rules. – int checkHdrOpt(void *p.c. Take extra care to handle this situation and search for the matched pattern again if subsequent rule options fail to match. This should be done for both content and PCRE options.2.c and sf dynamic preproc lib.h. This preprocessor always alerts on a Packet if the TCP port matches the one configured. ! △NOTE 4. – void setTempCursor(u int8 t **temp cursor. – void revertTempCursor(u int8 t **temp cursor. • int DumpSkeletonRules() This functions writes out the rule-stubs for rules that are loaded. • int InitializeDetection() This function registers each rule in the rules library. u int8 t **cursor) This function is used to revert to a previously saved temporary cursor position. Examples are defined in the file sfnort dynamic detection lib. LoopInfo *loop. 4. u int8 t **cursor) This function iterates through the SubRule of LoopInfo. 191 . • Rule *rules[] A NULL terminated list of Rule structures that this library defines.c and is compiled together with sf dynamic preproc lib. as spepcifed by PreprocessorOption. as specified by HdrOptCheck. Cursor position is updated and returned in *cursor. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library.c into lib sfdynamic preprocessor example. Cursor position is updated and returned in *cursor. #define #define #define #define MAJOR_VERSION 1 MINOR_VERSION 0 BUILD_VERSION 0 PREPROC_NAME "SF_Dynamic_Example_Preprocessor" ExampleSetup #define DYNAMIC_PREPROC_SETUP extern void ExampleSetup(). This is the metadata for this preprocessor. patterns that occur more than once may result in false negatives. The metadata and setup function for the preprocessor should be defined in sfsnort dynamic detection lib.so. Define the Setup function to register the initialization function. register flowbits. u int8 t **cursor) This function is used to handled repetitive contents to save off a cursor position temporarily to be reset at later point. If you decide to write you own rule evaluation function. – int preprocOptionEval(void *p.h are used.3 Examples This section provides a simple example of a dynamic preprocessor and a dynamic rule. HdrOptCheck *optData) This function evaluates the given packet’s protocol headers. u int8 t **cursor) This function evaluates the preprocessor defined option. etc. • int EngineVersion(DynamicPluginMeta *) This function defines the version requirements for the corresponding detection engine library. defined in sf preproc info. 4.h. The remainder of the code is defined in spp example. as delimited by LoopInfo and cursor.3. It should set up fast pattern-matcher content.3 Rules Each dynamic rules library must define the following functions.1 Preprocessor Example The following is an example of a simple preprocessor. – int loopEval(void *p. The sample code provided with Snort predefines those functions and uses the following data within the dynamic rules library. PreprocessorOption *preprocOpt. This assumes the the files sf dynamic preproc lib. "Preprocessor: Example is setup\n"). void ExampleSetup() { _dpd. DEBUG_WRAP(_dpd. if (port < 0 || port > 65535) { _dpd.).logMsg("Example dynamic preprocessor configuration\n"). char *argEnd. arg)) { arg = strtok(NULL. void ExampleInit(unsigned char *). " \t\n\r"). } The function to process the packet and log an alert if the either port matches. void *context) { SFSnortPacket *p = (SFSnortPacket *)pkt. ExampleInit). &argEnd. } The initialization function to parse the keywords from snort. } Port: %d\n". 10000).#define GENERATOR_EXAMPLE 256 extern DynamicPreprocessorData _dpd. ID 10000 */ _dpd. "Preprocessor: Example is initialized\n").debugMsg(DEBUG_PLUGIN.addPreproc(ExampleProcess. if (!arg) { _dpd. } /* Register the preprocessor function. 192 . _dpd. "\t\n\r").). void ExampleProcess(void *.conf.debugMsg(DEBUG_PLUGIN. arg).fatalMsg("ExamplePreproc: Missing port\n"). void ExampleInit(unsigned char *args) { char *arg. port).registerPreproc("dynamic_example". u_int16_t portToCheck. PRIORITY_TRANSPORT. #define SRC_PORT_MATCH 1 #define SRC_PORT_MATCH_STR "example_preprocessor: src port match" #define DST_PORT_MATCH 2 #define DST_PORT_MATCH_STR "example_preprocessor: dest port match" void ExampleProcess(void *pkt. if (!p->ip4_header || p->ip4_header->proto != IPPROTO_TCP || !p->tcp_header) { /* Not for me. DEBUG_WRAP(_dpd. Transport layer. } portToCheck = port.fatalMsg("ExamplePreproc: Invalid option %s\n". portToCheck). unsigned long port.fatalMsg("ExamplePreproc: Invalid port %d\n". _dpd.logMsg(" } else { _dpd. arg = strtok(args. void *). if(!strcasecmp("port". 10). return */ return. } port = strtoul(arg. SRC_PORT_MATCH_STR. no depth or offset. 0. 0).3. \ content:"NetBus". Per the text version. static RuleOption sid109option1 = { OPTION_TYPE_FLOWFLAGS. defined in detection lib meta.401. return.established. 0.h. { &sid109flow } }. } if (p->dst_port == portToCheck) { /* Destination port matched. return. Declaration of the data structures. case sensitive.if (p->src_port == portToCheck) { /* Source port matched. and non-relative.alertAdd(GENERATOR_EXAMPLE. } } 4. take from the current rule set. flow:from_server. reference:arachnids. 0). • Flow option Define the FlowFlags structure and its corresponding RuleOption. 1. The snort rule in normal format: alert tcp $HOME_NET 12345:12346 -> $EXTERNAL_NET any \ (msg:"BACKDOOR netbus active". classtype:misc-activity. • Content Option Define the ContentInfo structure and its corresponding RuleOption. 193 . 3. 1.alertAdd(GENERATOR_EXAMPLE. It is implemented to work with the detection engine provided with snort. /*.c. log alert */ _dpd. SRC_PORT_MATCH. static FlowFlags sid109flow = { FLOW_ESTABLISHED|FLOW_TO_CLIENT }. SID 109. content is ”NetBus”. DST_PORT_MATCH.) This is the metadata for this rule library.established. rev:5. flow is from server.2 Rules The following is an example of a simple rule. \ sid:109. 3. log alert */ _dpd. Search on the normalized buffer by default. DST_PORT_MATCH_STR. NOTE: This content will be used for the fast pattern matcher since it is the longest content option for this rule and no contents have a flag of CONTENT FAST PATTERN. search for */ boyer/moore info */ byte representation of "NetBus" */ length of byte representation */ increment length */ The list of rule options. Rule options are evaluated in the order specified. /* offset */ CONTENT_BUF_NORMALIZED. not yet initialized. static RuleOption sid109option2 = { OPTION_TYPE_CONTENT. The rule itself. /* source port(s) */ 0. /* holder for 0 /* holder for }. /* Use internal eval func */ 0. /* pattern to 0. used internally */ 0. { &sid109content } }. &sid109option2. /* Holder. used internally for flowbits */ NULL /* Holder. /* Type */ "401" /* value */ }. RuleOption *sid109options[] = { &sid109option1. with the protocol header. akin to => tcp any any -> any any */ { IPPROTO_TCP. /* proto */ HOME_NET. sid109options. /* revision */ "misc-activity". • Rule and Meta Data Define the references. /* Direction */ EXTERNAL_NET. Rule sid109 = { /* protocol header. /* holder for 0. /* flags */ NULL. used internally */ 194 . /* source IP */ "12345:12346". /* priority */ "BACKDOOR netbus active". option count. meta data (sid.use 3 to distinguish a C rule */ 109. /* destination port */ }. /* holder for NULL. /* sigid */ 5. static RuleReference *sid109refs[] = { &sid109ref_arachnids. used internally */ 0. /* Holder. /* metadata */ { 3. /* genid -. etc). /* message */ sid109refs /* ptr to references */ }. no alert. classification. static RuleReference sid109ref_arachnids = { "arachnids". NULL }. message.static ContentInfo sid109content = { "NetBus". rule data. /* classification */ 0. /* Holder. /* ptr to rule options */ NULL. NULL }. /* depth */ 0. /* destination IP */ ANY_PORT. • The List of rules defined by this rules library The NULL terminated list of rules. pcre. NULL }. flowbits. 195 . extern Rule sid109. extern Rule sid637. The InitializeDetection iterates through each Rule in the list and initializes the content. etc. &sid637. Rule *rules[] = { &sid109. Packets are then sent through the registered set of preprocessors. Similarly.sourceforge.1 Preprocessors For example. look at an existing output plugin and copy it to a new item and change a few things.Chapter 5 Snort Development Currently.1 Submitting Patches Patches to Snort should be sent to the snort-devel@lists.net mailing list. It will someday contain references on how to create new detection plugins and preprocessors. This allows this to be easily extensible. please use the HEAD branch of cvs.2.3 Output Plugins Generally. traffic is acquired from the network link via libpcap. Patches should done with the command diff -nu snort-orig snort-new. The detection engine checks each packet against the various options listed in the Snort config files. Later. Packets are passed through a series of decoder routines that first fill out the packet structure for link level protocols then are further decoded for things like TCP and UDP ports. 5. Each of the keyword options is a plugin. 5.2.2 Detection Plugins Basically. 5. It can do this by checking: if (p->tcph==null) return.h for the list of pkt * constants. Features go into HEAD. We’ve had problems in the past of people submitting patches only to the stable branch (since they are likely writing this stuff for their own IDS purposes). a TCP analysis preprocessor could simply return if the packet does not have a TCP header. this chapter is here as a place holder. Check out src/decode.2. new output plugins should go into the barnyard project rather than the Snort project. 196 . Packets are then sent through the detection engine. We are currently cleaning house on the available output options. we’ll document what these few things are. This is intended to help developers get a basic understanding of whats going on quickly.2 Snort Data Flow First. If you are going to be helping out with Snort development. there are a lot of packet flags available that can be used to mark a packet as “reassembled” or logged. Each preprocessor checks to see if this packet is something it should look at. 5. 5. End users don’t really need to be reading this section. Bug fixes are what goes into STABLE.] [3] [1] [6] [4] [5] 198 . This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/doc/52871359/snort-manual
CC-MAIN-2016-50
refinedweb
57,687
60.82
This is something I picked up recently. I am unsure what the best practices are-- but the method I picked up appears to work. What you will need is a compiler (we will be using GCC), and of course, Python. We begin by writing some C code we want to compile into a library. For demonstration, I wrote some simple code. Note I didn't divide this code into a header file and a source file. That would only complicate the tutorial, but in practice you would most likely divide your code by standard C practices such as using a header file and a source file. The code I wrote is similar to Python's randint function-- we will make our own random function. test.c #include <stdlib.h> #include <time.h> #define TRUE 1 #define FALSE 0 static int randomized = FALSE; void randomize(void) { srand(time(NULL)); randomized = TRUE; } /* returns a random number between base and max, inclusive */ int get_random(int base, int max) { if (randomized == FALSE){ randomize(); } return rand() % (max - base + 1) + base; } Well-- there we have it. We have the C code, but we need to create a library (in Linux, a .so file). We can do so by compiling our code and outputting a shared library. For example, here is how you can do it with GCC: gcc -Wall -O3 -shared ~/test.c -o test.so This turns on warning, optimizes the code and exports the test code as a file called, 'test.so' into our home directory. The shared flag is what outputs the shared library. Finally, we need to use this in our Python script. The first thing we want to do is import the ctypes module. import ctypes Next we need to load our library, in Linux we will do it like so (no pun intended): my_test_lib = ctypes.cdll.LoadLibrary('/home/username/test.so') Then you can call functions inside your library by calling them through the library object, like this: my_test_lib.get_random(1, 10) Here's the complete source for the test Python code I wrote. #!/usr/bin/env python import ctypes def main(): my_test_lib = ctypes.cdll.LoadLibrary('/home/usrname/test.so') for i in range(10): # Note, this uses the Python 2 print print "Random = %d" % my_test_lib.get_random(1, 10) if __name__ == '__main__': main() Ending notes: The Windows version of this is slightly different-- you need to compile the library in a slightly different way and the way you import the library will be a little different. This post has been edited by stackoverflow: 09 November 2011 - 10:00 PM
https://www.dreamincode.net/forums/topic/252650-making-importing-and-using-a-c-library-in-python-linux-version/
CC-MAIN-2019-47
refinedweb
429
73.47
I am currently reading an analog value on my photon and can see that reading in console thanks to Ric and bdub (cheers) i would like to see that value on my website which i have created on wordpress, is this possible and how would i go about doing it. i have tryed following a few tutorials but had no successhere is my publish code sprintf(publishstring,"%f",analogcalc); Spark.publish("HeaterCurrent",publishstring); thanks.....Ben To save us from redirecting you to those same tutorials, it'd be neat if you could mention which tutorials you've followed, and what you've tried. In essence, this isn't even a Particle specific question, but rather "Wordpress server sent events". Or even lower level "running javascript in wordpress". I tried to follow Ric's tutorial where he controls a servo via a html page and tried to cut out what I didn't need. I know virtually zero about java script. I have been playing with electrons for a while now and can get them interacting with each other but whenever I try to get the data to a Webpage I get stuck. Here's where a link would've been nice. Saves us the trouble of having to guess what exactly it was you've tried (sounds more like BKO's tutorial though). Does that tutorial work (disregarding the wordpress part)? The easiest way to do this, in my opinion, is to push your data to Ubidots.com and then create a dashboard with any info you want, and then past that dashboard into your WordPress account using the iFrame embed code they have for the dashboard you create. There is a Ubidots Library in the Online IDE to get you started. Without knowing more about exactly how you want things to work, I'm wondering if you might be better off using a webhook instead of Publish. With publish, you need to have something that's monitoring your event stream as new data comes in. But with a webhook, you could post the event data to your server as they happen, and have a script that stores the information to be reviewed later. This could be done in a WordPress plugin, for instance. The plugin could be set up to watch for the webhook requests on a certain URL, store the data in the database (probably using the Settings API), and then you could have a dashboard module to display the most recent values. But if you don't need to store the data, and just want to be able to view live events as they happen, then you could probably just use Particle's JavaScript libraries for that. So, if you can tell us more about how you think you want things to work, that would be helpful. Thanks for all your helpful replies, at the moment i am going down the ubidots route. I (at the moment just want to see the live data which is being published. I am reading an analog value from a current transformenr which is connected to my heater in my office. I am an electrician however i only have basic coding knowledge so all the wiring is pretty straight forward but i just struggle a bit with the coding, im not to bad at local code controlling inputs and outputs its just when it comes to the data to web bits i struggle.i have followed the ubidots tutorial however whenn i flash the firmware my photon goes into red led state is there anythingstanding out? // This #include statement was automatically added by the Particle IDE. #include <MQTT.h> // This #include statement was automatically added by the Particle IDE. #include <UbidotsMQTT.h> // This #include statement was automatically added by the Particle IDE. #include "MQTT/MQTT.h" // This #include statement was automatically added by the Particle IDE. #include "UbidotsMQTT.h" #define TOKEN "bens token" // Add here your Ubidots TOKEN #define VARIABLE_IDENTIFIER_ONE "humidity" // Add a variable identifier, it must be in lowercase #define VARIABLE_IDENTIFIER_TWO "temperature" // Add a variable identifier, it must be in lowercase #define DATA_SOURCE_NAME "my-device" void callback(char* topic, byte* payload, unsigned int length); Ubidots client(TOKEN, callback); void callback(char* topic, byte* payload, unsigned int length) { char p[length + 1]; memcpy(p, payload, length); p[length] = NULL; String message(p); Serial.write(payload, length); Serial.println(topic); } void setup() { Serial.begin(115200); while (client.connect()); client.setDataSourceLabel(DATA_SOURCE_NAME); } void loop() { float hum = analogRead(A0); float temp = analogRead(A1); client.add(VARIABLE_IDENTIFIER_ONE, hum); client.add(VARIABLE_IDENTIFIER_TWO, temp); client.sendValues(); } cheers Ben Can you explain more precisely or post a video how this red flashing looks? Till then, just a few hints that won't change a lot, but should be kept in mind. Remove the double includes for libraries. I'd get rid of String variables and substitute them with char[] - especially in frequently called functions. String char[] Since TOKEN is a rather common term and #definesare not really good with scopes, I'd at least change that to something unique or even better replace your#define`s with TOKEN #define are not really good with scopes, I'd at least change that to something unique or even better replace your const char myToken[] = "...."; This way you'd get an error and not just a (unseen) warning in case of collisions. Slow down your loop() a bit to give the MQTT broker some slack loop() For formatting posts, these tripple grave accent need to be all alone (with the optional coding scheme) on their lines to do the formatting rightLike this ```cpp // no blanks or other text before or after ``` I have corrected that in your post already. @benjiiiiuk I would recommend not using the MQTT library and instead use the legacy library. I modified your code for you to test out. Start over using this code and enter your Token: // This #include statement was automatically added by the Particle IDE. #include <Ubidots.h> // This #include statement was automatically added by the Particle IDE. #include "Ubidots/Ubidots.h" #define TOKEN "Bens Token" // Put here your Ubidots TOKEN #define DATA_SOURCE_NAME "BensTest" Ubidots ubidots(TOKEN); // A data source with particle name will be created in your Ubidots account void setup() { Serial.begin(115200); ubidots.setDatasourceName(DATA_SOURCE_NAME); //This name will automatically show up in Ubidots the first time you post data. } void loop() { float value1 = analogRead(A0); float value2 = analogRead(A1); ubidots.add("Humidity", value1); // Change for your variable name ubidots.add("Temp", value2); ubidots.sendAll(); // Send fuel gauge data to your Ubidots account. delay(10000); //Delay sending again for 10 seconds. } Then you will need to add the legacy Ubidots library to this sketch by following the screen shots below: Choose the name of the sketch you just created so we add the library to only the new code sketch: The code should successfully compile and you should be to flash the code to a Photon or Electron. Once you send a publish even to Ubidots it should automatically create new sources using the same names in your code called BensTest, and then individual variables called Humidity & Temp. Hopefully that works, it's working for me That's great thank you very much. i know have it working with your above explanation. i kind of understand what is going on. i have also manage to incorporate this innto my website. if you use the link given by ubidots does that mean your token is not exsposed? Thanks again @benjiiiiuk When you share your dashboard using the share code your token is not exposed so nothing to worry about there. You can also setup email, or SMS alerts triggered by thresholds if you desire which is handy. Share a screen shot of your dashboard if your willing. Always nice to see how others are using stuff. Here is a screen shot of my dashboard. I found Ubidots easy to use, thanks for the advice! it seems like a good way to get your data to the web without exposing the token. Thanks again for everyone's help and advice. Whether someone has an example of how to use Ubidots with webhook?I know there is an example of only one variable but want to know how it can be used with multiple variables. @ScruffR @aguspg or @Metavix maybe able to help with the Ubidots + webhook setup. I've never used it that way before but know it's possible.
https://community.particle.io/t/wordpress-and-subscribe/29278/9
CC-MAIN-2017-26
refinedweb
1,406
61.77
and give me more feedback regarding my code for this, I'm really struggling with this code. I've even watched you tube tutorials on this but I don't seem to be getting it right, I am getting a SyntaxError.' } def morsify(string): ans = ' ' for char in string: ans.append (morse_dict[char]) return "".join(ans) print morsify("CAT") 4 Answers Chris FreemanTreehouse Moderator 67,736 Points Hey Danielle Murray, you’re on the correct path, here are the fixes: - The challenge wanted you to expand the current function and not create a new function of your own. So the defline can be removed. - To use appendthen join, the object ansneeds to be an empty list [], not a string. - Since, “string” is from removed defline, the forloop needs to use word - the joinstring needs to be a dash - instead of an empty char. - the Post back if you need more help. Good luck!!! Danielle Murray4,947 Points Hi Chris Freeman i made these changes but am i still missing something because i am only getting dash as the output instead of dash-dot-dash-dot-dot-dot. This is my code: ans = [] for char in word: ans.append (morse_dict[char]) return " ".join(ans) Chris FreemanTreehouse Moderator 67,736 Points The return is indented into the for loop. This causes the loop to return at the end of the first iteration. Decrease the indentation of the return statement so the for loop can complete. Danielle Murray4,947 Points How do I indent the return statement properly and what else is wrong with my code? I've been stuck on this task for very long. Chris FreemanTreehouse Moderator 67,736 Points All the code should be inside the morse_code function. def morse_code(word): # ... ans ... for char... ans.append.... return ... remember: - no space before this paren append( - be sure to joinusing the proper character. - not space Danielle Murray4,947 Points Hi Chris Freeman No matter how i indent the return line it gives me back that my return statement is outside of the for loop. I feel like I am making a very simple mistake that I'm missing but I don't understand why. Chris FreemanTreehouse Moderator 67,736 Points Great! Way to stick with it! Feel free to tag me on other questions! Danielle Murray4,947 Points Danielle Murray4,947 Points I've even used the hints but they are of no help neither are the lecture videos.
https://teamtreehouse.com/community/hi-could-you-please-help-me-and-give-me-more-feedback-regarding-my-code-for-this-im-really-struggling-with-this-code
CC-MAIN-2021-49
refinedweb
407
73.17
Introduction Writing Windows services never was an easy thing, and it remains quite rarely explained in contemporary technical programming manuals, compared to other types of Windows applications in spite the fact that the family of Windows NT-based operation systems already has a long history at the moment. A more confusing thing is that this complex but really interesting type of Windows application has not been honored with its own object-oriented library from Microsoft. Of course, ATL service cannot be considered the one, being merely a very specific type of COM server but not a sort of general solution for Windows services. The library proposed here seems like an attempt to change the situation, at least partially, and close the gap a bit. The library, of course, looks very brief and modest, with no doubt. Nevertheless, I hope it will encourage other developers and possibly draw their attention to further improvement.. Articles The articles unveil the details and abilities of SFL usage in sequential manner. They will approach the most complex issues at the very end. - Part I: The minimal application is dissected to let you understand the general structure of an SFL application. This application implements the most basic type of native Windows service . Windows NT-style service. - Part II: The extended service type implementation is described there. This service is the Windows 2000 style. - Part III: The SFL architecture is described in the article, showing the library class hierarchy and giving a description of framework entry points. - Part IV: Additional abilities of service implementation are explained in the final article. It shows how to implement a multi-service application; service clone creation is presented there as well. This part also includes a description for supplementary classes intended for controlling the services. Minimal SFL Application Here's the full C++ code for a Windows NT-style service. Note: The particular service style to be implemented depends on a version of the service control map. The SFL_BEGIN_CONTROL_MAP macro corresponds to the service to be used with Windows NT versions not greater than 4.0. To implement a service of a newer (Windows 2000/XP/2003) style, the SFL_BEGIN_CONTROL_MAP_EX macro must be used (see Part II). Note: To understand the difference between service styles, please refer to MSDN articles, HandlerEx and RegisterServiceCtrlHandlerEx in particular. #include "stdafx.h" #include "SflBase.h" class CMyService: public CServiceBaseT<CMyService, SERVICE_ACCEPT_STOP|SERVICE_ACCEPT_PAUSE_CONTINUE> { SFL_DECLARE_SERVICECLASS_FRIENDS(CMyService) SFL_BEGIN_CONTROL_MAP(CMyService) SFL_HANDLE_CONTROL_STOP() SFL_HANDLE_CONTROL_PAUSE() SFL_HANDLE_CONTROL_CONTINUE() SFL_END_CONTROL_MAP() DWORD OnStop(DWORD& /*dwWin32Err*/, DWORD& /*dwSpecificErr*/, BOOL& bHandled) { bHandled = TRUE; return SERVICE_STOPPED; } DWORD OnPause(DWORD& /*dwWin32Err*/, DWORD& /*dwSpecificErr*/, BOOL& bHandled) { bHandled = TRUE; return SERVICE_PAUSED; } DWORD OnContinue(DWORD& /*dwWin32Err*/, DWORD& /*dwSpecificErr*/, BOOL& bHandled) { bHandled = TRUE; return SERVICE_RUNNING; } }; SFL_BEGIN_SERVICE_MAP(CSimpleServiceApp) SFL_SERVICE_ENTRY2(CMyService, 0, "My Service") SFL_END_SERVICE_MAP() This code has been compiled and linked as a regular Windows console application (please make sure you read and remember that, because it's the key point for SFL code), it gives you the simplest service executable module. Although this service does nothing useful, it's a full Windows service that is able to be run, paused, continued, and stopped (being registered properly, of course, but the issues of service registration and un-registration are beyond this article's scope). It's easy to see that the whole application is made of a couple of classes and three map declarations. In the aspect of usage of class templates and maps, SFL is similar to ATL. Header Files Inclusion The SflBase.h file is mandatory for an SFL application and must either be included explicitly into a compiled C++ unit or implicitly contained by stdafx.h (precompiled header). Please, in any case, make sure the stdafx.h includes windows.h to let your application compile successfully. Service Class Declaration Your service class must be derived from the template class CServiceBaseT<>, which implements the main Windows service logic and corresponding behavior. Your class typically implements only control event handlers, such as OnStop(), OnPause(), and OnContinue() in this sample, and the framework takes care of all other aspects related to interaction between the service and SCM. The core of any SFL-based service class is a service control map. This is the place where you define what control codes will be processed by the service. In fact, the control map in whole implements a service Handler(Ex) function. Note: And based precisely on this latter point, this is the place where you declare the style of service—Windows NT or Windows 2000—to be implemented. This example, as you may remember, implements the basic Windows NT style because the SFL_BEGIN_CONTROL_MAP macro implements an old-style Handler version of a control handler function. The map consists of a number of SFL_HANDLE_CONTROL_XXX macros within the SFL_BEGIN_CONTROL_MAP and SFL_END_CONTROL_MAP macro braces. Of course, you've met this coding style before, in MFC or ATL projects at least. The list of system control codes accepted by your service is defined by the second template parameter passed to the base class. In this sample, it's a combination of SERVICE_ACCEPT_STOP and SERVICE_ACCEPT_PAUSE_CONTINUE. Once you declare that your service accepts the Stop and Pause/Continue controls, the corresponding codes will be passed from SCM to the service for handling. As you may see, the service contains appropriate control entries in its control map. Note: Although this sample does not implement handling for custom control codes, I'll give you a hint on how you handle these: The code in range 128 to 255 must be processed with the entry:SFL_HANDLE_CONTROL(<control code>, <handler name>) The control map binds the control codes to appropriate handlers. Each handler has the same prototype: DWORD (CMyService::*)(DWORD& dwWin32Err, DWORD& dwSpecificErr, BOOL& bHandled); The dwWin32Err parameter is intended to pass a Windows error code back to a framework in case some Windows API critical error happens during control code handling. In case some custom service logic is broken during control code handling, you can pass the error code specific for your service via dwSpecificErr. The dwWin32Err code is ignored by the framework in case the dwSpecificErr is not zero. Note: Remember, the error codes (both Win32 and service-specific) matter only in case of service stopping. Under any other circumstances, they are ignored by the system. The code returned by the handler will be interpreted as the service state code to be set for your service in case the bHandled variable was set to TRUE at the moment of return from the control handling code. Service Map Declaration The service map ties together the service application class and the service class. Here is a simple service application class declaration (that could be found in SflBase.h): namespace SFL { typedef class _TSimpleApp: public CServiceAppT<_TSimpleApp> { } CSimpleServiceApp; } This implies that the application class doesn't require any additional functionality other than the one provided by the base application class template. Note: Of course, your application can implement richer behavior. In this case, you must pass your specific application class name instead of CSimpleServiceApp. The technique of extending of application class will be explained later, in Part IV. To specify your intention to include your service class implementation in the current service application, you place the SFL_SERVICE_ENTRYxxx macro between the SFL_BEGIN_SERVICE_MAP and SFL_END_SERVICE_MAP macro braces. In this particular case, your entry looks like this: SFL_SERVICE_ENTRY2(CMyService, 0, "My Service") As you can understand, the entry states that your service must be identified as "My Service" in the system service list. Well, that's all with the application definition. Now, you can build your project. Some Notes on Service Installation You have built your service application. No doubt, you have a strong desire to make sure it works. But, it is still not registered as a service in your system, and you can see nothing that could help you with this, right? Okay, I dare to be frank with you. This primitive service application does not deserve its own installation code. The great thing is that you never need anything but a special utility from Microsoft, called SC.EXE, to install and delete your service application. The command line doing these actions may look as follows: sc.exe create "My Service" binPath= "<full_app_path>" sc.exe delete "My Service" Warning: Please remember, all command spaces matter! Make sure you placed the actual fully qualified path to your executable instead of the corresponding placeholder. The first command installs the service MyService into your Windows system as a service-own process. The second one removes the previously installed MyService service. What's Next? The next topic, to be presented in Part II, will cover a case of a newer service style adopted since Windows 2000. You'll meet the new (extended) version of the service control map and take a more intense look at the service initialization phase.
http://mobile.codeguru.com/cpp/w-p/system/services/article.php/c14379/SFL-20-Service-Framework-Library-for-Native-Windows-Service-Applications-Part-1.htm
CC-MAIN-2017-43
refinedweb
1,456
53.21
#include <wx/bitmap.h>). Note that all available wxBitmapHandlers for a given wxWidgets port are automatically loaded at startup so you won't need to use wxBitmap::AddHandler./pointers: wxNullBitmap Default constructor. Constructs a bitmap object with no data; an assignment or another member function such as Create() or LoadFile() must be called subsequently. Copy constructor, uses reference counting. To make a real copy, you can use:::Bitmap->newFromBits(bits, width, height, depth).+. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Creates a bitmap from XPM data. wxPerl Note: In wxPerl use Wx::Bitmap->newFromXPM(data). Loads a bitmap from a file or resource.. Destructor. See Object Destruction for more info. If the application omits to delete the bitmap explicitly, the bitmap will be destroyed automatically by wxWidgets when the application exits. Adds a handler to the end of the static list of format handlers. Note that unlike wxImage::AddHandler, there's no documented list of the wxBitmapHandlers available in wxWidgets. This is because they are platform-specific and most important, they are all automatically loaded at startup. If you want to be sure that wxBitmap can load a certain type of image, you'd better use wxImage::AddHandler. Deletes all bitmap handlers. This function is called by wxWidgets on exit. Returns disabled (dimmed) version of the bitmap. This method is not available when wxUSE_IMAGE == 0. Creates an image from a platform-dependent bitmap. This preserves mask information so that bitmaps and images can be converted back and forth without loss in that respect. Creates the bitmap from an icon. Creates a fresh bitmap. If the final argument is omitted, the display depth of the screen is used. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Create a bitmap compatible with the given DC, inheriting its magnification factor. Create a bitmap with a scale factor, width and height are multiplied with that factor. Finds the handler with the given name. Finds the handler associated with the given extension and type. Finds the handler associated with the given bitmap type. Gets the colour depth of the bitmap. A value of 1 indicates a monochrome bitmap. Returns the static list of bitmap format handlers. Gets the height of the bitmap in pixels. Returns the size of the bitmap in pixels. Returns a sub bitmap of the current one as long as the rect belongs entirely to the bitmap. This function preserves bit depth and mask information. Gets the width of the bitmap in pixels. Adds the standard bitmap format handlers, which, depending on wxWidgets configuration, can be handlers for Windows bitmap, Windows bitmap resource, and XPM. This function is called by wxWidgets on startup. Adds a handler at the start of the static list of format handlers. Returns true if bitmap data is present. Loads a bitmap from a file or resource. Loads a bitmap from the memory containing image data in PNG format. This helper function provides the simplest way to create a wxBitmap from PNG image data. On most platforms, it's simply a wrapper around wxImage loading functions and so requires the PNG image handler to be registered by either calling wxInitAllImageHandlers() which also registers all the other image formats or including the necessary header: and calling in your application startup code. However under OS X this function uses native image loading and so doesn't require wxWidgets PNG support. Finds the handler with the given name, and removes it. The handler is not deleted. Saves a bitmap in the named file. Sets the depth member (does not affect the bitmap data). Sets the height member (does not affect the bitmap data). Sets the width member (does not affect the bitmap data).
https://docs.wxwidgets.org/3.1.3/classwx_bitmap.html
CC-MAIN-2021-43
refinedweb
641
60.01
In this section, we will learn to write methods which can return something. First see an example: class Add{ public static int add_int(int x,int y){ return x+y; } public static void main(String[] args){ int z; z = add_int(2,4); System.out.println(z); } } add_int(int x,int y) - This part of code should be clear that 'add_int' is the name of method and it is taking two parameters of type int. int add_int(int x,int y) - 'int' before the method name means that this method will return an integer. i.e. we will get some integer value whenever we will call this method. return x+y; - This part returns an integer having the value 'x+y' since our method has to return an integer. Now come to the main method int z = add_int(x+y); - We are calling 'add_int' method by passing two integers 2 and 4. And this will return 'x+y' i.e. '2+4'. So, this statement is equivalent to: int z = 2+4; or int z = 6; (6) returned by 'add_int(2,4)'. Now let's see an example: class Area{ public static double getArea(double x,double y){ return x*y; } public static void main(String[] args){ double z = getArea(10.2,23.4); System.out.println(z); } } This method is returning 'double'( x*y )'. And the main method statement is equivalent to: double z = 10.2*23.4; or double z = 238.67999999999998; Calling a method inside another Yes, we can call a method inside another method. We have already done this. We were calling our functions inside the main function. Now look at an example in which there are two user-defined functions. And we will call one inside another. class Div{ public static int div_2(int a){ if(a%2==0){ return 1; } else{ return 0; } } public static void div_6(int b){ if( div_2(b)==1 && b%3 == 0 ){ System.out.print("Yes, the number is divisible by 6.\n"); } else{ System.out.print("No, the number is not divisible by 6.\n"); } } public static void main(String[] args){ div_6(12); div_6(25); } } No, the number is not divisible by 6. A number is divisible by 6 if it is divisible by both 2 and 3. We have a method div_2 which will return 1 if the given number is divisible by 2. Another method that we have defined is dev_6 which calls div_2 inside itself. if( div_2(b)==1 && b%3 == 0 ) - So if div_2 will returns method within the same method.. import java.util.*; class Fact{ public static int factorial( int a ) /* method */ { if( a == 0 || a == 1) { return 1; } else { return a*factorial(a-1); } } public static void main(String[] args) { Scanner s = new Scanner(System.in); int n; System.out.print("Enter number\n"); n = s.nextInt(); int fact = factorial(n); System.out.println("Factorial of "+n+" is "+fact); } } 4 Factorial of 4 is 24 Here, if the integer 'n' is 0 or 1, return 1; returns the factorial value as 1. Otherwise if the integer 'n' is greater than 1, then n*factoria;(n-1) multiplies n with the factorial of 'n-1'. For example, if the number is 4, then 'n*factorial(n-1)' implies '4*factorial(3)'. So by writing 'n*factorial(n-1)', we are again calling the method 'factorial' inside itself but this time the argument is 'n-1' i.e. factorial(n-1). So, first give 0 to the method factorial, it will give us 1. Again give 1, it will return 1 again. On giving 2, the condition of if becomes false and hence the body of else gets executed and thus 'n*factorial(n-1)' i.e. 2*factorial(1) gets returned. This again calls 'factorial' ( as factorial(1) ) which returns 1. Thus, the result is 2*factorial(1) = 2*1 i.e. 2. Now, try this with 3. This time it has to return 3*factorial(2). Again it calls factorial(2) and factorial(2) returns 2*factorial(1). So, the final expression returns 3*2*factorial(1), which is 3*2*1 ( as factorial(1) will return 1 ). For 4, the returning expression is 4*factorial(3). Calling factorial(3) will give us 4*3*factorial(2) ( as factorial(3) will return 3*factorial(2) ). Again calling factorial(2), the final expression will be 4*3*2*factorial(1). Thus, at last, the method will return 4*3*2*1 or 24. Now after this, you need to practice to get your concepts more strong and clearer in programming. Programming is a skill best acquired by practice and example rather than from books. -Alan Turing
https://www.codesdope.com/java-get-values-from-method/
CC-MAIN-2017-43
refinedweb
781
67.15
TWC9: SQL Azure, MVC 2, New Channel 9 Content - Posted: Oct 02, 2009 at 5:48 PM - 52,776. - CodePlex - SQL Azure Explorer project gives you great built-in tools for Visual Studio to visually explore your SQL Azure databases. This is my interesting... When Channel 9 supply those documents or more details about them ??? The ExpandoObject (agree the name is cool, reminiscent of dynamic Javascript objects) is nice for very very simple XML, but it's still not quite suited for XML because you can only use simple names for it and it has no notion of namespaces either. It's not quite suited for the infoset datamodel. Still, for simple grammars that don't use names like xsl:value-of, it's pretty usable. How do you dereference a member in a language that looks like the previous example, or like this (now not only talking about ExpandoObject): Customer."Customer Type". A dictionary will easily support that: Customer["Customer Type"], but I get a compiler error trying to use the indexing operator on an ExpandoObject. Hi TinyA. Could you explain what you mean with "those documents"? Sorry for my mistake, I can find what I want here : Nice day littleguru Maybe a little bit off topic: Why do you produce 16:9 videos when you encode the video in 4:3 format? Most people i know have 16:10 screens today, even if you have an old tube you can watch 16:9 videos - believe it or not Thanks for a great show! Agree, we're switching the player to be 16:9 in the future and the WMV High version is 16:9 native. Our encoding process does letterbox automatically since our current player is a 4:3 ratio. As part of "Revolution 9", we will be changing the player to dynamically resize based on aspect ratio. Oh and thanks for the support!! Good to see that you guys took up my advice and no longer put the laptop on your lap,.. Trouble with the swimmers Brian? LOL! No comment. (written with my laptop... which is on my lap...) Remove this comment Remove this threadclose
http://channel9.msdn.com/Shows/This+Week+On+Channel+9/TWC9-SQL-Azure-MVC-2-New-Channel-9-Content
CC-MAIN-2014-42
refinedweb
358
72.05
Red Hat Bugzilla – Full Text Bug Listing strace-4.5.14-0.EL4.1.x86_64 On x86-64, with a 32-bit binary. 1. gcc -m32 -o writex writex.c 2. sudo strace -o foo -v ./writevx 127.0.0.1 22 [ Process PID=4407 runs in 32 bit mode. ] umovestr: Input/output error umovestr: Input/output error Created attachment 142573 [details] writevx.c Even: #include <sys/uio.h> int main (void) { struct iovec v[2]; v[0].iov_base = "abc"; v[0].iov_len = 3; v[1].iov_base = "\n"; v[1].iov_len = 1; writev (1, v, 2); return 0; } is enough to reproduce it. 64-bit strace just doesn't handle 32-bit {read,write}v. Created attachment 142820 [details] strace-biarch-iovec.patch Patch to fix it. biarch support in current strace is clearly a hack, but until if ever that is rewritten into a framework that can naturally cope with this, I wrote a hack solution just for struct iovec. Fixed upstream. these bugs are fixed upstream in the coming 4.5.15 release This bug should be fixed in strace-4.5.15-1.el4 rp already be fixed in the RHEL4.5 strace version (4.5.15-1.el4.1). Please verify and close. Fixed already some time ago in Rawhide: * Tue Jan 16 2007 Roland McGrath <roland@redhat.com> - 4.5.15-1 - biarch fixes ([...], #218043) therefore it is present in RHEL-4.5 strace-4.5.15-1.el4.1. upstream: 2006-12-10 Dmitry V. Levin <ldv@altlinux.org> Add biarch support for "struct iovec". * defs.h (personality_wordsize): Add. * io.c [HAVE_SYS_UIO_H] (tprint_iov): [LINUX && SUPPORTED_PERSONALITIES > 1] Handle 32-bit personality. * util.c [HAVE_SYS_UIO_H] (dumpiov): [LINUX && SUPPORTED_PERSONALITIES > 1] Likewise. Patch from Jakub Jelinek. Fixes RH#2184.
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=218043
CC-MAIN-2017-26
refinedweb
291
80.99
Woo hoo! Just got that same thrill. Python + MXS Hi, Again thanks ehulser for sharing the dlx. I’m having the same problem like what cgBartosz has encountered. I’m currently running python 2.5.1. It seems like you can import all the built in modules without any error within max. But when you try to import other library it just return undefined as thou it can’t find the path. However within python there’s no problem importing using: >>> import sys >>> print sys.version 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] >>> import os >>> os <module ‘os’ from ‘C:\Python25\lib\os.pyc’> Thanks again for sharing :). I’m sure I posted a reply to this a few days ago, oh well… Basically, I’ve not tried this under python, but have been investigating the possibility using Java. This is what I’ve found… Max can talk to “OLE Auutomation Servers” … commonly known as COM servers… So, in order to get this to work, you will need to create your own COM server, register it under windows and, using Max’s “OLE Client” functions, connect and communicate with it… There is some sample code on the install disk, but you will also want to look up “OLE Client” and “Running the OLE Demo” for more info. I hope this helps Shane Ok i have tried with python 2.4.4 and it seems to work fine now without any problem on importing modules. I guess that the current python plugins is not compatible with 2.5.1. With 2.5.1 the sys version will always revert back to the built-in version which is 2.4.3 (#69) instead of the following result that i’m getting from max with 2.4.4. sys = python.import "sys" <module 'sys' (built-in)> sys.version "2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)]" os = python.import "os" <module 'os' from 'C:\Python24\Lib\os.py'> Not sure if i’m right on this. Hope it helps on anyone else having the same problem. But at least i get to start working on more python tools. This probably has to do with the plugin being compiled with Python 2.4 libs. Thats the version we’re running and the code was compiled for - do you have both python versions installed? Try flushing the sys.path variable to remap to c:/Python25 instead of 24 which reminds me - I still haven’t mapped setting attributes…will have to get on that… for instance, cant do this: sys = python.import “sys” sys.path = #() – That’ll crash…though it shouldn’t I have both 2.5.1 and 2.4.4 installed. And yup can’t really remap for now as it will just return -- Unknown property: "path" in <module 'sys' (built-in)> I was trying to use MySQLdb module with blurPython in max. But seems like the connection string can’t map correctly yet too. Just to let you know before hand This works in python IDLE import MySQLdb MySQLdb.connect(host="localhost",user="root",passwd="Mypassword",db="MyDB") while this doesn’t in maxscript pysql = python.import "MySQLdb" <module 'MySQLdb' from 'C:\Python24\lib\site-packages\MySQLdb\__init__.pyc'> pysql.connect <function Connect at 0x103E0030> conn = pysql.connect(host="localhost",user="root",passwd="Mypassword",db="MyDB") -- Syntax error: at ),, expected <factor> -- In line: conn = pysql.connect(host="localhost",u Not sure if i’m doing something wrong thou. Yeah MySQLdb was the big reason I wanted python integration more seamless without using ODBC and OLE. Hi Guys, Having had just returned from holidays this is awesome, I definitely have to shout Eric some beers! awesome work! I have only had a quick chance to play with it so far but it looks very impressive… A quick question, what is the likely hood of a 64bit build? Again thanks to Eric and blur for sharing this with us for those of us who have large and complex python based pipelines this will be fantastic to better integrate max… Thanks to Paul for hosting as well! Cheers Dave Teirz: Keep in mind you are actually running MAXScript code that wraps Python code, so it still goes through the MAXScript interpreter. In MAXScript, you can’t call a function this way: pysql = python.import "MySQLdb" pysql.connect( host = "localhost", user = "root", passwd = "Mypassword", db = "MyDB" ) It has to be in proper maxscript syntax: pysql = python.import "MySQLdb" pysql.connect host:"localhost" user:"root" passwd:"Mypassword" db:"Mydb" right now I don’t have it mapping optional variables though, so for right now you cant pass in optional variables like that, this should work for right now though: pysql = python.import "MySQLdb" pysql.connect "localhost" "root" "Mypassword" "Mydb" Important thing though, and I know this is kinda confusing, you still have to follow MAXScript syntax rules even though its executing Python methods. specxor: Hope it helps, we have a Python based pipe and are hoping this’ll help tie it all together too. I’ll include a x64 build in the next release, also a max8 release. We’ve needed both here also. So, to recap: Known issues: No setattr implemented No optional method variables implemented Anything else? Thanks Eric again for clearing my confusion. It all works perfectly now, just like a turbo button for max scripting. Connecting to database is so much cleaner and totally no problem getting the cursor to work. I will try to feedback if there’s any other issue as i am still working on the python based pipe for my studio. By the way, i’m using this plugin with Max 2008. So we can probably assume that this plugin is ready for max 2008 as well. And the best thing is that at least i won’t have to worry the complains from any Maya TD that they can’t use python as easy in max now. I would love to hear a brief description on what you are using Python for in your pipelines and what you are doing with it. I’m slowely writting the spec for a pipeline and wonder where others have gone before me. This plugin should work for max9+ until they switch compilers like from max8 to max9. (theoretically) Though the plugin itself is simple and should easily translate to future versions. I’m hoping this should bridge a large gap in 3dsMax integration as a lot of studio’s I know of use Python in their pipeline. We started using Python when we added XSI into our pipeline as it was the most powerful of the scripting languages available for XSI, and have since expanded much of our pipe to using Python as the core. Since we have to support 3dsMax and XSI, we keep as much information that we can share and keep in a single library as we can, so naming conventions, path templates, XML formatting, and eventually connections into production tracking and asset management would run as separate python modules that we could now directly incorporate into 3dsMax. Before, we had to duplicate the functionality through a DLL or MAXScript library and it became a pain to manage. Mostly, I’m just lazy. We’re porting a tool to help with lip syncing from maxScript to python. Our own version of papagayo that sits just below the time line, if anyone’s familiar with that program. Python is siginifigantly faster at parsing the huge dictionary files and so far, wxPython is much faster than maxScript at drawing our (probably too complicated) interface. If it goes well, we’ll be porting all of our tools to python. We are still developing it to replace our old pipe for TV series. As my current main focus is TV series which normally has a very tight schedule, e.g. 1-4 episode per month depends on the budget and animation length, by having a proper customize asset management and production tracking system is very important to us. And sometimes we need to swap between maya and max, so it’s more practical to share the data/scripts and any other data e.g. selections sets, enforced naming convention etc to keep the consistantcy without the need to write some scripts twice. Even the XML formatting is extremely useful with the new version of FCP. It’s quite possible that i will be porting all the in house tools to python so those who are more familiar with maya can just duplicate the same function without wasting much development time. Another thing that i’m working on with python is the asset tracking. Whatever max9/2008 having now is still not as friendly for production. For example, there’s a lot of time i need to know this asset maxfile have merged in which model, with which material library loaded, with which rig, and which lighting master file, and done by which guy. I don’t like the idea of xref as there’s too much limitation, and thanks to PEN (yup here must credit you for your batch processing scripts :)). Keeping track of these without a proper system would really drive someone crazy. The last thing i wanna hear from my EP is “Which idiot has name this file as EpXX_ScXX_ShXXX_final_final which is not FINAL! Where’s the freaking latest file!” Hi Paul, Here is a quick description of what we are currently using python for in our pipeline…. We (I) have also been developing a replacement to back burner that will support multiple packages, generate previews and a bunch of other things that are pretty cool ;). Cheers Dave. Hi Dave, It seems like everyone is going towards the same direction. And i believe that the database you are running at had a gigantic and complex setup, at least while reference to what i’m still working on now, cause you have a much complicated pipe there. I’m interested on how you handle the artist on file managing. Does your artist create all the asset files thru your system, meaning they are totally isolate from e.g. window explorer to create files, or even the default save button in any application that’s involved in the pipe. Hi Teirz, It does seem like everyone or allot of studios are going down the python route… With assets, we have a mix between our software handling files but still allowing artists to be able to get at files manually should they need it. We have replaced all of the standard save, save selected, export etc… with our own tools that takes over from the user and creates entries in the database, all files are saved in there correct location based on the type of asset they are, for instance if it was a character or prop etc… The software handles all of this, we found its much easier if you keep a logical file structure so if artists need to for what ever reason they can still find what they are looking for with out to much effort. Hope that helped… Cheers Dave That is great info, thanks. I have done the same but I have done it with MXS and XML only at this point. Since I do most of my development for smaller companies these days I don’t always get the time to develop larger tools and systems. I’m looking at creating a flexible tracking system that I can impliment with clients as they need it instead of just reinventing the wheel each time.
http://forums.cgsociety.org/t/python-mxs/1066622?page=7
CC-MAIN-2019-22
refinedweb
1,944
70.43
This article has been excerpted from book "The Complete Visual C# Programmer's Guide" from the Authors of C# Corner.BufferedStream Class The BufferedStream class also extends the Stream class. Buffers, or cached blocks of data in memory, provide speed and stability to the process of reading or writing because they prevent numerous calls to the operating system. Buffered streams are used in conjunction with other streams to provide better read/write performance. The BufferedStream class can be used to either read data or write data but it cannot be used to perform both read and write operations together. The class has been optimized so that it maintains a suitable buffer at all times. When a buffer is not required, instead of slowing down the process, the class does not allocate any space in memory. File streams are already buffered and therefore a buffered stream is generally used to buffer network streams used in networking applications. MemoryStream Class A memory stream is created from an array of unsigned bytes rather than from a file or other stream. Memory streams are used as temporary, in-memory storage (temporary buffers) in lieu of creating temporary files. This stream is highly optimized for speed since the data is stored in memory and the processor can easily access it. Memory streams should be used to store frequently accessed data. The Read and Write methods of the MemoryStream class read and write from an internal buffer that is created when the memory stream is created. The example shown in Listing 6.9 uses the MemoryStream class to add a custom signature at the end of the specified file. Listing 6.9: MemoryStream Reading and Writing Example using System;using System.IO;using System.Text;public class MemStream{ public static void Main(string[] args) { //Check the number or arguments if (args.Length < 1) { Console.WriteLine("Usage: MemStream <sourcefile>"); return; } try { //Get the current date DateTime dt = DateTime.Now; string tag = "This file was signed on " + dt.ToShortDateString(); //Get a byte array from the string byte[] tagarray = System.Text.Encoding.ASCII.GetBytes( tag.ToCharArray()); //Construct a memory stream with the byte //array as a parameter MemoryStream mstream = new MemoryStream(tagarray); //Open a FileStream on the source file FileStream fout = new FileStream(args[0], FileMode.Open, FileAccess.Write); //Seek to the end of the file fout.Seek(0, SeekOrigin.End); Byte[] buffer = new Byte[tagarray.Length]; Console.WriteLine("Starting to write signature"); //Read the contents of the MemoryStream into a buffer int n = mstream.Read(buffer, 0, buffer.Length); //Write the buffer to the file fout.Write(buffer, 0, n); //Close the streams mstream.Close(); fout.Close(); Console.WriteLine("Signature Written"); } catch (IOException e) { Console.WriteLine("An IO Exception Occurred :" + e); } catch (Exception oe) { Console.WriteLine("An Exception Occurred :" + oe); } Console.ReadLine(); }}In this example, a MemoryStream object is created and a byte array–containing signature is stored in the memory stream's buffer. Then a file stream is opened on the source file and the Seek method is used to seek to the end of the file. Once positioned at the end of the file, the code gets the contents of the memory stream and writes the contents to the file stream. Character Streams Character streams treat data as a stream of characters. These streams are most useful if they contain data in a format readable by humans. ConclusionHope this article would have helped you in understanding BufferedStream, MemoryStream and CharacterStream Class in C#. See other articles on the website on .NET and C#. Working as a Software professional. ©2015 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/uploadfile/puranindia/bufferedstream-memorystream-and-characterstream-class-in-C-Sharp/
CC-MAIN-2015-11
refinedweb
602
58.18
If it: - Looks like it took 10 seconds to write. - Doesn't elicit a laugh from you. - Is not something you can picture someone laughing at. - Does not make any intelligible sense, and is not funny being unintelligent. - Is not contributing to ease of management of the Wiki as a whole (this includes pages such as this page itself, the VFD page, pages in the Special: namespace etc) - Exists primarily or entirely to link to an outside website. It may be a candidate for VFD. Write {{vfd}} at an easily-visible spot on the page and add the entry to Uncyclopedia:Pages for deletion. If a page is just vandalism, plagiarism, etc., you can add it to QuickVFD and it will go away faster.
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Deletion_Policy?diff=prev&oldid=266857
CC-MAIN-2015-32
refinedweb
124
74.79
Открыть конфигурационный файл #include <cfgopen.h>int cfgopen( const char *path,unsigned flags,const char *historical,char *namebuf,int nblen ); NULLto ignore this option. NULLto ignore this option. 0to ignore this option. libc: You can also include any combination of these bits in the value of flag: 0644, the file's user ID is set to the effective user ID of the process, and the group ID is set to the effective group ID of the process or the group ID of the file's parent directory (see chmod()). CFGFILE_EXCLand CFGFILE_CREATare set, and the file exists, cfgopen() fails. The check for the existence of the file and the creation of the file if it doesn't exist is atomic with respect to other processes attempting the same operation with the same filename. Specifying CFGFILE_EXCLwithout CFGFILE_CREAThas no effect. CFGFILE_WRONLYor CFGFILE_RDWR, the file length is truncated to zero and the mode and owner are left unchanged. CFGFILE_TRUNC has no effect on FIFO or block or character special files or directories. Using CFGFILE_TRUNCwith CFGFILE_RDONLYhas where node_name is the value you get by calling confstr() for CS_HOSTNAME.. A valid file descriptor if CFGFILE_NOFD isn't specified, a nonnegative value if CFGFILE_NOFD is specified, or -1 if an error occurs. ЗОСРВ «Нейтрино» confstr(), fcfgopen(), open()
https://help.kpda.ru/help/topic/ru.kpda.doc.os_ru/html/libraries/libc/c/cfgopen.html
CC-MAIN-2022-21
refinedweb
210
55.84
Problem Automatically Downloading Files With Wget You can see them, if you go to their web page. Each one is of the form: "NNN" is a unique level. Now if I do (say): % wget the download succeeds, but the file is saved with a name of "download.php?id=852". If I do: % wget --spider --debug" I can see from the debug messages, that the actual filename I want it saved as is in the "Content-Disposition:" line: DEBUG output created by Wget 1.10.2 on linux-gnu. --11:55:57-- => `download.php?id=852.1' Resolving grubbygames.com... 69.20.54.231 Caching grubbygames.com => 69.20.54.231 Connecting to grubbygames.com|69.20.54.231|:80... connected. Created socket 3. Releasing 0x00000000005576a0 (new refcount 1). ---request begin--- HEAD /pf_levels/download.php?id=852 HTTP/1.0 User-Agent: Wget/1.10.2 Accept: */* Host: grubbygames.com Connection: Keep-Alive ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Date: Sun, 25 Feb 2007 20:00:08 GMT Server: Apache/2.0.46 (Red Hat) Accept-Ranges: bytes X-Powered-By: PHP/4.3.2 Pragma: public Cache-Control: must-revalidate, post-check=0, pre-check=0 Content-Disposition: attachment; filename="duncan1.lvl" Connection: close Content-Type: lvl ---response end--- 200 OK Length: unspecified [lvl] Closed fd 3 200 OK Are their any command options that I can give to wget, to get it to save into the filename given on the "Content-Disposition:" line? I'm using wget 1.10.2. Update: Thanks to everybody who commented. As I wanted to write a Python script to grab all the user supplied custom levels within a given range, the solution by Stephen English was just perfect. Here's my complete script using a slight variation of his code: #!/usr/bin/env python # # Script to automatically download a range of Professor Fizzwizzle custom # levels from the Grubbygames website. # import urllib2 import sys startLevel = 1 endLevel = 853 baseUrl = "" def main(): for i in range(startLevel, endLevel+1): url = baseUrl + str(i) print "url: `%s`" % url level = urllib2.urlopen(url) f = open(level.headers["Content-Disposition"].split("\"")[1], "w") f.write(level.read()) f.close() if __name__ == "__main__": main() [Technorati Tag: File Downloads] ( Feb 25 2007, 12:42:12 PM PST ) [Listen] Permalink Comments [5] Posted by George Skuse on February 25, 2007 at 01:37 PM PST # Posted by no name on February 25, 2007 at 01:52 PM PST # import urllib2 import sys game = urllib2.urlopen(sys.argv[1]) f = open(game.headers["Content-Disposition"].split("\"")[1], "w") f.write(game.read()) f.close() stephen@sam:/tmp$ python moo.py Posted by Stephen English on February 25, 2007 at 01:53 PM PST # Posted by MaoP on February 25, 2007 at 02:36 PM PST # There's always more than one way to skin a cat. I went with a variation of the solution by Stephen English, and updated the blog post accordingly. Posted by Rich Burridge on February 25, 2007 at 03:34 PM PST #
http://blogs.sun.com/richb/entry/problem_automatically_downloading_files_with
crawl-002
refinedweb
513
67.86
#include <numaif.h> int mbind(void *addr, unsigned long len, int mode, unsigned long *nodemask, unsigned long maxnode, unsigned flags); Link with -lnuma.() only has an effect or MPOL_PREFERRED. All policy modes except MPOL_DEFAULT require the caller to specify via the nodemask argument, the node or nodes to which the mode applies. The mode argument may also include an optional mode flag . The supported mode flags are: nodemask points to a bitmask of nodes containing. The MPOL_DEFAULT mode requests that any nondefault policy be removed, restoring default behavior. When applied to a range of memory via mbind(), this means to use the process policy, which may have been set with set_mempolicy(2). If the mode of the process. The MPOL_BIND mode specifies. The MPOL_INTERLEAVEMB or bigger with a fairly uniform access pattern. Accesses to a single page of the area will still be limited to the memory bandwidth of a single node. MPOL_PREFERRED. This is the only way to specify "local allocation" for a range of memory via mbind(). If MPOL_MF_STRICT is passed in flags and policy is not MPOL_DEFAULT, then the call will fail nondefault policy. To select explicit "local allocation" for a memory range, specify a mode only available on Linux 2.6.16 and later.
http://www.makelinux.net/man/2/M/mbind
CC-MAIN-2014-41
refinedweb
208
66.23
Hello once again, I'm trying to write a program that reads input until a '@' is typed in...also, I want to echo the input only without digits, and also convert any uppercase letters to lowercase and vice versa...I have this so far but I'm getting a couple of errors and I'm curious why....any help would be great. Thanks-Chap Code:#include <iostream> #include <cctype> using namespace std; int main() { char ch; cin.get(ch); while (ch != '@') { cout << ch; if (isdigit(ch)) cout << " "; else if (tolower(ch)) cout << ch; else (toupper(ch)) cout << ch; cin.get(ch); } cout << "Done\n"; cin.get(); cin.get(); return 0; }
http://cboard.cprogramming.com/cplusplus-programming/56747-help-program.html
CC-MAIN-2014-42
refinedweb
110
78.04
This command gets the list of attributes on a node or node type and creates nodes of type TdnAffect, one for each attribute, that are connected iff the source node’s attribute affects the destination node’s attribute. In query mode, return type is based on queried flag. Derived from mel command maya.cmds.affectedNet Example: import pymel.core as pm # Create a network of this transform node's attributes that affect # each other pm.affectedNet( 'transform1' ) # Create a network all of the transform shared attributes that affect # each other pm.affectedNet( t='transform' ) # Create a network of the revolve and shape node type attributes that # affect each other pm.affectedNet( t='revolve', t='shape' )
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.general/pymel.core.general.affectedNet.html#pymel.core.general.affectedNet
crawl-003
refinedweb
115
58.38
The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program. Introduction It is nearly impossible to build websites that work perfectly the first time without errors. For that reason, you need to test your web application to find these errors and work on them proactively. In order to improve the efficiency of tests, it is common to break down testing into units that test specific functionalities of the web application. This practice is called unit testing. It makes it easier to detect errors because the tests focus on small parts (units) of your project independently from other parts. Testing a website can be a complex task to undertake because it is made up of several layers of logic like handling HTTP requests, form validation, and rendering templates. However Django provides a set of tools that makes testing your web application seamless.. You will run these tests, analyze their results, and learn how to find the causes of failing tests. Prerequisites Before beginning this tutorial, you’ll need the following: Step 1 — Adding a Test Suite to Your Django Application A test suite in Django is a collection of all the test cases in all the apps in your project. To make it possible for the Django testing utility to discover the test cases you have, you write the test cases in scripts whose names begin with test. In this step, you’ll create the directory structure and files for your test suite, and create an empty test case in it. If you followed the Django Development tutorial series, you’ll have a Django app called blogsite. Let’s create a folder to hold all our testing scripts. First, activate the virtual environment: - cd ~/my_blog_app - . env/bin/activate Then navigate to the blogsite app directory, the folder that contains the models.py and views.py files, and then create a new folder called tests: - cd ~/my_blog_app/blog/blogsite - mkdir tests Next, you’ll turn this folder into a Python package, so add an __init__.py file: - cd ~/my_blog_app/blog/blogsite/tests - touch __init__.py You’ll now add a file for testing your models and another for testing your views: - touch test_models.py - touch test_views.py Finally, you will create an empty test case in test_models.py. You will need to import the Django TestCase class and make it a super class of your own test case class. Later on, you will add methods to this test case to test the logic in your models. Open the file test_models.py: Now add the following code to the file: ~/my_blog_app/blog/blogsite/tests/test_models.py from django.test import TestCase class ModelsTestCase(TestCase): pass You’ve now successfully added a test suite to the blogsite app. Next, you will fill out the details of the empty model test case you created here. Step 2 — Testing Your Python Code In this step, you will test the logic of the code written in the models.py file. In particular, you will be testing the save method of the Post model to ensure it creates the correct slug of a post’s title when called. Let’s begin by looking at the code you already have in your models.py file for the save method of the Post model: - cd ~/my_blog_app/blog/blogsite - nano models.py You’ll see the following: ~/my_blog_app/blog/blogsite/models.py class Post(models.Model): ... def save(self, *args, **kwargs): if not self.slug: self.slug = slugify(self.title) super(Post, self).save(*args, **kwargs) ... We can see that it checks whether the post about to be saved has a slug value, and if not, calls slugify to create a slug value for it. This is the type of logic you might want to test to ensure that slugs are actually created when saving a post. Close the file. To test this, go back to test_models.py: Then update it to the following, adding in the highlighted portions: ~/my_blog_app/blog/blogsite/tests/test_models.py from django.test import TestCase from django.template.defaultfilters import slugify from blogsite.models import Post class ModelsTestCase(TestCase): def test_post_has_slug(self): """Posts are given slugs correctly when saving""" post = Post.objects.create(title="My first post") post.author = "John Doe" post.save() self.assertEqual(post.slug, slugify(post.title)) This new method test_post_has_slug creates a new post with the title "My first post" and then gives the post an author and saves the post. After this, using the assertEqual method from the Python unittest module, it checks whether the slug for the post is correct. The assertEqual method checks whether the two arguments passed to it are equal as determined by the "==" operator and raises an error if they are not. Save and exit test_models.py. This is an example of what can be tested. The more logic you add to your project, the more there is to test. If you add more logic to the save method or create new methods for the Post model, you would want to add more tests here. You can add them to the test_post_has_slug method or create new test methods, but their names must begin with test. You have successfully created a test case for the Post model where you asserted that slugs are correctly created after saving. In the next step, you will write a test case to test views. Step 3 — Using Django’s Test Client In this step, you will write a test case that tests a view using the Django test client. The test client is a Python class that acts as a dummy web browser, allowing you to test your views and interact with your Django application the same way a user would. You can access the test client by referring to self.client in your test methods. For example, let us create a test case in test_views.py. First, open the test_views.py file: Then add the following: ~, 200) The ViewsTestCase contains a test_index_loads_properly method that uses the Django test client to visit the index page of the website (, where your_server_ip is the IP address of the server you are using). Then the test method checks whether the response has a status code of 200, which means the page responded without any errors. As a result you can be sure that when the user visits, it will respond without errors too. Apart from the status code, you can read about other properties of the test client response you can test in the Django Documentation Testing Responses page. In this step, you created a test case for testing that the view rendering the index page works without errors. There are now two test cases in your test suite. In the next step you will run them to see their results. Step 4 — Running Your Tests Now that you have finished building a suite of tests for the project, it is time to execute these tests and see their results. To run the tests, navigate to the blog folder (containing the application’s manage.py file): Then run them with: You’ll see output similar to the following in your terminal: OutputCreating test database for alias 'default'... System check identified no issues (0 silenced). .. ---------------------------------------------------------------------- Ran 2 tests in 0.007s OK Destroying test database for alias 'default'... In this output, there are two dots .., each of which represents a passed test case. Now you’ll modify test_views.py to trigger a failing test. First open the file with: Then change the highlighted code to: ~) Here you have changed the status code from 200 to 404. Now run the test again from your directory with manage.py: You’ll see the following output: OutputCreating test database for alias 'default'... System check identified no issues (0 silenced). .F ====================================================================== FAIL: test_index_loads_properly (blogsite.tests.test_views.ViewsTestCase) The index page loads properly ---------------------------------------------------------------------- Traceback (most recent call last): File "~/my_blog_app/blog/blogsite/tests/test_views.py", line 8, in test_index_loads_properly self.assertEqual(response.status_code, 404) AssertionError: 200 != 404 ---------------------------------------------------------------------- Ran 2 tests in 0.007s FAILED (failures=1) Destroying test database for alias 'default'... You see that there is a descriptive failure message that tells you the script, test case, and method that failed. It also tells you the cause of the failure, the status code not being equal to 404 in this case, with the message AssertionError: 200 != 404. The AssertionError here is raised at the highlighted line of code in the test_views.py file: ~) It tells you that the assertion is false, that is, the response status code ( 200) is not what was expected ( 404). Preceding the failure message, you can see that the two dots .. have now changed to .F, which tells you that the first test case passed while the second didn’t. Conclusion In this tutorial, you created a test suite in your Django project, added test cases to test model and view logic, learned how to run tests, and analyzed the test output. As a next step, you can create new test scripts for Python code not in models.py and views.py. Following are some articles that may prove helpful when building and testing websites with Django: You can also check out our Django topic page for further tutorials and projects.
https://www.xpresservers.com/tag/unit/
CC-MAIN-2020-24
refinedweb
1,547
74.08
std::cerr, std::wcerr The global objects std::cerr and std::wcerr control output to a stream buffer of implementation-defined type (derived from std::streambuf and std::wstreambuf, respectively), associated with the standard C error output stream stderr. These objects are guaranteed to be initialized during or before the first time an object of type std::ios_base::Init is constructed and are available for use in the constructors and destructors of static objects with ordered initialization (as long as <iostream> is included before the object is defined). Unless sync_with_stdio(false) has been issued, it is safe to concurrently access these objects from multiple threads for both formatted and unformatted output. Once initialized, (std::cerr.flags() & unitbuf) != 0 (same for wcerr) meaning that any output sent to these stream objects is immediately flushed to the OS (via std::basic_ostream::sentry's destructor). In addition, std::cerr.tie() returns &std::cout (same for wcerr and wcout), meaning that any output operation on std::cerr first executes std::cout.flush() (via std::basic_ostream::sentry's constructor) (since C++11) [edit] Notes The 'c' in the name refers to "character" (stroustrup.com FAQ); cerr means "character error (stream)" and wcerr means "wide character error (stream)" [edit] Example output to stderr via cerr flushes out the pending output on cout, while output to stderr via clog does not #include <thread> #include <iostream> #include <chrono> void f() { std::cout << "Output from thread..."; std::this_thread::sleep_for(std::chrono::seconds(2)); std::cout << "...thread calls flush()" << std::endl; } int main() { std::thread t1(f); std::this_thread::sleep_for(std::chrono::seconds(1)); std::clog << "This output from main is not tie()'d to cout\n"; std::cerr << "This output is tie()'d to cout\n"; t1.join(); } Output: This output from main is not tie()'d to cout Output from thread...This output is tie()'d to cout ...thread calls flush()
https://en.cppreference.com/w/cpp/io/cerr
CC-MAIN-2020-50
refinedweb
313
51.18
#include <IpJournalist.hpp> Inheritance diagram for Ipopt::Journalist: This class is responsible for all messaging and output. The "printing" code or "author" should send ALL messages to the Journalist, indicating an appropriate category and print level. The journalist then decides, based on reader specified acceptance criteria, which message is actually printed in which journals. This allows the printing code to send everything, while the "reader" can decide what they really want to see. Authors: Authors use the Journals: You can add as many Journals as you like to the Journalist with the AddJournal or the AddFileJournal methods. Each one represents a different printing location (or file). Then, you can call the "print" methods of the Journalist to output information to each of the journals. Acceptance Criteria: Each print message should be flagged appropriately with an EJournalCategory and EJournalLevel. The AddFileJournal method returns a pointer to the newly created Journal object (if successful) so you can set Acceptance criteria for that particular location. Definition at line 124 of file IpJournalist.hpp. Constructor. Destructor. .. Copy Constructor. Method to print a formatted string. Referenced by Ipopt::IpoptException::ReportException(). Method to print a long string including indentation. The string is printed starting at the current position. If the position (counting started at the current position) exceeds max_length, a new line is inserted, and indent_spaces many spaces are printed before the string is continued. This is for example used during the printing of the option documentation. Method to print a formatted string with indentation. Method to print a formatted string using the va_list argument. Method to print a formatted string with indentation, using the va_list argument. Method that flushes the current buffer for all Journalists. Calling this method after one optimization run helps to avoid cluttering output with that produced by other parts of the program (e.g. written in Fortran) Add a new FileJournal. fname is the name of the * file to which this Journal corresponds. Use fname="stdout" * for stdout, and use fname="stderr" for stderr. This method * returns the Journal pointer so you can set specific acceptance criteria. It returns NULL if there was a problem creating a new Journal. Get an existing journal. You can use this method to change the acceptance criteria at runtime. Overloaded Equals Operator. Definition at line 251 of file IpJournalist.hpp.
http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_journalist.html
crawl-003
refinedweb
385
50.33
Down the road, I was going to test out I could use this pattern to create a ProductBuilder (such as ALTA.BLK.LARGE.US) that I could use at my workplace. Much further down the road, I was going to experiment with creating a TestBuilder, where I could set different test parameters, such as placing a standard shopping cart order with multiple products, set the shipping level, and payment types. ... For now, though, I'll stick with attempting to create an AddressBuilder. Sidenote: Fitbit-Boston is hiring! And, yes, if you are a Senior Software Developer in the Boston area, Fitbit is hiring like crazy with positions on our eCommerce team, on the Firmware team that builds software for our devices, on our automation tool development team, and on our IOS development team... mention where you saw this link when you apply! The Gang of Four's Builder Pattern The Builder pattern was first introduced in Design Patterns: Elements of Reusable Object-Oriented Software (1994), written by the "Gang of Four": @EricGamma, Richard Helm, Ralph Johnson and John Vlissides. The purpose of this pattern was to "Separate the construction of a complex object from its representation so that the same construction process can create different representations". There are four parts according to the book: Builder: - "[...] specifies an abstract interface for creating parts of a Product object". ConcreteBuilder - "[...] constructs and assembles parts of the product by implementing the Builder interface". - "[...] defines and keeps track of the representation it creates". - "[...] provides an interface for retrieving the product" Director - "[...] constructs an object using the Builder interface". Product - "[...] represents the complex object under construction. ConcreteBuilder builds the product's internal representation and defines the process by which it's assembled". - "[...] includes classes that define the constituent parts, including interfaces for assembling the parts into the final result". Effective Java's Builder Pattern The Builder Pattern was re-introduced in Joshua Bloch's ( @JoshBloch ) guide, Effective Java (2001). From his blog post, Creating and Destroying Java Objects: ." ". Joshua also mentions that another benefit is that the class is now immutable: Its state, the value of its variables, they won't change after it is created. It's pretty set in concrete. Proceed With Caution, Though Joshua does urge caution: "The Builder pattern does have disadvantages of its own": - Performance Cost when Creating Object: "In order to create an object, you must first create its builder. While the cost of creating the builder is unlikely to be noticeable in practice, it could be a problem in some performance-critical situations". - Only If Parameters >= 4: "[T]he Builder pattern is more verbose than the telescoping constructor pattern, so it should be used only if there are enough parameters, say, four or more". Drafting the Address Consider a small sample of the types of addresses you can have: - An address with Address Name, Address, City and State, such as: Fitbit-Boston, One Marina Park Drive, Boston, MA , suitable for directions in Google Maps. - A mailing address, which can add a second line to the address, and includes a zip code, such as Suite 701, and 02210. - An address which also adds the country, such as United States. If we were to create this, we would have to use three different constructors. Instead, we will create an Address Builder so that we can assemble the address like so: - Address.Builder("Fitbit-Boston", "One Marina Park Drive", "Boston", "MA").build(); - Address.Builder("Fitbit-Boston", "One Marina Park Drive", "Boston", "MA") .setAddress2("Suite 701").setZip("02210").build(); - Address.Builder("Fitbit-Boston", "One Marina Park Drive", "Boston", "MA") .setAddress2("Suite 701").setZip("02210").setCountry("United States").build(); Writing the Code First, let's set up the parameters in the class. The Address object, once constructed from the class, will be unchanging. We will use the keyword final when we declare the variable, making it a constant. Address.java public class Address { private final String addressName; private final String address1; private final String address2; private final String city; private final String state; private final String zip; private final String country; Next, we are going to set up the ConcreteBuilder type, making this internal class static. According to our Address example above, there are two types of fields: - Required, such as Address Name, Address Line 1, City, State - Optional, such as Address Line 2, Zip Code, and Country The optional ones we will save as Strings, but with the required fields, such as Address Name, we will also the keyword final. Builder public static class Builder { // Required Parameters private final String name; private final String address1; private final String city; private final String state; // Optional Parameters private String address2; private String zip; private String country; } Once we figure out which fields are always required, we can create a constructor for the Builder class, to feed the values inputted into the Address' private fields. We do that with the this keyword. public Builder(String name, String address1, String city, String state) { this.name = name; this.address1 = address1; this.city = city; this.state = state; } Next, we want a way to set up the optional parameters: public Builder setAddress2(String value) { address2 = value; return this; } public Builder setCountry(String value) { country = value; return this; } public Builder setZip(String value) { zip = value; return this; } Let's create a method called build to build the object: public Address build() { return new Address(this); } } Now, let's create a constructor for Address, and make it private, accessible only to the Builder itself. private Address(Builder builder) { this.addressName = builder.name; this.address1 = builder.address1; this.city = builder.city; this.state = builder.state; this.address2 = builder.address2; this.country = builder.country; this.zip = builder.zip; }... And there you have it! Our Address and Address Builder is all constructed! Crafting a Test Now that we created everything... how can we turn this into a test? Let's write a method print out the address we assembled. To keep things neat, let's not print out any blank lines. If there isn't an Address2, or a Zip code, let's not print it out. And, not to get all "meta", but let's use a Builder pattern called StringBuilder to assemble all the pieces of the address. We can create a StringBuilder object, call it "output" and append the values of the String components to the StringBuilder. private StringBuilder appendIfNotBlank( StringBuilder output, String value ){ if (value != null ){ output.append(value).append("\n"); } return output; }... This method will save us a lot of code duplication, following the software design principle of Don't Repeat Yourself (DRY). Let's override the method we use to print, the toString() method added to every single Java object. @Override public String toString(){ StringBuilder output = new StringBuilder(); output.append(this.addressName).append("\n") .append(this.address1 ).append("\n"); appendIfNotBlank(output, address2); output.append(this.city).append(", ").append(this.state).append(" "); appendIfNotBlank(output, this.zip); appendIfNotBlank(output, this.country); output.append("\n"); return output.toString(); } Last, but not least, let's create a test to: - Assemble the new addresses we are creating - Store them in variables such as fitbitBostonAddress and fitbitBostonMailingAddressWithCountry. - Print out the new addresses and see what happens! @Test public void test_printUnitedStatesAddresses(){ Address fitbitBostonAddress = new Address.Builder("Fitbit-Boston", "One Marina Park Drive", "Boston", "MA").build(); Address fitbitBostonMailingAddress = new Address.Builder("Fitbit-Boston", "One Marina Park Drive", "Boston", "MA") .setAddress2("Suite 701").setZip("02210").build(); Address fitbitBostonMailingAddressWithCountry = new Address.Builder("Fitbit-Boston", "One Marina Park Drive", "Boston", "MA") .setAddress2("Suite 701").setZip("02210").setCountry("United States").build(); System.out.println("Sample Address Formats:\n"); System.out.println(fitbitBostonAddress); System.out.println(fitbitBostonMailingAddress); System.out.println(fitbitBostonMailingAddressWithCountry); } The Moment of Truth When we run the test, we get the following output printed out: [TestNG] Running: C:\Users\tmaher\.IdeaIC15\system\temp-testng-customsuite.xml Sample Address Formats: Fitbit-Boston One Marina Park Drive Boston, MA Fitbit-Boston One Marina Park Drive Suite 701 Boston, MA 02210 Fitbit-Boston One Marina Park Drive Suite 701 Boston, MA 02210 United States =============================================== Default Suite Total tests run: 1, Failures: 0, Skips: 0 =============================================== Looking for sourcecode you can play around with? You can view it on my GitHub site under BuilderPattern_Java. Examining Builder Patterns: - Part One: Anti-Pattern: Telescoping Constructors - Part Two: Investigating Address Builder - Source Code: GitHub, T.J. Maher That's it for now! Happy Testing! -T.J. Maher Sr. QA Engineer, Fitbit-Boston // QA Engineer since Aug. 1996 // Automation developer for [ 1 ] year and still counting!
http://www.tjmaher.com/2016/05/addressbuilder-builder-pattern-example.html
CC-MAIN-2019-51
refinedweb
1,407
55.54
git-cohttp-mirage Support for on-disk and in-memory Git stores. Can read and write all the Git objects: blobs, trees, commits and tags. It can also handle pack files, pack indexes and index files (where the staging area lives - only for git-unix package). All the objects share a consistent API, and convenience functions are provided to manipulate the different objects. For instance, it is possible to make a pack file position independent (as the Zlib compression might change the relative offsets between the packed objects), to generate pack indexes from pack files, or to expand the filesystem of a given commit. The library comes with some command-line tools called ogit-* as a Proof-of-concept of the core library which shares a similar interface with git, but where all operations are mapped to the API exposed by ocaml-git (and hence using only OCaml code). However, these tools are not meant to be used. They are just examples of how to use ocaml-git. ocaml-git wants to be a low-level library for irmin. By this fact, high-level commands such as a (patience) diff, git status, etc. are not implemented. As a MirageOS project, ocaml-git is system agnostic. However, it provides a git-unix package which uses UNIX syscall and is able to introspect a usual Git repository in a filesystem. However, ocaml-git handles only Git objects and does not populate your filesystem as git does. For example, Git_unix.Sync.fetch does not give you files fetched from the repository but only synchronizes .git with that repository. The API documentation is available online. Build, Install Instructions and Packages To build and install the project, simply run: $ opam install git $ opam install git-unix $ opam install git-mirage Linking-trick ocaml-git uses 2 libraries with the linking-trick: These libraries provide a C implementation and an OCaml implementation (mostly to be compatible with js_of_ocaml). However, utop or any a build-system such as ocamlbuild are not able to choose between these implementations. So, you must explicitely choose one. These libraries use virtual-library available with dune. If your build-system is dune, you should not have any problem about that where dune is able to take the default implementation of these libraries. What is supported The loose object files can be read and written; commits (revision history) references (branch names) The PACK files (collections of compressed loose objects using a binary-diff representation) and PACK indexes (indexes of pack files) can be read and written). The binary diff hunks are exposed using a high-level position-independent representation so that they can be manipulated more easily. Pack file can be created and is compressed. The INDEX file (used as for managing the staging area) are fully supported, which means that git diffand git statuswill work as expected on a repository created by the library. This feature is only available for git-unixwhen it needs to introspect a file-system. Cloning and fetching (using various options) are fully supported for the Git protocol, the smart-HTTP protocol and git+ssh. A subset of the protocol capabilities are implemented (mainly thin-pack, ofs-delta, side-band-64kand allow-reachable-sha1-in-want). Pushing is still experimental and needs more testing. An abstraction for Git Store Is available. Various store implementations are available: An in-memory implementation; A unix filesystem implementation; What is not supported No server-side operations are currently supported. No GC. Updates, merge and rebase are not supported. Use irmin instead. Performance Performance is comparable to the Git tool. Example This utop example must run into the ocaml-git repository when the given path is .. # ;; load necessary modules # #require "checkseum.c" ;; # #require "digestif.c" ;; # #require "git-unix" ;; # ;; we are going to use this project's local repository # module Store = Git_unix.Store ;; module Store = Git_unix.Store # ;; this module is useful for finding git objects in a git store # module Search = Git.Search.Make (Digestif.SHA1) (Store) ;; module Search : sig type hash = Store.hash type store = Store.t type pred = [ `Commit of hash | `Tag of string * hash | `Tree of string * hash | `Tree_root of hash ] val pred : store -> ?full:bool -> hash -> pred list Lwt.t type path = [ `Commit of path | `Path of string list | `Tag of string * path ] val mem : store -> hash -> path -> bool Lwt.t val find : store -> hash -> path -> hash option Lwt.t end # ;; we want to read the contents of a blob under name [filename] # let read filename = let open Lwt_result.Syntax in (* get store located in current root's .git folder *) let* store = Store.v (Fpath.v (Sys.getcwd ())) in (* find obj-id pointed at by master branch (reference) *) let* commit_id = Store.Ref.resolve store Git.Reference.master in let open Lwt.Syntax in (* find obj-id of of [filename] as a git blob *) let* blob_id = Search.find store commit_id (`Commit (`Path [ filename ])) in match blob_id with | None -> Lwt.return (Error (`Not_found commit_id)) | Some hash -> (* read contents of the blob *) Store.read store hash ;; val read : string -> (Store.Value.t, Store.error) Lwt_result.t = <fun> # let pp = let ok ppf = function | Git.Value.Blob b -> Fmt.string ppf (Git.Blob.to_string b) | _ -> Fmt.string ppf "#git-object" in Fmt.result ~ok ~error:Store.pp_error;; val pp : ('_weak1 Git.Value.t, Store.error) result Fmt.t = <fun> # Lwt_main.run Lwt.Infix.(read "README.md" >|= pp Fmt.stdout) ;; ocaml-git --). [...] () License MIT, see LICENSE.md file for its text. sha256=a3ab287daa3f1163be610f9fb7d327e9ab00cf43f5a38cb5ccd02d7278e21749 sha512=f627ca0aaad112c08831280d44edcaf7324e9e6d48a7fe13cb0d54bb2385b038bbd5d8d334a93b7a56087618debea289d6f99010e1ed09b88c95ac38c8d3d641 >= "2.0.1" & with-test >= "0.7.0" & with-test >= "1.2.3" & with-test >= "2.5.4" < "3.0.0" >= "2.5.1" & < "2.6.0"
https://ocaml.org/p/git-cohttp-mirage/3.3.3
CC-MAIN-2022-27
refinedweb
938
60.21
This documentation is archived and is not being maintained. How to: Read a Text File (C++/CLI) Visual Studio 2010 The following code example demonstrates how to open and read a text file one line at a time. This is accomplished with the StreamReader class defined within the System.IO namespace. An instance of this class is used to open a text file and then the StreamReader.ReadLine method is used to retrieve each line. This code can be used with any file named textfile.txt that contains text or with the file generated in How to: Write a Text File (C++/CLI). // text_read.cpp // compile with: /clr #using<system.dll> using namespace System; using namespace System::IO; int main() { String^ fileName = "textfile.txt"; try { Console::WriteLine("trying to open file {0}...", fileName); StreamReader^ din = File::OpenText(fileName); String^ str; int count = 0; while ((str = din->ReadLine()) != nullptr) { count++; Console::WriteLine("line {0}: {1}", count, str ); } } catch (Exception^ e) { if (dynamic_cast<FileNotFoundException^>(e)) Console::WriteLine("file '{0}' not found", fileName); else Console::WriteLine("problem reading file '{0}'", fileName); } return 0; } Other Resources Show:
https://msdn.microsoft.com/en-us/library/y52yxde8(v=vs.100).aspx
CC-MAIN-2017-47
refinedweb
183
66.44
Custom Site Settings Sometimes you may need to persist some global settings (eg. license code, service login, default width etc.) to be reused across your module. Orchard makes it really simple and I'll show you how to do it. Basically, there are two scopes you can define your settings in: - Site scope - for global site settings. - Content type scope - for settings common to all items of a given type (eg. a Page, a Blog, a BlogPost and so on). Defining site scope settings (Orchard 1.8 Onwards) Orchard 1.8 drastically simplifies creation of site settings, removing the previous need for "Part Records" and migration files. To create new site settings for your module you now only need three classes; A ContentPart, a Handler and potentially a view file if you want the settings to be edited via the "Site Settings" area of Admin. For a real world example look for the RegistrationSettingsPart, RegistrationSetttingsPartHandler and Users.RegistrationSettings.cshtml files in the Orchard.Users module. The Content Part public class ShareBarSettingsPart : ContentPart { public string AddThisAccount { get { return this.Retrieve(x=> x.AddThisAccount); } set { this.Store(x=> x.AddThisAccount, value); } } } The Handler [UsedImplicitly] public class ShareBarSettingsPartHandler : ContentHandler { public ShareBarSettingsPartHandler() { T = NullLocalizer.Instance; Filters.Add(new ActivatingFilter<ShareBarSettingsPart>("Site")); Filters.Add(new TemplateFilterForPart<ShareBarSettingsPart>("ShareBarSettings", "Parts/ShareBar.ShareBarSettings", "Modules")); } public Localizer T { get; set; } protected override void GetItemMetadata(GetContentItemMetadataContext context) { if (context.ContentItem.ContentType != "Site") return; base.GetItemMetadata(context); context.Metadata.EditorGroupInfo.Add(new GroupInfo(T("Modules"))); } } The View > Using site scope settings Accessing your site setting is a simple one liner: var shareSettings = _services.WorkContext.CurrentSite.As<ShareBarSettingsPart>(); Where _services is the IOrchardServices object (eg. injected in the constructor). Defining site scope settings (Pre-Orchard 1.8) Defining custom site scope settings for before Orchard 1.8 can be in Adding Custom Settings pre Orchard 1.8 Defining settings for Content Types We're now going to create settings and defaults wired with specific content type (like Page, User, Blog etc.). This looks much different comparing to the previous one, but also requires less coding. There are just two classes and one shape involved and that's all. As before, we'll use the simplified examples taken from the Orchard Sharing project. The goal: (GET) and the second one when the edit form data gets posted (POST). Unlike the generic content part drivers, this class is not bound to the specific content type (as the content types are just a list of names for a collection of parts), so each of the methods we just defined will be called for every content type and for every part. This is why the yield break statement is used -.. renders the dropdown list so user can choose one of the predefined Modes. Model.AvailableModes contains the available ones: we populated the property with appropriate data in TypePartEditor method above. Hooray, we're done!_1<< Using settings for content type As for site-scoped settings, this section.
http://docs.orchardproject.net/en/latest/Documentation/Adding-custom-settings/
CC-MAIN-2017-51
refinedweb
492
50.84
getcchar - get a wide character string and rendition from a cchar_t #include <curses.h> int getcchar(const cchar_t *wcval, wchar_t *wch, attr_t *attrs, short *color_pair, void *opts); When wch is not a null pointer, the getcchar() function extracts information from a cchar_t defined by wcval, stores the character attributes in the object pointed to by attrs, stores the colour pair in the object pointed to by color_pair, and stores the wide character string referenced by wcval into the array pointed to by wch. When wch is a null pointer, getcchar() obtains the number of wide characters in the object pointed to by wcval and does not change the objects pointed to by attrs or color_pair. The opts argument is reserved for definition in a future edition of this document. Currently, the application must provide a null pointer as opts. When wch is a null pointer, getcchar() returns the number of wide characters referenced by wcval, including the null terminator. When wch is not a null pointer, getcchar() returns OK upon successful completion, and ERR otherwise. No errors are defined. The wcval argument may be a value generated by a call to setcchar() or by a function that has a cchar_t output argument. If wcval is constructed by any other means, the effect is unspecified. attroff(), can_change_color(), setcchar(), <curses.h>.
http://www.opengroup.org/onlinepubs/007908799/xcurses/getcchar.html
crawl-001
refinedweb
219
51.58
Although Bill Gates says it won’t happen, here are 10 things I’d like to see if Microsoft would acquire Disney: 10. Free passes to Epcot Center for all MSDN Universal subscribers. 9. Scoble exchanges butterfly suit for Sorcerer’s Apprentice Mickey outfit. 8. Windows ships with “It’s A Small World After All” as startup wav file. 7. Office Assistants Clippy the paperclip and Kairu the dolphin replaced by Grumpy and Sneezy dwarves. [“For crying out loud, you still need help with the mail-merge?”] 6. “Tomorrowland” attraction replaced with “Longhornland”. 5. “Honey I Shrunk The Kids Movie Set Adventure” replaced by “Windows Build Lab and War Room adventure”. 4. Wrong memo sent to Magic Kingdom security guards - incoming patrons inspected for buffer overflows. 3. The namespace System.Drawing.Animations.ToyStory 2. Free theme park advocates build the amusement park “Lisney”. Patrons disappointed when they need to edit /etc/lilo.conf with vi before gaining entrance to the “Rural Bear Jamboree”. 1. .NET Rocks! on the Disney Channel. <br>then they can change the microsoft name to <br> MICKEYSOFT. <br>Then I can buy a copy of mickeysoft windows <br>XP-pro with service pack toystory2 :) <br>
http://odetocode.com/Blogs/scott/archive/2004/03/04/if-microsoft-would-only-buy-disney.aspx
CC-MAIN-2014-35
refinedweb
199
68.16
#include <wx/thread.h> wxSemaphore is a counter limiting the number of threads concurrently accessing a shared resource. This counter is always between 0 and the maximum value specified during the semaphore creation. When the counter is strictly greater than 0, a call to wxSemaphore::Wait() returns immediately and decrements the counter. As soon as it reaches 0, any subsequent calls to wxSemaphore::Wait block and only return when the semaphore counter becomes strictly positive again as the result of calling wxSemaphore: wxSemaphore::Wait and each time a room is freed it should be released by calling wxSemaphore::Post. Specifying a maxcount of 0 actually makes wxSemaphore behave as if there is no upper limit. If maxcount is 1, the semaphore behaves almost as a mutex (but unlike a mutex it can be released by a thread different from the one which acquired it). initialcount is the initial value of the semaphore which must be between 0 and maxcount (if it is not set to 0). Destructor is not virtual, don't use this class polymorphically. Increments the semaphore count and signals one of the waiting threads in an atomic way. Returns wxSEMA_OVERFLOW if the count would increase the counter past the maximum. Wait indefinitely until the semaphore count becomes strictly positive and then decrement it and return.
http://docs.wxwidgets.org/3.0/classwx_semaphore.html
CC-MAIN-2018-34
refinedweb
218
60.45
Asked by: Export the Workflow details history Hi, I have difficulties in export the workflow details history(which usually on the third page) together with all the data on the first page. For example, from one list, there are about 100 issues listed and each issues have its own workflow details history. If we want to view the history, we need to click on each issues to see the details. My questions is when we export(to spreadsheet) all the issues, I want the workflow history details (e.g. event type, date occured) to be exported together for each issues from the list. I tried to edit the view but couldn't find out the workflow history attribute/column. Please advise if I need to use additional tools/script for this tasks or there is a way underneath the template which I missed out. Man thanks. Question All replies There is a hidden list in your site called "Workflow History"; history for all workflows on your site get logged there. What you see on the "third page" is this list filtered by workflow instance id. To work with this list like you would with any other list, you need to simply unhide it. You could use the following C# code to make the list visible.Code Snippet using(SPSite site= new SPSite("")) { SPList list = site.RootWeb.Lists["Workflow History"]; list.Hidden=false; list.Update(); } Hi Unclepaul, Really appreciate your answer. I have searched inside the sharepoint to locate "Workflow history" button, but couldn't find it . Can you guide me on where to get this button, and also where to put this code (since I'm still in new to sharepoint world). Do I need to use other tools which can communicate with sharepoint to put this code? And also do I need to do something on the server side? Kindly advise. Many thanks. You would need to Visual Studio 2005 intalled on machine with a sharepoint instance. Below are the steps you can follow: Code Snippet - Create Console Application project - Add reference to Microsoft.Sharepoint.dll - Replace contents of the class with code from below - Replace with url to ur site - Compile and run on machine where your production sharepoint instance is installed - Login to ur site, go to View All Site Contents, you should now see Workflow History list using System; using System.Collections.Generic; using Microsoft.SharePoint; namespace MyNamespace { static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = new SPSite("")) { SPList list = site.RootWeb.Lists["Workflow History"]; list.Hidden = false; list.Update(); } }); } } }
http://social.msdn.microsoft.com/Forums/en-US/9a4284ac-128a-4ac6-9b26-e8deaf9d87bf/export-the-workflow-details-history?forum=sharepointcustomizationlegacy
CC-MAIN-2014-15
refinedweb
438
64.51
So now here we are, having tried to handle errors in Google App Engine...and failed all because silly DeadlineExceededError jumps over Exception in the inheritance chain and goes right for BaseException. How can we catch these in our handlers while staying Pythonic? First and foremost, in the case of a timeout, we need to explicitly catch a DeadlineExceededError. To do so, we can use a decorator (hey, that's Pythonic) in each and every handler for each and every HTTP verb. (Again, prepare yourselves, a bunch of code is about to happen. See the necessary imports at the bottom of the post.) def deadline_decorator(method): def wrapped_method(self, *args, **kwargs): try: method(self, *args, **kwargs) except DeadlineExceededError: traceback_info = ''.join(format_exception(*sys.exc_info())) email_admins(traceback_info, defer_now=True) serve_500(self) return wrapped_method Unfortunately, having to manually decorate all the functions is not so Pythonic. At this point I was stuck and wanted to give up, but asked for some advice on G+ and actually got what I needed from the all knowing Ali Afshar. What did I need? Before showing the super simple metaclass I wrote, you need to know one thing from StackOverflow user Kevin Samuel: The main purpose of a metaclass is to change the class automatically, when it's created. With the __new__ method, the type object in Python actually constructs a class (which is also an object) by taking into account the name of the class, the parents (or bases) and the class attritubutes. So, we can make a metaclass by subclassing type and overriding __new__: class DecorateHttpVerbsMetaclass(type): def __new__(cls, name, bases, cls_attr): verbs = ['get', 'post', 'put', 'delete'] for verb in verbs: if verb in cls_attr and isinstance(cls_attr[verb], function): cls_attr[verb] = deadline_decorator(cls_attr[verb]) return super(DecorateHttpVerbsMetaclass, cls).__new__(cls, name, bases, cls_attr) In DecorateHttpVerbsMetaclass, we look for four (of the nine) HTTP verbs, because heck, only seven are supported in RequestHandler, and we're not that crazy. If the class has one of the verbs as an attribute and if the attribute is a function, we decorate it with deadline_decorator. Now, we can rewrite our subclass of RequestHandler with one extra line: class ExtendedHandler(RequestHandler): __metaclass__ = DecorateHttpVerbsMetaclass def handle_exception(self, exception, debug_mode): traceback_info = ''.join(format_exception(*sys.exc_info())) email_admins(traceback_info, defer_now=True) serve_500(self) By doing this, when the class ExtendedHandler is built (as an object), all of its attributes and all of its parent classes (or bases) attributes are checked and possibly updated by our metaclass. And now you and James Nekbehrd can feel like a boss when your app handles errors. Imports: from google.appengine.api import mail from google.appengine.ext.deferred import defer from google.appengine.ext.webapp import RequestHandler from google.appengine.runtime import DeadlineExceededError import sys from traceback import format_exception from SOME_APP_SPECIFIC_LIBRARY import serve_500 from LAST_POST import email_admins Pythonic: An idea or piece of code which closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages. Notes: Using grep -r "Exception)" . | grep "class "I have convinced myself (for now) that the only errors App Engine will throw that do not inherit from Exceptionare DeadlineExceededError, SystemExit, and KeyboardInterruptso that is why I only catch the timeout. You can also use webapp2to catch 500 errors, even when handle_exceptionfails to catch them. Disclaimer: Just because you know what a metaclass is doesn't mean you should use one. - "Don't do stuff like this though, what is your use case?" - Ali Afshar - "Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why)." - Python Guru Tim Peters - "The main use case for a metaclass is creating an API." -Kevin Samuel
https://blog.bossylobster.com/2011/11/python-metaclass-for-extra-bad-errors.html
CC-MAIN-2016-36
refinedweb
642
52.49
From: Joel de Guzman (joel_at_[hidden]) Date: 2004-07-25 22:27:35 Gennadiy Rozental wrote: >>FYI, latest changes to the aforementioned >>file seem to break things for msvc-stlport, >>as shown for instance at: > > > Dave tried to use "using namespace std" solution. It's already fixed in cvs. BTW, you need to special case for msvc-stlport to avoid ambiguity. Here's how I do it: #if defined(BOOST_MSVC) && (BOOST_MSVC < 1300) #define BOOST_SPIRIT_USING_STD #else #define BOOST_SPIRIT_USING_STD using namespace std #endif Which works with or without stl-port. Tested! Cheers, -- Joel de Guzman Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/07/69129.php
CC-MAIN-2021-43
refinedweb
115
67.45
Data extraction involves pulling data from different sources and converting it into a useful format for further processing or analysis. It is the first step of the Extract-Transform-Load pipeline (ETL) in the data engineering process. Data is not always available as a CSV file. Sometimes you will need to combine data that is available in multiple file formats such as JSON, XML, and SQL. In this tutorial, we will use python libraries such as pandas, json, and requests to read data from different sources and load them into a Jupyter notebook as a pandas dataframe. 1. CSV files This refers to a ‘ comma-separated values’ file that is used to store data in a tabular format, similar to a spreadsheet. Each line in the file is an observation (or record), and each record has one or more features separated by commas. Often, not always, the first line represents the feature names. We will use the math results data from the data.gov website. Before we create a pandas data frame from the data, it is important to first peek into the file using python’s inbuilt reading library. The following code displays the first 5 lines of the downloaded file. What to look out for: - Whether we will need extra arguments with pd.read_csv() - If the first line contains the variable names. If not, we would use pd.read_csv(file, header=None). - If there are any empty rows at the top we need to skip using pd.read_csv(file, skip_rows=n). The code below downloads the file directly from the website and creates a Pandas dataframe. file_link = '' df_csv = pd.read_csv(file_link) 2. XML data Extensible Markup Language, or XML, is a markup language very similar to HTML. You can recognize XML formatted data because it’s marked down by unique tags specific to the data they represent, unlike the pre-defined tags of HTML. We will continue using the math results data from the data.gov website which provides an option to download the data in an XML format. Running the code below in a Jupyter notebook will print the first 20 lines. To read XML data, we use a python library called BeautifulSoup. To install BeautifulSoup, paste the code below into the Jupyter notebook. !pip install bs4 Then import the library, open the file and load its contents into BeautifulSoup. from bs4 import BeautifulSoupwith open("math results.xml") as f: soup = BeautifulSoup(f, "lxml") From the XML data, we will use the find_all() method of BeautifulSoup that returns the tags as a list. For our case, the tags of interest are ‘district’, ‘grade’, ‘number_tested’, and ‘mean_scale_score’. We will then loop through the rows, and for each tag, retrieve the value inside it using the get_text method. The list of lists named ‘data’ represents all our records. The next step is to create a pandas dataframe and pass the columns that will be used for the data. df_xml = pd.DataFrame(data, columns=['district', 'grade','number_tested', 'mean_scale_score']) 3. JSON data JavaScript Object Notation, or JSON, compacts data in a dictionary-style format. The json file for the math results is available at this link. Below is a screenshot of the file using a code editor. JSON was developed after XML but has become more popular over time. The two formats are frequently used for sending data via web APIs which are discussed in point 5 below. Pandas library offers a simple way to read in a json file; pd.read_json(). You can pass the orient option depending on how the data is formatted. The pandas’ documentation explains the orient options. Our json data is formatted slightly different and pd.read_json() returns an error. It has a lot of meta-information, almost 920 lines. The actual data starts at the 921st line. The image below shows a screenshot of the data using a notepad++ code editor. Note that the first 8 fields per row are useless. We will use the json library. This library reads in the json data as a dictionary. import jsonwith open('math results.json') as f: json_data = json.load(f)type(json_data) ###dict In our case, there will be 2 dictionary entries; meta and data. We can therefore get the data information using dict.get(key). data = json_data.get('data') data = [x[8:] for x in data] The left image below shows our data as a list of lists. The right image uses list comprehension to return only items from the 8th element in each list. AI/ML Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
https://ramseyelbasheer.io/2021/06/08/what-is-data-extraction-python-review-of-json-xml-apis-sql-and-csv-formats/
CC-MAIN-2021-25
refinedweb
768
66.54
Capturing console messages to a log - BetterAuto Pybytes Beta last edited by BetterAuto I want to capture all console output such as debug messages and uncaught exceptions to a log file. I created the following code but it's not working. It does grab the console as expected. Also, if I comment out dupterm I can console.write("Some string") and it gets added to the log. But if I raise an exception it just quits the script without adding anything to the log. Please assist. import time from os import dupterm from uio import StringIO from _thread import start_new_thread def log_console(): console = StringIO() dupterm(console) while True: logH = open('/flash/console.log', 'w+') logH.write(console.getvalue()) logH.close() time.sleep(1) start_new_thread(log_console, ()) Edit: Oops, this question is almost a duplicate. Forgot that I had asked. - svonbentzel last edited by sys.print_exception, that might be able to help with printing out exceptions and traceback - BetterAuto Pybytes Beta last edited by Afraid not. @BetterAuto: Did you find a working solution? I'm hitting the same problem.
https://forum.pycom.io/topic/1523/capturing-console-messages-to-a-log/5
CC-MAIN-2019-39
refinedweb
177
68.26
Objective works? While using the Printing support classes, eventually Silver Light application opens the Printing Dialog box like below. What is new in Silver Light 4.0 for printing support? Answer is PrintDocument class. This class provides events and function to achieve Printing in Silver Light application. PrintDocument class - This class is inside namespace System.Windows.Printing. - There are three events to handles printing. They are EndPrint , StartPrint , PrintPage. - There is one method called Print() to do the print operation. This method takes no input parameter. - There is a property DocumentName . How to use this class? - Create instance of PrintDocument class. - Specify the content to print. - Write code to handle the PagePrint event. - Entire SilverLight control can be printed by setting PageVisual to layout. - Portion of silverlight application can be printed by setting the pagevisual tospecific control. Diving to some example I have a - Stack Panel inside a Grid - Inside Stack Panel there is one button and a text box. <Grid x:Name=”LayoutRoot” Background=”Azure”> <StackPanel Height=”auto” Width=”auto” x:Name=”stkPanel” Orientation=”Horizontal” > <TextBlock x:Name=”txtBlcok” Text=”Dhananjay Kumar” Height=”40″ Width=”175″ /> <Button x:Name=”mybutton” Content=”Print” Click=”mybutton_Click” Height=”30″ Width=”50″ /></StackPanel> </Grid> Now I need to, Print the text Box - Create instance of PrintDocument class. - Handling PrintPage event - Inside the event setting PageVisual to name of the text box. - Calling the Print() method in last. - If you want to print the entire application, just set PageVisual to any layout like stack or Grid. In this case for stack PageVisual would be set to stkPanel. Light 4.0. Thanks for reading. One thought on “Silver Light 4.0 Feature #1 Printing Support” “Silver Light 4.0 Feature #1 Printing Support | debug mode. …..” ended up being a good post. However, if it owned alot more pictures it would definitely be possibly even a lot better. Cya ,Mario
https://debugmode.net/2009/11/23/169/
CC-MAIN-2022-05
refinedweb
314
61.33
-Advertisement - OSI-Approved Open Source (29) - Creative Commons Attribution License (1) - Public Domain (1) - Linux (27) - Grouping and Descriptive Categories (26) - Windows (24) - Mac (23) - BSD (11) - Modern (9) - Android (4) - Other Operating Systems (4) Site Management Software - Link Checking - Hot topics in Site Management Softwarewamp php vertrigo vcruntime140.dll for wamp testlink admin panel in php wamp php 5.3.10 php & mysql web template vertigo wamp server phpMyTicket phpMyTicket: online box office ticket reservation for theaters, festivals or just your party. Interactive maps of seats; fully customizable front-end in Smarty; tickets as PDF files with barcode, customizable by templates; e-payment; admin interface ...16 weekly downloads XOOPS Brasil xoops brasil: themes, templates, modules, tutorial, demo, downloads.. A simple php shop script Very simple php mysql shopping cart script with minimal features. This project aims to release the most simple shopping cart php mysql script ever. With literally no bells and whistles features at all. ACMS - CMS with LiveUpdate New system CMS - Content Menager System with LiveUpdate system! It is new version of GWPE CMS system. SUPPORT DRUPAL TEMPLATE!!!! Templates download from web (integrated in CMS). IMHO Instant Blogger IMHO Instant Blogger IMHO is a Windows Forms application that permit to edit posts for your weblog offline in WYSIWYG and to publish them asyncronously. It has many powerful features. Impact-CMS CMS based on Spring framework JCMS PluginManager and Plugins The goal of this project is the development of plugins for Content Management System Jalios JCMS. This project provides a plugins' manager for JCMS 5.0.x. Loggerblogger Uberblog Loggerblogger uberblog is a Blogging software that provides a clean URL namespace, standards-compliance, and various advanced features hard to find in other blogging server software. NoraFRAME NoraCMF will be a nice Content-Management-System. It will support as many data bases as possible. Our System will contained an content-editor, integrated caching for frequent data base querys, and many more! PHP Homepage Framework The PHP Homepage Framework is a set of PHP classes that aims to eliminate the "q'n'd homepage mess" problem. Set up a basic homepage quickly - then later on decide you want a completely different layout / menu / style? No problem! SQL Filesystem Allows to mount different SQL servers as NFS server Sitellite CMS Sitellite is an advanced web based Content Management System (CMS) and PHP framework which also utilizes Apache and MySQL technology to operate. The latest Sitellite CMS project downloads are always available at.0
https://sourceforge.net/directory/internet/www/sitemanagement/license:other/license:gpl/
CC-MAIN-2017-47
refinedweb
412
54.73
These instructions tell you how to setup and program the ATtiny85 microcontroller from a Raspberry Pi via the SPI interface. Lots of people use the Ardiuno to do this (then you can use the Arduino IDE and simplified C commands), or you can use a USB based programmer. I do not have an Ardiuno and don't want to buy a dedicated programmer. I do have a Pi, so I was pleased to learn I could use it as a way to get into microcontroller programming. You will need: Raspiberry Pi ATtiny85 chip 5 x 1K resistors (or similar) LED of your choice A connection to the GPIO of the Pi, and a breadboard and wire. Based on... and... and... Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Setup the Raspberry Pi At the terminal of the Pi: Download and build avrdude sudo apt-get install bison automake autoconf flex git gcc sudo apt-get install gcc-avr binutils-avr avr-libc git clone cd avrdude/avrdude ./bootstrap && ./configure && sudo make install Setup SPI on the GPIO sudo raspi-config and Enable SPI device in the Advanced Options (see picture) You can check this at the command line with lsmod, no need to reboot. (Maybe need to sudo modprobe spidev) Download and build WiringPi for the gpio commands cd ~ git clone git://git.drogon.net/wiringPi cd wiringPi ./build Step 2: Electrical Connections Connect up the ATtiny85 to the Raspberry Pi GPIO (wire colours from the picture are given for reference): GPIO pin ATtiny pin Comment 15 1 GPIO22 to Reset (through 1K, Blue wire) 17 8 3.3 V (Green wire) 19 5 MOSI (through 1K, Yellow wire) 21 6 MISO (through 1K, Orange wire) 23 7 SCLK (through 1K, Red wire) 25 4 GND (Brown wire) (I could not find a way to do a nice table in instructables) Step 3: Test Avrdude Connection. sudo gpio -g mode 22 out sudo gpio -g write 22 0 sudo avrdude -p t85 -P /dev/spidev0.0 -c linuxspi -b 10000 sudo gpio -g write 22 1 This must give success type messages! Step 4: Program the ATtiny85 Program the ATtiny85: cd ~ mkdir ATtiny85 cd ATtiny85 mkdir blinky cd blinky create the blinky.c file and add the following code nano blinky.c //////////////////////// #define F_CPU 1000000L #include <avr/io.h> #include <util/delay.h> int main(void) { DDRB = 0xFF; // PORTB is output, all pins PORTB = 0x00; // Make pins low to start for (;;) { PORTB ^= 0xFF; // invert all the pins _delay_ms(100); // wait some time } return 0; } //////////////////////// add this code to a Makefile file nano Makefile /////////////////////// MCU=attiny85 AVRDUDEMCU=t85 CC=/usr/bin/avr-gcc CFLAGS=-g -Os -Wall -mcall-prologues -mmcu=$(MCU) OBJ2HEX=/usr/bin/avr-objcopy AVRDUDE=/usr/local/bin/avrdude TARGET=blinky all : $(CC) $(CFLAGS) $(TARGET).c -o $(TARGET) $(OBJ2HEX) -R .eeprom -O ihex $(TARGET) $(TARGET).hex rm -f $(TARGET) install : all sudo gpio -g mode 22 out sudo gpio -g write 22 0 sudo $(AVRDUDE) -p $(AVRDUDEMCU) -P /dev/spidev0.0 -c linuxspi -b 10000 -U flash:w:$(TARGET).hex sudo gpio -g write 22 1 noreset : all sudo $(AVRDUDE) -p $(AVRDUDEMCU) -P /dev/spidev0.0 -c linuxspi -b 10000 -U flash:w:$(TARGET).hex fuse : sudo gpio -g mode 22 out sudo gpio -g write 22 0 sudo $(AVRDUDE) -p $(AVRDUDEMCU) -P /dev/spidev0.0 -c linuxspi -b 10000 -U lfuse:w:0x62:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m sudo gpio -g write 22 1 clean : rm -f *.hex *.obj *.o /////////////////////// (Sorry, the instructables text editor has destroyed all the tabs and spacing in the code above, the Makefile will not work without tabs in the correct place) To compile type: make To compile and upload code to the ATtiny: make install To optionally send fuses: make fuse This program oscillates ALL 5 pins as outputs so the LED should flash. We are programming the ATtiny directly using avr-libc. To change the setup of the ATtiny get fuses from : 10 People Made This Project! Golom made it! David M2 made it! ahernandez5 made it! keaaw made it! molle17031976 made it! HenricD made it! 斐郑 made it! spirail made it! mva111 made it! ajohnson134 made it! See 1 More 32 Discussions 10 months ago on Step 4 What a great article - many thanks for sharing. I sorted the tabs by looking at the screen dump of step 4. 3 years ago I am lost when it comes to coding... Where should the TABS go? Can someone please post a picture of the correct code? Reply 3 years ago Hi, I have the code on Github now:... Cheers! 3 years ago Thankyou so much! I had purchased 10x ATtiny84a's and had no idea, until I stumbled upon this. Much Appreciated:) 3 years ago Hi, this helped me to get going, but you should update/change one thing. After reading this post:-... I noticed that the 'linuxspi' programmer by default uses the GPIO pin 25 for reset, so your reset wrapper script is not needed. Else, great work ;) 3 years ago Outstanding!! 4 years ago Thank you. 4 years ago on Introduction! Reply 4 years ago on Introduction Oh, and I used this with an ATmega88; I'm sure you can use it with any ATmega that supports 3.3v operation, which, for all I know, is all of them. 4 years ago on Introduction Works with atmega8. When I poweroff the Raspberry Pi the Reset pin GPIO 22 gets to zero, stopping the program on the microcontroller. I added a pullup resistor of 10 k to 3.3 v to GPIO22 and now it works after Pi poweroff too :) . Thank you. I made it but I don't yet have a camera / photo capable phone so no images... 4 years ago Is it possible to program Atmega8-8PU using this? 4 years ago on Introduction Thank you! Thank you! Thank you! My L.E.D.s are flashing, and my stepper motor is spinning. I could not have gotten this far this fast without your VERY helpful Instructable! 4 years ago i dont know nothing about makefile, could you help me, which is the path where the file should be, i have to paste or create a new file ? 4 years ago hi im having this error"28:*** target pattern contains no '%' .Stop Can anyone help me with this.... 4 years ago on Introduction. Reply 4 years ago on Introduction? 4 years ago on Introduction I try to upload this code instead of blinky but i got a lot of error and i don't know why if someone can help me... from... PS : It works perfectly when i upload blinky Reply 4 years ago on Introduction. Reply 4 years ago on Introduction Thanks for the help i understand why it doesn't work now. I gonna try to directly upload the hex generate by arduino and if it doesn't work i gonna use your code to make mine. Thanks again 5 years ago
https://www.instructables.com/id/Programming-the-ATtiny85-from-Raspberry-Pi/
CC-MAIN-2019-43
refinedweb
1,190
73.78
17 February 2012 02:39 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The aromatics unit can produce 220,000 tonnes/year of benzene, 250,000 tonnes/year of toluene and 680,000 tonnes/year of isomer-grade xylene (IX), the source said. The company will also shut its 530,000 tonne/year paraxylene (PX) plant at The company will raise the PX unit's operating rate from 80% to 100% after maintenance is completed, the source said. ZRCC has purchased about 10,000 tonnes of PX for loading in March and is expected to continue purchasing in the spot market in order to supply PX to downstream purified terephthalic acid (PTA) producers,
http://www.icis.com/Articles/2012/02/17/9533091/chinas-zrcc-to-shut-aromatics-px-units-for-maintenance.html
CC-MAIN-2014-42
refinedweb
112
55.07
This has been mentioned before -- at least by myself, perhaps by others, too -- but the Subversion folks would *really* like to see APR grow a gimme-a-temp-directory function. The following is the code portion from Python2.2's tempfile module whereby a list of places to try as valid temp directories is assembled: attempdirs = ['/tmp', '/var/tmp', '/usr/tmp', pwd] if os.name == 'nt': attempdirs.insert(0, 'C:\\TEMP') attempdirs.insert(0, '\\TEMP') elif os.name == 'mac': import macfs, MACFS try: refnum, dirid = macfs.FindFolder(MACFS.kOnSystemDisk, MACFS.kTemporaryFolderType, 1) dirname = macfs.FSSpec((refnum, dirid, '')).as_pathname() attempdirs.insert(0, dirname) except macfs.error: pass elif os.name == 'riscos': scrapdir = os.getenv('Wimp$ScrapDir') if scrapdir: attempdirs.insert(0, scrapdir) for envname in 'TMPDIR', 'TEMP', 'TMP': if os.environ.has_key(envname): attempdirs.insert(0, os.environ[envname]) Once the list is built, Python simply starts at the head of the list (sorry, the "sequence") and checks each path until it finds one in which it can successfully create, write to (the string "blat", not that it matters), and remove a file. If no valid directory exists in the list, an exception is thrown and the caller is just out-of-luck. I seem to recall that at one point or another, someone was looking into the Windows side of this functionality (wrowe?). If no one has any objections, I'd like to try to implement the same algorithm that Python uses in APR. I don't know how os2 and netware fit into the picture -- I'd need some guidance there. As for location and naming, perhaps apr_filepath_temp() would cause the least disturbance (since all 4 OS-directories under file_io/ have filepath.c files). Thoughts? Volunteers to do this for me? :-)
http://mail-archives.apache.org/mod_mbox/apr-dev/200211.mbox/%3Cm3znsd77xf.fsf@localhost.localdomain%3E
CC-MAIN-2015-11
refinedweb
292
58.79
Known issues with Microsoft Graph This article describes known issues with Microsoft Graph. For information about the latest updates, see the Microsoft Graph changelog. Users No instant access after creation Users can be created immediately through a POST on the user entity. An Office 365 license must first be assigned to a user, in order to get access to Office Office 365 unified API preview, or. Microsoft Teams GET /teams and POST /teams are not supported See list all teams and list your teams to get a list of teams. See create team for creating teams. Missing teams in list all teams Some teams that were created in the past but haven't been used recently by a Microsoft Teams user aren't listed by list all teams. New teams will be listed. Certain old teams don't have a resourceProvisioningOptions property that contains "Team", which is set on newly created teams and teams that are visited in Microsoft Teams. In the future, we will set resourceProvisioningOptions on existing teams that have not been opened in Microsoft Teams. Examples of group features that support only delegated permissions: - Group conversations, events, photo - External senders, accepted or rejected senders, group subscription - User favorites and unseen count Policy Using Microsoft Graph to create and name an Office 365 group bypasses any Office 365 group policies that are configured through Outlook Web App. Adding and getting attachments of group posts Adding attachments to group posts, listing and getting attachments of group posts currently return the error message "The OData request is not supported." A fix has been rolled out for both the /v1.0 and /beta versions, and is expected to be widely available by the end of January 2016. Accessing a shared calendar When attempting to access events in a calendar that has been shared by another user using the following operation: GET \users('{id}')\calendars('{id}')\events You may get HTTP 500 with the error code ErrorInternalServerTransientError. The error occurs because: - Historically, there are two ways that calendar sharing has been implemented, which, for the purpose of differentiating them, are referred to as the "old" approach and "new" approach. - The new approach is currently available for sharing calendars with view or edit permissions, but not with delegate permissions. - You can use the calendar REST API to view or edit shared calendars only if the calendars were shared using the new approach. - You cannot use the calendar REST API to view or edit such calendars (or their events) if the calendars were shared using the old approach. If a calendar was shared with view or edit permissions but using the old approach, you can now work around the error and manually upgrade the calendar sharing to use the new approach. Over time, Outlook will automatically upgrade all shared calendars to use the new approach, including calendars shared with delegate permissions. To manually upgrade a shared calendar to use the new approach, follow these steps: - The recipient removes the calendar that was previously shared to them. - The calendar owner re-shares the calendar in Outlook on the web, Outlook on iOS, or Outlook on Android. - The recipient re-accepts the shared calendar using Outlook on the web. (It will be possible to use other Outlook clients soon.) - The recipient verifies that the calendar has been re-shared successfully using the new approach by being able to view the shared calendar in Outlook on iOS or Outlook on Android. A calendar shared with you in the new approach appears as just another calendar in your mailbox. You can use the calendar REST API to view or edit events in the shared calendar, as if it's your own calendar. As an example: GET \me\calendars('{id}')\events. onlineMeetingUrl property support for Microsoft Teams Currently, the onlineMeetingUrl property of a Skype meeting event would indicate the online meeting URL. However, that property for a Microsoft Teams meeting event is set to null. Calls and online meetings Note Calling and online meetings are currently in preview and are available only in the Microsoft Graph beta endpoint. - Navigation path /applications/{id}is not supported. Navigating through the global applications node to the application, even your own, is not allowed. Please use the /appnavigation only.Folder/. Drives, files and content streaming - First time access to a user's personal drive through the Microsoft Graph before the user accesses their personal site through a browser leads to a 401 response. Query parameter limitations - Multiple namespaces are not supported. - GETs on $refand casting. Delta query - OData context is sometimes returned incorrectly when tracking changes to relationships. - Schema extensions (legacy) are not returned with $select statement, but are returned without $select. - Clients cannot track changes to open extensions or registered schema extensions. Application and servicePrincipal API changes. Filtering on schema extension properties not supported on all entity types Filtering on schema extension properties (using the $filter expression) is not supported for Outlook entity types - contact, event, message, or post., please refer to the Microsoft Graph or Azure AD Graph blog post on the features that are only available through Azure AD Graph API. Feedback Send feedback about:
https://docs.microsoft.com/en-us/graph/known-issues?toc=./ref/toc.json&view=graph-rest-1.0
CC-MAIN-2019-22
refinedweb
855
52.6
Abstract A proposal for a new Atom construct, "template" whose contents are opaque to the protocol but can be interpreted by clients and servers using the media type of the template. Status New Rationale Many content management systems also allow the management of templates which determine how content will be transformed into target file formats such as HTML, XML, RDF, PostScript, plain text, and others. This proposal models templates in a simple and general way. For many publishers, editing templates is of a piece with editing content: both are simply ways to update a website or other media product. Providing clients with a way to manipulate templates allows them to offer their publisher-users more complete control over those products. Proposal Additions to the format document X.Y Template Construct A Template construct is an element whose content is a template in some templating language, which can be transformed into an output byte stream in a particular format. The content of the element constitutes the "template." X.Y.Z1 "name" attribute A template construct MAY contain a "name" attribute, which contains a human-readable name for the template, allowing it to be distinguished from others in a list. The value of a name attribute is plain text. X.Y.Z2 "type" attribute A media type identifying the template language in which the template should be interpreted. [[ stuff about what media types make sense here ]] X.Y.Z3 "mode" Attribute Template constructs MAY have a "mode" attribute, whose value indicates the method used to encode the content. When present, this attribute's value MUST be listed below. If not present, processors MUST behave as if its value were "xml". "xml": A mode attribute with the value "xml" indicates that the element's content is inline xml (for example, namespace-qualified XHTML). "escaped": A mode attribute with the value "escaped" indicates that the element's content is escaped, as defined in [W3C.REC-xml-20040204] (section 4.6). Processors MUST unescape the element's content before considering it as content of the indicated media type. Additions to the protocol document U.V Operations on Templates A client can fetch or update a template construct through HTTP requests in the Atom protocol. [[ describe GET / PUT or SOAP-style ]]. A server MAY deny a template update operation based on its attributes or content by returning a status code of 409 Conflict. Impacts Notes Still open is the question of how a client can discover the set of templates stored by the Atom server. An alternative would be to re-use the Content construct for templates: PaceTemplatesAreContentConstructs.
http://www.intertwingly.net/wiki/pie/PaceTemplateConstruct
crawl-003
refinedweb
432
54.83
Detect movement, then scare the heck out of an intruder with a high pitched alarm sounds and flashing lights. Does that sound fun? Of course it does. That’s the goal of today’s Arduino project, suitable for beginners. We’ll be writing completely from scratch and testing as we go along so you can hopefully get some idea of how it’s all being done rather than simply installing something I’ve already made. Disclaimer: this isn’t going to actually protect your house. It might give your sister a nasty shock when she sneaks into your room though. You’ll need: - An Arduino - Ultrasonic “ping” sensor, I’m using HC-SR04 A PIR would be better, but those are expensive. A ping sensor can be placed surreptitiously in a doorway and still serve the same basic job, and is only $5 - A piezo buzzer - LED strip light, with the same wiring we used back in this project. As you’re wiring up this project, don’t remove everything each time — just keep building on the last block. By the time you get to “Coding The Alarm System” section, you should have all the bits and pieces wired up, looking something like this: Flashing Lights Use the wiring diagram from this project to hook up your LED strip; don’t change the pins, as we need PWM output. Use this code to quickly test your wiring. If all goes well, you should have this: Distance Sensor On the SR04 module, you’ll find 4 pins. VCC and GND go to +5V rail and ground respectively; TRIG is the pin used to send a sonar signal, put this on pin 6; ECHO is used to read the signal back (and therefore calculate the distance) — put this on 7. To make things incredibly simple, there’s a library we can use called NewPing. Download and place in your Arduino’s Library folder and restart the IDE before continuing. Test using this code; open up the serial monitor and make sure the speed is set to 115200 baud. With any luck, you should see some distance measurements being send back to you at a pretty high speed. You may find a variance of 1 or 2 centimeters, but this is fine. Try running your hand in front of the sensor, moving it up and down to observe the changing readings. The code should be fairly simply to understand. There are a few declaration of relevant pins at the start, including a maximum distance – this may vary according to the exact sensor you have, but as long as you’re able to get less than 1 meter readings accurately, you should be fine. In the loop of this test app, we use the ping() function to send out a sonar ping, getting back a value in milliseconds of how long it took for the value to return. To make sense of this, we use the NewPing libraries built in constant of US_ROUNDTRIP_CM, which defines how many microseconds it takes to go a single centimeter. There’s also a 50 ms delay between pings to avoid overloading the sensor. Piezo Alarm The Piezo crystal sensor is a simple and cheap buzzer, and we can use a PWM pin 3 to make different tones. Connect one wire to pin 3, one to ground rail – it doesn’t matter which. The only way to kill the rather obnoxious and loud alarm is to pull the plugs. The code is a little complex to explain, but it involves using sine waves to generate a distinctive sound. Tweak the numbers to play with different tones. Coding The Alarm System Now that we have all the pieces of this puzzle, let’s combine them together. Go ahead and make a new sketch, called Alarm. Start by combining all the variables and pin definitions we’ve in the test examples until now. #include <NewPing.h> // Select which PWM-capable pins are to be used. #define RED_PIN 10 #define GREEN_PIN 11 #define BLUE_PIN 9 #define TRIGGER_PIN 6 // Arduino pin tied to trigger pin on the ultrasonic sensor. #define ECHO_PIN 7 // Arduino pin tied to echo pin on the ultrasonic sensor. #define MAX_DISTANCE 100 // Maximum distance we want to ping for (in centimeters). #define ALARM 3 float sinVal; int toneVal; Begin by writing a basic setup() function – we’ll only deal with the lights for now. I’ve added a 5 second delay before the main loop is started to give us some time to get out of the way if needed. void setup(){ //set pinModes for RGB strip pinMode(RED_PIN,OUTPUT); pinMode(BLUE_PIN,OUTPUT); pinMode(GREEN_PIN,OUTPUT); //reset lights analogWrite(RED_PIN,0); analogWrite(BLUE_PIN,0); analogWrite(RED_PIN,0); delay(5000); } Let’s use a helper function that allows us to quickly write a single RGB value out to the lights. //helper function enabling us to send a colour in one command void color (unsigned char red, unsigned char green, unsigned char blue) // the color generating function { analogWrite(RED_PIN, red); analogWrite(BLUE_PIN, blue); analogWrite(GREEN_PIN, green); } Finally, our loop for now is going to consist of a simple color flash between red and yellow (or, whatever you want your alarm to be — just change the RGB values). void loop(){ color(255,0,0); //red delay(100); color(255,255,0); //yellow delay(100); } Upload and test that to ensure you’re on the right track. Now, let’s integrate the distance sensor to trigger those lights only when something comes within, say, 50 cm (just less than the width of a door frame). We’ve already defined the right pins and imported the library, so before your setup() function add the following line to instantiate it: NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance. Underneath that, add a variable to store the state of the alarm being triggered or not, defaulting to false, of course. boolean triggered = false; Add a line to the setup() function so we can monitor the output on serial and debug. Serial.begin(115200); // Open serial monitor at 115200 baud to see ping results. Next, let’s rename the current loop to alarm() – this is what’s will be called if the alarm has been tripped. void alarm(){ color(255,0,0); //red delay(100); color(255,255,0); //yelow delay(100); } Now create a new loop() function, one in which we fetch a new ping, read the results, and trigger the alarm if something is detected within the meter range. void loop(){ if(triggered == true){ alarm(); } else{ delay(50);// Wait 50ms between pings (about 20 pings/sec). 29ms should be the shortest delay between pings. unsigned int uS = sonar.ping(); // Send ping, get ping time in microseconds (uS). unsigned int distance = uS / US_ROUNDTRIP_CM; Serial.println(distance); if(distance < 100){ triggered = true; } } } Let me explain the code briefly: - Start by checking to see if the alarm has been triggered, and if so, fire off the alarm function (just flashing the lights at the moment). - If it’s not triggered yet, get the current reading from the sensor. - If the sensor is reading <100 cm, something has padded the beam (adjust this value if it’s triggering too early for you, obviously). Give it a trial run now, before we add the annoying piezo buzzer. Working? Great. Now let’s add that buzzer back. Add pinMode to the setup() routine. pinMode(ALARM, OUTPUT); Then add the piezo buzzer loop to the alarm() function: for (int x=0; x<180; x++) { // convert degrees to radians then obtain sin value sinVal = (sin(x*(3.1412/180))); // generate a frequency from the sin value toneVal = 2000+(int(sinVal*1000)); tone(ALARM, toneVal); } If you try to compile at this point, you’re going to run into an error — I’ve left this in deliberately so you can see some common issues. In this case, both the NewPing and standard tone library use the same interrupts — they are conflicting basically, and there’s not a lot you can do to fix it. Oh dear. No worries though. It’s a common problem, and someone has a solution already — download and add this NewTone to your Arduino Libraries folder. Adjust the beginning of your program to include this: #include <NewTone.h> And adjust the line: tone(ALARM, toneVal); to NewTone(ALARM, toneVal); instead. That’s it. Set your alarm up in the doorway of your bedroom for the next hapless would-be burglar. Or, a dopey dog, which seemed completely unfazed by the alarm. Having trouble with the code? Here’s the complete app. If you’re getting random errors, try pasting them below and I’ll see if I can help. Image credit: Fire Alarm via Flickr how did wire all these up to aduino
http://www.makeuseof.com/tag/how-to-make-a-simple-arduino-alarm-system/
CC-MAIN-2016-22
refinedweb
1,466
69.41
Froody::Dispatch - Easily call Froody Methods use Froody::Dispatch; my $dispatcher = Froody::Dispatch->new(); my $response = $dispatcher->call( "foo.bar.baz", fred => "wilma" ); or, as a client: $client = Froody::Dispatch->new; # uses reflection to load methods from the server $client->add_endpoint( "uri" ); # look mah, no arguments! $rsp = $invoker->invoke($client, 'service.wibble'); # ok, take some arguments then. $rsp = $invoker->invoke($client, 'service.devide', divisor => 1, dividend => 2); # alternatively, args can be passed as a hashref: $args = { devisor => 1, devidend => 2 }; $rsp = $invoker->invoke($client, 'service.devide', $args); This class handles dispatching Froody Methods. It's used both from within the servers where you don't want to have to worry about the little details and as a client. Create a new instance of the dispatcher Parses a list of files, urls, modules, method filter expressions, and paths. What you're able to do: -MModule::Name (or just Module::Name) - request that that module is registered. -Ipath will inject that path into %INC filename will extract all module names from within the path foo.bar.baz will be interpreted as a filter expression. Returns a hashref of include paths, modules, urls, and filters Configures the dispatcher. Takes { filters => [], modules => [], urls => [], includes => [] } and ensures that the dispatcher only contains methods that are present in either the modules list or the urls list, and only if those modules match one or more of the filters in the filters list. If the filters list is empty, then all methods will be registered. Adds an implementation's methods to this dispatcher. Parses arguments with parse_cli, and then calls config with the arguments. This is intended to be used for parsing command line options, and directly creating configuration details Returns a dispatch object, and the parsed options. DEPRECATED: This is harmful -- you end up with random methods in your namespaces The first time this method is called it creates a default repository by trawling through all loaded modues and checking which are subclasses of Froody::Implementation. If you're running this in a mod_perl handler you might want to consider calling this method at compile time to preload all the classes. Calls $method with $invoker. If $invoker or $method are not instances of Froody::Invoker and Froody::Method respectively then this method will attempt to discover them in the registered list of endpoints and the method repository. Returns a Froody::Response object. Registers all methods from a remote repository within this one. TODO: add regex filtering of methods. Load method and errortype specifications from a named endpoint Retrieve a method Causes a dispatch to a froody method to happen. At a minimum you need to pass in a method name: my $response = Froody::Dispatch->new->dispatch( method => "foo.bar.bob" ); You can also pass in parameters: my $response = Froody::Dispatch->new->dispatch( method => "foo.bar.bob", param => { wibble => "wobble" }, ); Which repository this class uses and how errors are reported depends on the methods defined below. Call a method (optionally with arguments) and return a Froody::Response::Terse response, as described in Froody::DataFormats. This is a thin wrapper for the ->dispatch() method. Provides a list of Froody::Method objects. Optionally, the methods are filtered by a list of filter patterns. If Froody::Method::config was called with a list of filters, the methods will be pre-filtered by that list. If you wish to override the configured filters, call this method with undef, or use the repository methods directly. Get/set the repository that we're calling methods on. If this is set to undef (as it is by default) then we will use the default repository (see above.) Get/set chained accessor that sets the style of errors that this should use. By default this is response, which causes all errors to be converted into valid responses. The other option is throw which turns all errors into Froody::Error objects which are then immediatly thrown..
http://search.cpan.org/~fotango/Froody-42.041_2/lib/Froody/Dispatch.pm
CC-MAIN-2017-04
refinedweb
650
56.35
In 1637, Pierre de Fermat wrote in the margin of a book that he had a proof of his famous "Last Theorem": If $A^n + B^n = C^n$, where $A, B, C, n$ are positive integers then $n \le 2$. Centuries passed before Andrew Beal, a businessman and amateur mathematician, made his conjecture in 1993: If $A^x + B^y = C^z$, where $A, B, C, x, y, z$ are positive integers and $x, y, z$ are all greater than $2$, then $A, B$ and $C$ must have a common prime factor. Andrew Wiles proved Fermat's theorem in 1995, but Beal's offer of \$1,000,000 for a proof or disproof of his conjecture remains unclaimed. I don't have the mathematical skills of Wiles, so all I can do is write a program to search for counterexamples. I first wrote that program in 2000, and my name got associated with Beal's Conjecture, which means I get a lot of emails with purported proofs or counterexamples (many asking how they can collect their prize money). So far, all the emails have been wrong. This page catalogs some of the more common errors—including two mistakes of my own—and shows an updated program. $A, B, C, x, y, z$ are positive integers $x, y, z > 2$ $A^x + B^y = C^z$ $A, B, C$ have no common prime factor. (If you think you have a valid counterexample, before you share it with Andrew Beal, or me, or anyone else, you can check it with my Online Beal Counterexample Checker.) from math import gcd #### In Python versions < 3.5, use "from fractions import gcd" A, B, C = 60000000000000000000, 70000000000000000000, 82376613842809255677 x = y = z = 3. A ** x + B ** y == C ** z and gcd(gcd(A, B), C) == 1 True WOW! The result is True! Is this a real counterexample to Beal? And also a disproof of Fermat? Alas, it is not. Notice the decimal point in " 3.", indicating a floating point number, with inexact, limited precision. Change the inexact " 3." to an exact " 3" and the result changes to " False". Below we see that the two sides of the equation are the same for the first 18 digits, but differ starting with the 19th: (A ** 3 + B ** 3, C ** 3) (559000000000000000000000000000000000000000000000000000000000, 559000000000000000063037470301555182935702892172500189973733) They say "close" only counts in horseshoes and hand grenades, and if you threw two horseshoes at a stake on the planet Kapteyn-b (a possibly habitable and thus possibly horseshoe-playing exoplanet 12.8 light years from Earth) and the two paths differed in the 19th digit, the horseshoes would end up less than an inch apart. That's really, really close, but close doesn't count in number theory. Speaking of close: in two different episodes of The Simpsons, close counterexamples to Fermat's Last Theorem are shown: $1782^{12} + 1841^{12} = 1922^{12}$ and $3987^{12} + 4365^{12} = 4472^{12}$. These were designed by Simpsons writer David X. Cohen to be correct up to the precision found in most handheld calculators. Cohen found the equations with a program that must have been something like this: from itertools import combinations def simpsons(bases, powers): """Find the integers (A, B, C, n) that come closest to solving Fermat's equation, A ** n + B ** n == C ** n. Let A, B range over all pairs of bases and n over all powers.""" equations = ((A, B, iroot(A ** n + B ** n, n), n) for A, B in combinations(bases, 2) for n in powers) return min(equations, key=relative_error) def iroot(i, n): "The integer closest to the nth root of i." return int(round(i ** (1./n))) def relative_error(equation): "Error between LHS and RHS of equation, relative to RHS." (A, B, C, n) = equation LHS = A ** n + B ** n RHS = C ** n return abs(LHS - RHS) / RHS simpsons(range(1000, 2000), [11, 12, 13]) (1782, 1841, 1922, 12) simpsons(range(3000, 5000), [12]) (3987, 4365, 4472, 12) beal2.0 and 2.1¶ In October 2015 I looked back at my original program from 2000. I ported it from Python 1.5 to 3.5 (by putting parens around the argument to long = int). It runs 250 times faster today, a tribute to both computer hardware engineers and the developers of the Python interpreter. I found that I had misunderstood the problem in 2000. I thought that, by definition, $A$ and $B$ could not have a common factor, but actually, the definition of the conjecture only rules out examples where all three of $A, B, C$ share a common factor. I rewrote the program to reflect that, but then Mark Tiefenbruck (and later Edward P. Berlin and Shen Lixing) wrote to point out that my original program was actually correct, not by definition, but by derivation: if $A$ and $B$ have a commmon prime factor $p$, then the sum of $A^x + B^y$ must also have that factor $p$, and since $A^x + B^y = C^z$, then $C^z$ and hence $C$ must have the factor $p$. So I was wrong twice—I originally failed to understand the problem completely, and then I failed to recognize the optimization—and that means the original program was correct. Mark Tiefenbruck also suggested another optimization: only consider exponents that are odd primes, or 4. The idea is that a number like 512 can be expressed as either $2^9$ or $8^3$, and my program doesn't need to consider both. In general, any time we have a composite exponent, such as $b^{qp}$, where $p$ is prime, we should ignore $A=b, x=qp$, and instead consider only $A=b^q, x=p$. There's one complication to this scheme: 2 is a prime, but 2 is not a valid exponent for a Beal counterexample. So we will allow 4 as an exponent, as well as all odd primes up to max_x. Here is the complete, updated program: from math import gcd, log from itertools import combinations, product def beal(max_A, max_x): """See if any A ** x + B ** y equals some C ** z, with gcd(A, B) == 1. Consider any 1 <= A,B <= max_A and x,y <= max_x, with x,y prime or 4.""" Apowers = make_Apowers(max_A, max_x) Czroots = make_Czroots(Apowers) for (A, B) in combinations(Apowers, 2): if gcd(A, B) == 1: for (Ax, By) in product(Apowers[A], Apowers[B]): Cz = Ax + By if Cz in Czroots: C = Czroots[Cz] x, y, z = exponent(Ax, A), exponent(By, B), exponent(Cz, C) print('{} ** {} + {} ** {} == {} ** {} == {}' .format(A, x, B, y, C, z, C ** z)) def make_Apowers(max_A, max_x): "A dict of {A: [A**3, A**4, ...], ...}." exponents = exponents_upto(max_x) return {A: [A ** x for x in (exponents if (A != 1) else [3])] for A in range(1, max_A+1)} def make_Czroots(Apowers): return {Cz: C for C in Apowers for Cz in Apowers[C]} def exponents_upto(max_x): "Return all odd primes up to max_x, as well as 4." exponents = [3, 4] if max_x >= 4 else [3] if max_x == 3 else [] for x in range(5, max_x, 2): if not any(x % p == 0 for p in exponents): exponents.append(x) return exponents def exponent(Cz, C): """Recover z such that C ** z == Cz (or equivalently z = log Cz base C). For exponent(1, 1), arbitrarily choose to return 3.""" return 3 if (Cz == C == 1) else int(round(log(Cz, C))) It takes less than a second to verify that there are no counterexamples for combinations up to $100^{100}$, a computation that took Andrew Beal thousands of hours on his 1990s-era computers: %time beal(100, 100) CPU times: user 352 ms, sys: 2.2 ms, total: 354 ms Wall time: 354 ms The execution time goes up roughly with the square of max_A, so with 5 times more A values, this computation takes about 25 times longer: %time beal(500, 100) CPU times: user 10.8 s, sys: 143 ms, total: 11 s Wall time: 11.1 s bealWorks¶ The function beal first does some precomputation, creating two data structures: Apowers: a dict of the form {A: [A**3, A**4, ...]}giving the nonredundant powers (prime and 4th powers) of each base, A, from 1 to max_x. Czroots: a dict of {C**z : C}pairs, giving the zth root of each power in Apowers. Then we consider all combinations of two bases, A and B, from Apowers. Here is a very small example Apowers table: Apowers = make_Apowers(6, 10) Apowers {1: [1], 2: [8, 16, 32, 128], 3: [27, 81, 243, 2187], 4: [64, 256, 1024, 16384], 5: [125, 625, 3125, 78125], 6: [216, 1296, 7776, 279936]} Consider the combination where A is 3 and B is 6. Of course gcd(3, 6) == 3, so the program would not consider them further, but imagine if they did not share a common factor. Then we would look at all possible Ax + By sums, for Ax in [27, 81, 243, 2187] and By in [216, 1296, 7776, 279936]. One of these would be 27 + 216, which sums to 243. We look up 243 in Czroots: Czroots = make_Czroots(Apowers) print(Czroots) Czroots[243] {128: 2, 1: 1, 1296: 6, 1024: 4, 32: 2, 8: 2, 64: 4, 2187: 3, 78125: 5, 256: 4, 16384: 4, 16: 2, 81: 3, 279936: 6, 243: 3, 3125: 5, 625: 5, 216: 6, 7776: 6, 27: 3, 125: 5} 3 We see that 243 is in Czroots, with value 3, so this would be a counterexample (except for the common factor). The program uses the exponent function to recover the values of x, y, z, and prints the results. Can we gain confidence in the program? It is difficult to test beal, because the expected output is nothing, for all known inputs. One thing we can do is verify that beal finds cases like 3 ** 3 + 6 ** 3 == 3 ** 5 == 243 that would be a counterexample except for the common factor 3. We can test this by temporarily replacing the gcd function with a mock function that always reports no common factors: def gcd(a, b): return 1 beal(100, 100) Let's make sure all those expressions are true: } {True} I get nervous having an incorrect version of gcd around; let's change it back, quick! from math import gcd beal(100, 100) We can also provide some test cases for the subfunctions of beal: def tests(): assert make_Apowers(6, 10) == { 1: [1], 2: [8, 16, 32, 128], 3: [27, 81, 243, 2187], 4: [64, 256, 1024, 16384], 5: [125, 625, 3125, 78125], 6: [216, 1296, 7776, 279936]} assert make_Czroots(make_Apowers(5, 8)) == { 1: 1, 8: 2, 16: 2, 27: 3, 32: 2, 64: 4, 81: 3, 125: 5, 128: 2, 243: 3, 256: 4, 625: 5, 1024: 4, 2187: 3, 3125: 5, 16384: 4, 78125: 5} Czroots = make_Czroots(make_Apowers(100, 100)) assert 3 ** 3 + 6 ** 3 in Czroots assert 99 ** 97 in Czroots assert 101 ** 100 not in Czroots assert Czroots[99 ** 97] == 99 assert exponent(10 ** 5, 10) == 5 assert exponent(7 ** 3, 7) == 3 assert exponent(1234 ** 999, 1234) == 999 assert exponent(12345 ** 6789, 12345) == 6789 assert exponent(3 ** 10000, 3) == 10000 assert exponent(1, 1) == 3 assert exponents_upto(2) == [] assert exponents_upto(3) == [3] assert exponents_upto(4) == [3, 4] assert exponents_upto(40) == [3, 4, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37] assert exponents_upto(100) == [ 3, 4, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] assert gcd(3, 6) == 3 assert gcd(3, 7) == 1 assert gcd(861591083269373931, 94815872265407) == 97 assert gcd(2*3*5*(7**10)*(11**12), 3*(7**5)*(11**13)*17) == 3*(7**5)*(11**12) return 'tests pass' tests() 'tests pass' The program is mostly straightforward, but relies on the correctness of these arguments: combinationswithout replacements from the table? In other words, are we sure there are no solutions of the form $A^x + A^x = C^z$? Yes, we can be sure, because then $2\;A^x = C^z$, and all the factors of $A$ would also be factors of $C$. Czrootstable? Consider that $81 = 3^4 = 9^2$. We put {81: 3}in the table and discard {81: 9}, because any number that has 9 as a factor will always have 3 as a factor as well, so 3 is all we need to know. But what if a number could be formed with two bases where neither was a multiple of the other? For example, what if $2^7 = 5^3 = s$; then wouldn't we have to have both 2 and 5 as values for $s$ in the table? Fortunately, that can never happen, because of the fundamental theorem of arithmetic. exponentfunction that was not caught by the tests? Possibly; but exponentis not used to find counterexamples, only to print them, so any such error wouldn't cause us to miss a counterexample. beal(6, 10), we are no longer testing $512$ as a value of $A$ or $B$, even though $512 = 2^9$ and both $2$ and $9$ are within range, because the program chooses to express $512$ as $8^3$, and $8$ is not in the specified range. So the program is still correctly searching for counterexamples, but the space that it searches for given max_Aand max_xis different with this optimization. Arithmetic is slow with integers that have thousands of digits. If we want to explore much further, we'll have to make the program more efficient. An obvious improvement would be to do all the arithmetic module some prime number $p$ that fits in one word. Then we know:$$\mbox{if} ~~ A^x + B^y = C^z ~~ \mbox{then} ~~ A^x (\mbox{mod} ~ p) + B^y (\mbox{mod} ~ p) = C^z \;(\mbox{mod} ~ p)$$ So we can do efficient tests modulo $p$, and then do the full arithmetic only for combinations that work modulo $p$. Unfortunately there will be collisions (two numbers that are distinct, but are equal mod $p$), so the tables will have to have lists of values. Here is a simple, unoptimized implementation: from math import gcd from itertools import combinations, product from collections import defaultdict def beal_modp(max_A, max_x, p=2**31-1): """See if any A ** x + B ** y equals some C ** z (mod p), with gcd(A, B) == 1. If so, verify that the equation works without the (mod p). Consider any 1 <= A,B <= max_A and x,y <= max_x, with x,y prime or 4.""" assert p >= max_A Apowers = make_Apowers_modp(max_A, max_x, p) Czroots = make_Czroots_modp(Apowers) for (A, B) in combinations(Apowers, 2): if gcd(A, B) == 1: for (Axp, x), (Byp, y) in product(Apowers[A], Apowers[B]): Czp = Axp + Byp if Czp in Czroots: lhs = A ** x + B ** y for (C, z) in Czroots[Czp]: if lhs == C ** z: print('{} ** {} + {} ** {} == {} ** {} == {}' .format(A, x, B, y, C, z, C ** z)) def make_Apowers_modp(max_A, max_x, p): "A dict of {A: [(A**3 (mod p), 3), (A**4 (mod p), 4), ...]}." exponents = exponents_upto(max_x) return {A: [(pow(A, x, p), x) for x in (exponents if (A != 1) else [3])] for A in range(1, max_A+1)} def make_Czroots_modp(Apowers): "A dict of {C**z (mod p): [(C, z),...]}" Czroots = defaultdict(list) for A in Apowers: for (Axp, x) in Apowers[A]: Czroots[Axp].append((A, x)) return Czroots Here we see that each entry in the Apowers table is a list of (A**x (mod p), x) pairs. For example, $6^7 = 279,936$, so in our (mod 1000) table we have the pair (936, 7) under 6. Apowers = make_Apowers_modp(6, 10, 1000) Apowers {1: [(1, 3)], 2: [(8, 3), (16, 4), (32, 5), (128, 7)], 3: [(27, 3), (81, 4), (243, 5), (187, 7)], 4: [(64, 3), (256, 4), (24, 5), (384, 7)], 5: [(125, 3), (625, 4), (125, 5), (125, 7)], 6: [(216, 3), (296, 4), (776, 5), (936, 7)]} And each item in the Czroots table is of the form {C**z (mod p): [(C, z), ...]}. For example, 936: [(6, 7)]. make_Czroots_modp(Apowers) defaultdict(list, {1: [(1, 3)], 8: [(2, 3)], 16: [(2, 4)], 24: [(4, 5)], 27: [(3, 3)], 32: [(2, 5)], 64: [(4, 3)], 81: [(3, 4)], 125: [(5, 3), (5, 5), (5, 7)], 128: [(2, 7)], 187: [(3, 7)], 216: [(6, 3)], 243: [(3, 5)], 256: [(4, 4)], 296: [(6, 4)], 384: [(4, 7)], 625: [(5, 4)], 776: [(6, 5)], 936: [(6, 7)]}) Let's run the program: %time beal_modp(500, 100) CPU times: user 9 s, sys: 145 ms, total: 9.14 s Wall time: 9.27 s This is a bit faster than the previous version, and the idea is that as we start dealing with much larger integers, this version will be even faster, relatively. I could improve this version by caching certain computations, managing the memory layout better, moving some computations out of loops, considering using multiple primes (as in a Bloom filter), finding a way to parallelize the program, and re-coding in a faster compiled language (such as C++ or Go or Julia). Then I could invest thousands (or millions) of CPU hours searching for counterexamples. But Witold Jarnicki and David Konerding already did that: they wrote a C++ program that built a table of $C^z \;(\mbox{mod} \; p)$ up to $5000^{5000}$, and, in parallel across thousands of machines, searched for $A, B$ up to 200,000 and $x, y$ up to 5,000, but found no counterexamples. On a smaller scale, Edwin P. Berlin searched all $C^z$ up to $10^{17}$ and also found nothing. So I don't think it is worthwhile to continue on that path. This was fun, but I can't recommend anyone spend a serious amount of computer time looking for counterexamples to the Beal Conjecture—the money you invest in computer time would be more than the expected value of your prize winnings. I suggest you work on a proof rather than a counterexample, or work on some other interesting problem instead!
http://nbviewer.jupyter.org/url/norvig.com/ipython/Beal.ipynb
CC-MAIN-2017-22
refinedweb
3,013
63.73
Hi Stefano, thanks for tackling this! On removal of glob_sequence pattern we can also remove is_glob function (and therefore adjust read_probe). Just writing it here as a reminder that we not forget about that. Stefano Sabatini wrote: > Allow to override the default 'glob_sequence' value, which is deprecated > in favor of the new 'glob' and 'sequence' options. > > The new pattern types should be easier on the user since they are more > predictable than 'glob_sequence', and do not require awkward escaping. > > FIXME: bump micro before pushing > --- > libavformat/img2dec.c | 34 +++++++++++++++++++++++++++++++++- > 1 files changed, 33 insertions(+), 1 deletions(-) > > diff --git a/libavformat/img2dec.c b/libavformat/img2dec.c > index 0443b1a..a108656 100644 > --- a/libavformat/img2dec.c > +++ b/libavformat/img2dec.c > @@ -41,6 +41,8 @@ > > #endif /* HAVE_GLOB */ > > +enum PatternType { PT_GLOB_SEQUENCE, PT_GLOB, PT_SEQUENCE }; > + > typedef struct { > const AVClass *class; /**< Class for private options. */ > int img_first; > @@ -54,6 +56,7 @@ typedef struct { > char *video_size; /**< Set by a private option. */ > char *framerate; /**< Set by a private option. */ > int loop; > + enum PatternType pattern_type; > int use_glob; > #if HAVE_GLOB > glob_t globstate; > @@ -233,6 +236,9 @@ static int read_header(AVFormatContext *s1) > } > > if (!s->is_pipe) { > + if (s->pattern_type == PT_GLOB_SEQUENCE) { > + av_log(s1, AV_LOG_WARNING, "Pattern type 'glob_sequence' is deprecated: " > + "use -pattern_type 'glob' or 'sequence' instead\n"); > s->use_glob = is_glob(s->path); > if (s->use_glob) { I would prefer to only print the warning if is_glob() is true to not spam users not aware of the globbing functionality. Also a more ffmpeg tool agnostic message like this could be used: "Using deprecated pattern_type 'glob_sequence' for globbing: port your pattern to 'glob' because 'glob_sequence' will be removed\n" If you want to keep the hint on how it is exposed as ffmpeg option you could still keep the spirit and reduce to say that the user needs to adapt to "-pattern_type glob" as it is unlikely the user wanted to use sequence if we got into "if (s->use_glob)" branch. > #if HAVE_GLOB > @@ -260,7 +266,9 @@ static int read_header(AVFormatContext *s1) > first_index = 0; > last_index = s->globstate.gl_pathc - 1; > #endif > - } else { > + } > + } > + if ((s->pattern_type == PT_GLOB_SEQUENCE && !s->use_glob) || s->pattern_type == PT_SEQUENCE) { > if (find_image_range(&first_index, &last_index, s->path, > s->start_number, s->start_number_range) < 0) { > av_log(s1, AV_LOG_ERROR, > @@ -268,7 +276,25 @@ static int read_header(AVFormatContext *s1) > s->path, s->start_number, s->start_number + s->start_number_range - 1); > return AVERROR(ENOENT); > } > + } else if (s->pattern_type == PT_GLOB) { > + int gerr; > + if (!HAVE_GLOB) { > + av_log(s1, AV_LOG_ERROR, "Pattern type 'glob' was selected but is not supported by libavformat\n"); nit: "Pattern type 'glob' was selected but is not supported by this libavformat build\n"); or "is not available in this libavformat build" or similar. Leave it as is or change to your liking. [...] Besides the missing #ifdef Nicolas mentioned, the patch looks fine to me. I also like this solution better then the current solution. A plus are the clear failure modes. Alexander
http://ffmpeg.org/pipermail/ffmpeg-devel/2012-August/129089.html
CC-MAIN-2014-35
refinedweb
464
53.51
FAQs Search Recent Topics Flagged Topics Hot Topics Best Topics Register / Login Win a copy of Modern JavaScript for the Impatient this week in the Server-Side JavaScript and NodeJS forum! Paul Ralph Ranch Hand 315 46 Threads 0 Cows since Aug 10, 2000 (315/100) Number Threads Started (46 (315/10) Number Threads Started (46/10) Number Likes Received (1/3) Number Likes Granted (2/3) Set bumper stickers in profile (0/1) Set signature in profile Set a watch on a thread Save thread as a bookmark Create a post with an image (0/1) Recent posts by Paul Ralph Bad Programming Practices 101: Testing Prerequisites Hi Karl! I've been looking at the book's previews and they're great. I especially love the sense of humor. I've done some programming, but I am an absolute beginner when it comes to testing. Do you have any recommendations on the best way to learn the prerequisites for the testing chapter? Thanks! show more 2 years ago Beginning Java Am I supposed to feel this stupid? Is James Gosling toying with us?! Programming is difficult show more 2 years ago Beginning Java Murach's Python Programming: new to Python, not to programming If a person already knows Java (but nothing about Python), how would you recommend that person read your book (as opposed to a pure beginner)? show more 3 years ago Jython/Python How different is it? Yeah, I certified a *long* time ago. Over a decade. I know that's way too far back to go into specifics, but how has the exam changed over time in general? Any predictions on future emphasis? show more 4 years ago Programmer Certification (OCPJP) Assignment Log Submitted OOP-2. I don't know how many years it's been since I was on the drive. show more 7 years ago Cattle Drive Simple HashMap OK, got it. Thanks! show more 7 years ago Beginning Java Simple HashMap OK, I got the basic HashMap to work. Now I'm trying to get a method to see the HashMap. NumberTable is null inside convert() and I have no idea why. (I left out some of the code that's outside of the class) import java.util.*; class Number { public static HashMap NumberTable ; static { HashMap<String, Integer> NumberTable = new HashMap<>(); // Put elements to the map NumberTable.put("one", 1); System.out.println(NumberTable.get("one")); NumberTable.put("two", 2); NumberTable.put("three", 3); NumberTable.put("four", 4); NumberTable.put("six", 6); NumberTable.put("seven", 7); NumberTable.put("eight", 8); NumberTable.put("nine", 9); NumberTable.put("ten", 10); NumberTable.put("eleven", 11); NumberTable.put("twelve", 12); NumberTable.put("thirteen", 13); NumberTable.put("fourteen", 14); NumberTable.put("fifteen", 15); NumberTable.put("sixteen", 16); NumberTable.put("seventeen", 17); NumberTable.put("eighteen", 18); NumberTable.put("nineteen", 19); NumberTable.put("twenty", 20); NumberTable.put("thirty", 30); NumberTable.put("forty", 40); NumberTable.put("fifty", 50); NumberTable.put("sixty", 60); NumberTable.put("seventy", 70); NumberTable.put("eighty", 80); NumberTable.put("ninty", 90); } Integer converted; String whole; String tens; String ones; Number() { //converted = 0; //whole = "I'm whole"; //tens = "I'm tens"; //ones = "I'm ones"; System.out.println("Number has been constructed!"); this.converted = 0; } public void convert() { //Integer FirstInt = NumberTable.get(args[0]); //Integer SecondInt = NumberTable.get(args[1]); System.out.println("Trying to convert"); System.out.print("This.whole="); System.out.println(this.whole); //System.out.println(result); System.out.println("This.converted="+this.converted); System.out.println(this.NumberTable.get("one")); this.converted = (Integer)NumberTable.get(this.whole); return; //this.converted; NumberToConvert.get(NuberTable.whole); } //void construct() //{ } Thanks show more 7 years ago Beginning Java Simple HashMap Not sure if this goes here or in the Cattle Drive. I'm trying to develop a HashMap for a Cattle Drive assignment. import java.util.*; class HashMapDemo { public static void main(String args[]) { // Create a hash map HashMap NumberTable = new HashMap(); // Put elements to the map NumberTable.put("one", new Integer(1)); NumberTable.put("two", new Integer(2)); NumberTable.put("three", new Integer(3)); NumberTable.put("four", new Integer(4)); NumberTable.put("five", new Integer(5)); Object FirstInt = NumberTable.get(args[0]); Object SecondInt = NumberTable.get(args[1]); System.out.println(FirstInt); System.out.println(SecondInt); //FirstInt = Integer(FirstInt); //SecondInt = Integer(SecondInt); //System.out.println(FirstInt * SecondInt); } } Why am I getting Objects instead of Integers? Is there a way to cast the Objects into Integers? Am I even asking the right questions? Thanks show more 7 years ago Beginning Java Trying to come back after many years OK! Java 7 is installed. Thanks for your help. show more 7 years ago Cattle Drive Trying to come back after many years)? show more 7 years ago Cattle Drive Trying to come back after many years Thanks! It's working now. show more 7 years ago Cattle Drive Trying to come back after many years I downloaded and installed jdk-6u37-windows-i586.exe, but I must be doing something wrong (like downloading the wrong thing) because my computer doesn't recognize javac. So what should I do? show more 7 years ago Cattle Drive Inactive Students It's been eight years, but I'm thinking about getting back on the horse. Has anything changed in that time? It looks like the assignments are the same. show more 10 years ago Cattle Drive jytsika sorry !! So how do you pronounce it? Does the first syllable rhyme with "hit" or "height"? Do you push the s into the first syallable or leave it in the second? Great name, I just want to get it right. Paul R show more 18 years ago Cattle Drive Want to stop terrorism? Nathan: Bravo! A well-researched and thought-out reply. Ken: You don't need to be condescending to make a thoughtful argument. Comments like "not based on reality", "ridiculous", and "Wake up!" will only make you look bad, not increase the strength of your argument. Paul R show more 18 years ago Meaningless Drivel
https://www.coderanch.com/u/2190/Paul-Ralph
CC-MAIN-2020-40
refinedweb
1,001
62.04
I am not sure of this is teh right forum to ask this question. If not, kindly direct me to the correct forum. I am trying to call a WMI command from a batch file like below: for /f "tokens=2 delims==" %%f in ('wmic /namespace:\some command /value ^| find "="') do set "val=%%f" IF "%val%"==""( echo Null. Exiting... Exit /B ) The issue is if the WMI command didn't find the query result, then I expect the query to return an empty string. The value I am looking for is a string. So the above empty string comparison does not work so I am not getting the "Exiting...." statement. What is wrong here? What more is to done in the comparison? .The value I am looking for doesn't exist and the script says "No instances found" in command prompt. But I need to inform the user in a nice way. How do one does string or empty comparisons in batch files?
https://docs.microsoft.com/en-us/answers/questions/23106/comparing-strings-in-batch-files.html
CC-MAIN-2021-17
refinedweb
163
82.24
Examining the Windows AARD Detection Code A serious message--and the code that produced it Andrew Schulman Andrew is a contributing editor to DDJ, and coauthor of the books Undocumented DOS and Undocumented Windows. Portions of this article are excerpted from Undocumented DOS, Second Edition, (Addison-Wesley, 1993). If you were one of the thousands of Windows 3.1 beta testers, and if you happened to be using DR DOS rather than MS-DOS, you probably butted heads with a seemingly innocuous, yet odd, error message like that in Figure 1. As you'll see, this message is a visible manifestation of a chunk of code whose implementation is technically slippery and evasive. While it's impossible to gauge intent, the apparent purpose of this code is to lay down arbitrary technical obstacles for DOS-workalike programs. The message appears with the release labeled "final beta release (build 61)" (dated December 20, 1991), and with "pre-release build 3.10.068" (January 21, 1992). Similar messages (with different error numbers) are produced in builds 61 and 68 by MN.COM, SETUP.EXE, and by the versions of HIMEM.SYS, SMARTDRV.EXE, and MSD.EXE (Microsoft diagnostics) packaged with Windows. Although the error is non-fatal--that is, the program can continue running--WIN.COM's default behavior is to terminate the program, rather than continue. The message first appeared in build 61, a late-stage beta, and seemed to disappear in the final retail release of Windows 3.1. However, the code that generates the message is present in the retail release, albeit in quiescent form, and executes every time you run Windows 3.1. It's significant that the message, which appeared when running on DR DOS (including Novell's "Novell DOS 7" beta), did not appear when running on MS-DOS or PC-DOS. This raises the question then, what causes the error message? As it turns out, finding the answer required substantial system-level sleuthing, an interesting challenge in its own right. In this article, I'll summarize the results of chasing this chunk of source which I call the "AARD code" (after a plain-text signature that's buried within the otherwise-encrypted code). The full technical details of the chase--the run-time disassembly and decryption of the code--is the subject of a subsequent DDJ article. The raw information is available now in electronic form; see "Availability," page 3. Here, I'll present a pseudocode summary of the AARD code, then focus on the code's effects and implications rather than on precise details of its implementation. Maybe It's a Bug? Whether in spite or because of the books Undocumented DOS and Undocumented Windows, I've often had to publicly defend Microsoft against what I felt were acts of scapegoating from whining competitors (including Novell, Borland, Lotus, and Wordperfect), complaints which remind me of the way some Americans like to blame Japan for what are ultimately our own domestic problems. In fact, much of Microsoft's practice, far from targeting competitor's applications, points in the opposite direction: Microsoft sometimes goes to extremes to maintain compatibility, even with competitor's mistakes (see, for example, the crazy GetAppCompatFlags() function discussed in Chapter 5 of Undocumented Windows). Certainly, it's true that DOS workalikes such as DR DOS have to pretend to be an older version of DOS (DOS 3.31, for instance) if they want to run Windows Enhanced mode. This is because of an undocumented interface shared by the Windows Enhanced mode DOSMGR virtual device driver (VxD) inside WIN386.EXE and MS-DOS 5 and 6. To appear as more recent versions of DOS, would-be clones must reverse-engineer and implement this undocumented protocol.). Consequently, if you didn't know how the error message in Figure 1 was generated, it's reasonable to think that it's the manifestation of yet another bug in Novell DOS. (It wouldn't be the first time company N's bug has been misinterpreted as company M's "deliberate incompatibility.") Defeating a Debugger The first step in discovering why the error message appeared under DR DOS but not MS-DOS was to examine the relevant WIN.COM code. However, the WIN.COM code that produced this message turned out to be XOR encrypted, self-modifying, and deliberately obfuscated--all in an apparent attempt to thwart disassembly. The code also tries to defeat attempts for a debugger to step through it. For example, Figure 2 shows a code fragment in which the INT 1 single-step interrupt is pointed at invalid code (the two bytes FFh FFh), which disables DEBUG. The same is done with INT 2 (nonmaskable interrupt) and INT 3 (debug breakpoint). However, since modern debuggers (I used Nu-Mega's Soft-ICE) run the debugger and debuggee in separate address spaces, the AARD code's revectoring of INTs 1-3 has no affect on the Soft-ICE debugger. In any case, these attempts to throw examination off-track are in themselves revealing. For whatever reasons, while much of it is XOR encrypted, the code contains, as plain-text, a Microsoft copyright notice and the initials "AARD" and "RSAA," perhaps the programmer's initials. A Gauntlet of Tests Figure 3 shows a pseudocode summary of the disassembled code. In essence, this code (which, remember, is part of Windows, a product sold separately from MS-DOS) checks for genuine MS-DOS or PC-DOS. As seen in Figure 3, the AARD code relies heavily on undocumented DOS functions and data structures. The undocumented INT 21h Function 52h is called to get a pointer to the DOS internal SysVars structure, popularly known as the "List of Lists." SysVars contains pointers to other DOS internal data structures, such as the current directory structure (CDS) and system file table (SFT). The AARD code checks a number of these pointers in SysVars, ensuring that none are null. Any moderately self-respecting DOS workalike should pass unscathed through this gauntlet of tests. Interestingly, however, when this code is incorporated in a device driver such as HIMEM.SYS, it fails under DR DOS 5 and 6. These versions of DR DOS do not contain a genuine CDS, and the simulated CDS is apparently not set up until after device-driver initialization time. Thus, the Windows 3.1 beta HIMEM.SYS produces a non-fatal error message under DR DOS 5 and 6. Similarly, the AARD code fails under the Windows NT beta, where the DPB pointer in SysVars is null. Finally, the code fails in an OS/2 DOS box, where the DOS version number is 10.0 or greater (for example, OS/2 2.1 masquerades as DOS 20.10). The crucial and, appropriately, most obfuscated test, however, appears at the end of the AARD test gauntlet. This test, which was unraveled by Geoff Chappell (geoffc@cix.compulink.co.uk) first checks to see whether a network redirector (such as MSCDEX) is running. If a redirector is running, the AARD code checks that DOS's default upper case-map is located in the DOS data segment. If a redirector is not running, the code checks that the pointer to the first simulated-file control block (FCB-SFT) is located on a paragraph boundary; that is, it has a 0 offset. For ease of reference, this code is repeated in Figure 4. All versions of MS-DOS pass this test; no version of DR DOS does. To test whether this interpretation of the encrypted and heavily-obfuscated code is correct, I wrote MSDETECT.C (Listing One, page 89). This program (compiled with Microsoft C) performs the same tests as the original AARD code, but without the obfuscations, and with more informative "error" messages. My MSDETECT program succeeds under all versions of MS-DOS I tested (Compaq DOS 3.31, MS-DOS 5.0, MS-DOS 6.0), yet fails under all versions of DR DOS tested (DR DOS 5.0, DR DOS 6.0, beta Novell DOS 7). If running under DR DOS with a redirector, MSDETECT fails with the message "Default case map isn't in DOS data segment!". Otherwise it fails under DR DOS with the message "First FCB-SFT not located on paragraph boundary!". A Gratuitous Gatekeeper But what does "country information" like the DOS default upper case-map have to do with a network redirector? Why does a piece of Windows care whether this mapper is located in the DOS data segment? And why should it care whether the first FCB-SFT is located on a paragraph boundary? What kind of "errors" are these, anyway? These are all reasonable questions. In fact, the address of the default upper case-map has nothing to do with the network redirector, and no other part of Windows cares about what particular form is taken by DOS's default case-map or first FCB-SFT pointers. The AARD code has no relation to the actual purpose of the five otherwise-unrelated programs into which it has been dropped. It appears to be a wholly arbitrary test, a gratuitous gatekeeper seemingly with no purpose other than to smoke out non-Microsoft versions of DOS, tagging them with an appropriately vague "error" message. Suitably, the section of the AARD code that performs this crucial test (highlighted in Figure 4) is the most heavily XOR encrypted and obfuscated. The test in Figure 4 is the critical piece of information used by Windows to determine if it is running on MS-DOS, or on a DOS "emulator." But this code seems to have no technically-valid purpose, checking instead some rather unimportant aspects of DOS. In short, you can have an otherwise perfectly workable DOS, capable of running Windows, and yet not pass this test. To see if the case-map and FCB-SFT tests serve a technically useful purpose, I used Microsoft's SYMDEB debugger to slightly alter ("denormalize") DOS's pointers to the default case-map and the FCB-SFT. As you may recall, it's possible to change a real-mode segment:offset pointer without necessarily changing what location it points to. In real mode, a single memory location can be addressed by different pointers; there are many combinations of different segment and offset values that all resolve to the same physical address and are therefore equivalent. Windows (and all other software I ran) was unaffected by my change to these pointers. As Figure 5 shows, the only software that noticed was my MSDETECT and the AARD code in WIN.COM. In other contexts (such as MSD's need to identify the operating system), it would be perfectly legitimate to walk internal DOS data structures to see that they were the same as would be expected under genuine MS-DOS. However, that WIN.COM and other programs incorporating AARD code don't make any use of the information gained in this way, other than to print the non-fatal error message, suggests a deliberate incompatibility, rather than a legitimate need to know some information about the underlying DOS. The very non-fatality of the "error" further underscores the fact that it isn't Windows's legitimate business to care whether it's running on genuine MS-DOS. If the program can continue running despite the detected "error," then how much of an error was it to begin with? It seems that the only "error" is that the user is running Windows on someone else's version of DOS. Does Beta Code Really Matter? The non-fatal error message appeared only in two widely-distributed beta builds of Windows. But since the retail version of Windows 3.1 doesn't produce it, this is just dead history, right? Not quite. Anyone with a copy of Windows 3.1 can hex dump WIN.COM (or WIN.CNF, from which WIN.COM is built during Windows setup) and see the error message (including the mention of "beta support") and the AARD and RSAA signatures. Using DEBUG, you can try your hand at unassembling the AARD code at offset 3CE2h in WIN.COM. In other words, the crazy-looking AARD code paraphrased in Figure 3 executes everytime you run Windows. The AARD code remains in Windows SETUP and in the Windows version of SMARTDRV.EXE (it appears to have been removed from HIMEM and MSD). It's perfectly natural for software to contain vestigial remnants of past implementations. For example, WIN.COM also refers to the short-lived MSDPMI utility from the Microsoft C 7.0 beta. But in the case of the AARD code, new instructions were added to the AARD portion of Windows 3.1 retail WIN.COM--instructions that weren't present in the beta. In the retail version of WIN.COM, the AARD code contains additional instructions as well as a control byte. The control byte determines whether or not the error message appears; this byte is currently 0. As shown in Figure 6, when running the retail WIN.COM under DR DOS, you can easily use DEBUG to turn on the control byte, and the message is issued just as under the beta versions. Changing the single byte at offset 16D4h in WIN.COM triggers the printing of the message, running on DR DOS or an MS-DOS in which the FCB and/or case-map pointers have been suitably denormalized. This opens the door for Microsoft to reenable this byte in the retail shipping version in the future, if it chooses. There's no indication that Microsoft plans to do so, but it remains that neither the code nor warning message were removed--and, in fact, code was added. (I wonder to what extent you can dismiss something if it's only present in a beta and not in the retail version. The sheer size of Microsoft's beta test programs are significant product releases in themselves. I'm not sure of the number of Windows 3.1 beta sites, but a Microsoft article on the earlier DOS 5.0 beta claimed over 7000 beta sites. The Windows NT beta program reportedly shipped 70,000 units to influential developers and corporate beta testers. The size of the Windows 3.1 beta program was likely somewhere in between.) So What? A non-fatal error message in a beta version--that's it? If you have an axe to grind with Microsoft, you may have expected some more nakedly robber-baronesque behavior. If this is the worst that can be found, perhaps things aren't so bad after all. However, other examples of similar behavior have surfaced, including a warranty-related error message in QuickC and Microsoft C 6.0 (discussed in Chapter 4 of Undocumented DOS, Second Edition). While it's difficult to second-guess the precise goal of the encrypted and obfuscated AARD code, its results are clear enough. Windows beta sites that used DR DOS rather than MS-DOS might have been scared into not using DR DOS. ("Doctor, every time I do this I get a non-fatal warning." "Then stop doing it.") The effect of the AARD code is to create a new and highly artificial test of DOS compatibility. The obfuscations and encryptions make it difficult to even determine what is being tested. An indication that the AARD code's obfuscation is successful is the fact that Novell's most recent version of DR DOS (that is, Novell DOS 7) fails the test, even though it is otherwise far more compatible with MS-DOS than previous versions. Microsoft's Initial Response I've presented the substance of these findings to Microsoft, at both engineering and management levels. At press time, a detailed response was not forthcoming, perhaps due to the ongoing FTC investigation. It's likely that a subsequent issue of DDJ will contain a more specific response. However, a high-level manager at Microsoft repeatedly told me that the company is "agnostic" regarding DR DOS. He added, "They [Novell] claim 100 percent compatibility, but DR DOS is full of bugs. If DR DOS has problems running Windows, Novell should fix them." The implication is that if a Windows/DR DOS user gets an error message that a Windows/MS-DOS user doesn't, then by definition it is Novell's fault and proof that DR DOS isn't "100 percent DOS compatible." The problem with this is that, as Figure 5 shows, the AARD code's test for DOS compatibility is 100 percent artificial. By Microsoft's definition, only MS-DOS or something byte-for-byte identical with MS-DOS (and therefore in violation of copyright) is "100 percent DOS compatible." As for "agnostic," this seems unlikely given the effort required to write this tricky code. Its presence in five otherwise-unrelated programs also suggests a fairly concerted effort, as it is unlikely that five so different programs are all maintained by the same person. In fact, the programs probably fall under the domain of several different product managers or divisions. Undocumented Interfaces and the Industry The AARD code once again raises the issue of undocumented interfaces in the software industry. Because it is relatively easy for competitors to be compatible with a documented interface, companies try to create artificial kingdoms by selectively documenting only parts of their product interfaces. You have to wonder if that's the case here. Whenever an application calls on undocumented DOS services and uses data structures internal to DOS, as many successful applications now do, it ties itself more closely to the MS-DOS binary--to a particular sequence of bytes--rather than to the more-general DOS standard. Furthermore, Microsoft apparently encourages this reliance on undocumented interfaces through "selectively documented" interfaces whereby Microsoft selectively allows some of its competitors and/or customers access to an interface, while denying similar access to other companies and to the rest of the developer community. There are numerous instances of this, including the XMS 3.0 specification, the Global EMM Import specification, and the LoadHi code for Windows 3.1. Much of the discussion revolving around the need for a "Chinese Wall" at Microsoft has focused on the apparent absence of any genuine wall between the applications and operating-systems groups. The AARD code suggests that, if there's a need for an application/operating-systems wall, there may be the need for one between MS-DOS and Windows too. No one would dispute that DOS and Windows have to work together smoothly, but that togetherness needs to be open. "Chinese Walls" are good engineering practice. They're what software engineering calls "firewalls": narrow and well-documented interfaces. But the standard software-engineering texts fail to mention that to properly document interfaces is also to throw them open to potential competition, and that, conversely, undocumented interfaces are a way of creating and reinforcing an enviable monopoly position. Hopefully, Microsoft's AARD code, which is not only an undocumented interface but--something new--an encrypted one, is not intended to stifle competition unfairly. Systems Rivalry and the Courts While I'm (mercifully) not an attorney, I have found that some of the legal literature surrounding the issue of "deliberate incompatibilities" to be fascinating reading. PC-centric readers may well be surprised that many of the issues surrounding Microsoft's travails with the U.S. Federal Trade Commission have been dealt with before in cases involving companies such as Eastman Kodak and IBM. "Deliberate incompatibilities" forms a fairly well-established part of antitrust law, going under monikers such as "non-price predation" and "predatory innovation." One issue in the FTC's investigation of Microsoft was the relation between two of Microsoft's operating-systems products, Windows and MS-DOS. In particular, the FTC's Bureau of Competition tried to determine whether Microsoft had "done something" to Windows to deliberately keep it from running with Novell's DR DOS, which competes with MS-DOS. Despite the relative insignificance of DR DOS in the market, this is an important question. MS-DOS and Windows are sold as separate products. While Microsoft wants to make MS-DOS a better platform for Windows, creating an artificial tie between Windows and MS-DOS for the sole purpose of hurting Novell would constitute unfair competition. The two crucial words here are "sole" and "artificial." Surely, Microsoft should be allowed to improve Windows, even in ways that might ultimately hurt DR DOS. This is a legitimate part of the competitive process, and whining about "predatory innovation" has rightly been rejected by the courts. For example, many manufacturers of so-called "plug compatibles" tried in the '70s to have the courts characterize IBM's System/360 as "predatory innovation." The courts rejected these claims (along eventually with all of U.S. v. IBM, 1969-82), thereby "effectively requiring plaintiffs to prove that the defendant's design had no redeeming virtue for consumers" (Stephen F. Ross, Principles of Antitrust Law, Foundation Press, 1993). A detailed analysis from IBM's perspective, Folded, Spindled, and Mutilated: Economic Analysis and U.S. vs. IBM by Franklin Fisher et al. (MIT Press, 1983) makes sobering reading for anyone who might think there is some kind of open-and-shut case against Microsoft. Good luck, Novell. Cases involving Eastman Kodak also have many parallels to Microsoft. In general the courts have sided with Kodak against competitors, although a recent Supreme Court case (Kodak v. Image Technical) went the other way. For a look at how Kodak has possibly created deliberate incompatibilities, see "Structural Monopoly, Technological Performance, and Predatory Innovation: Relevant Standards under Section 2 of the Sherman Act" by James W. Brock (American Business Law Journal, Fall 1983). Likewise, Antitrust Economics on Trial: A Dialogue on the New Laissez-Faire by Brock and Walter Adams (Princeton University Press, 1991) contains a useful section on "the Predation 'Problem'." An apparently important article for the FTC is "Anticompetitive Exclusion: Raising Rivals' Costs to Achieve Power over Price" by Thomas Krattenmaker and Steve Salop (Yale Law Journal, December 1986). Merely the phrase "raising rivals' costs" is a useful handle for anyone trying to ponder Microsoft's current role in the software industry. A far more interesting article is "Predatory Systems Rivalry: A Reply" by Ordover, Sykes, and Willig (Columbia Law Review, June 1983), which describes "systems rivalry" as follows: Suppose that company A manufactures a product system with two components, A1 and A2, each sold separately. Company A has monopoly power over A1, but company B competes in the market for the second component with its compatible offering, B2. Thus, consumers initially can use a product system comprised of either A1 and A2 or A1 and B2. Company A now introduces a new product system, A1' and A2', which serves roughly the same function for consumers as the old product system. Component B2, however, is incompatible with A1'. Furthermore, company A discontinues the sale of A1 or else reprices A1 substantially higher than before. As a consequence, consumers switch to the new product system and company B is driven from the market for component two. When, if ever, should the antitrust laws sanction company A for driving B out of the market? There's a clear comparison here to Windows (A1), MS-DOS (A2), and DR DOS (B2). The scenario discussed in the article, which has nothing to do with operating-system software, underscores the fact that there's really nothing new in the questions surrounding Microsoft. Normally, A driving B out of the market is what competition is all about. That's goal of competition, and should be protected. So how can you tell when this ceases to be honest competition, and becomes predation? "Predatory Systems Rivalry" provides a good summary: ...the plaintiff must bear the burden of proof on this issue. To establish the illegitimacy of R&D expenses by a preponderance of the evidence, the plaintiff would most likely need a "smoking gun"--a document or oral admission that clearly reveals the innovator's culpable state of mind at the time of the R&D decision. Alternatively, the plaintiff could prevail if the innovation involves such trivial design changes that no reasonable man could believe that it had anything but an anticompetitive purpose. Figure 1: Typical message generated by the AARD code, produced, in this case from SETUP.EXE. --A.S. Figure 2: The AARD code attempts to disable a debugger, by pointing INT 1 (single step) at invalid code (the two bytes FFh FFh). The same operation is performed with INT 2 (nonmaskable interrupt) and INT 3 (breakpoint). This disassembly is from the Windows 3.1 retail version of WIN.COM. C:\DDJ\AARD>debug \win31\win.com -u 3d0a ;;; Note that setting DS to 0; going to fiddle with intr vect table 7055:3D0A 33C0 XOR AX,AX 7055:3D0C 8ED8 MOV DS,AX ;;; ... 7055:3D12 A10400 MOV AX,[0004] ; get INT 1 offset 7055:3D15 2EA3D034 MOV CS:[34D0],AX ; save away 7055:3D19 A10600 MOV AX,[0006] ; get INT 1 segment 7055:3D1C 2EA3D234 MOV CS:[34D2],AX ; save away 7055:3D20 BBAC3F MOV BX,3FAC ; set new intr handler offset 7055:3D23 891E0400 MOV [0004],BX 7055:3D27 8C0E0600 MOV [0006],CS ; set new intr handler segment -u 3fac 6B30:3FAC FFFF ??? DI ; the new intr handler 6B30:3FAE CF IRET ; is invalid code! Figure 3: Pseudocode of AARD code, as found in WIN.COM move (and fixup) code from 2D19h to 4E0h call code at 4E0h call AARD code at 39B2h: -- see below IF (AX doesn't match 2000h) AND IF (control_byte is non-zero) ;; added in retail THEN overwrite BYTE at 4E0h to a RET instruction ; ... IF (byte at 4E0h is a RET instruction) THEN issue non-fatal error message call AARD code at 39B2h: point INT 1, 2, 3 and at invalid code to confuse debuggers call undocumented INT 21h AH=52h to get SysVars ("List of Lists") copy 30h bytes to SysVars to stack copy first 4 bytes (DPB ptr) of copy of SysVars to stack IF DOS version >= 10.0 (i.e., OS/2) THEN don't set [bp+196h], so eventually OR AX, 2000h fails ELSE check fields in SysVars to ensure non-zero: SysVars[0] -- Disk Parameter Block (DPB) SysVars[4] -- System File Table (SFT) SysVars[8] -- Clock device SysVars[12h] -- Buffers header SysVars[16h] -- Current Directory Structure (CDS) SysVars[0Ch] -- CON device SysVars[22h] -- Device driver chain (NUL device next ptr) IF no SysVars fields are zero (MS-DOS, or WIN.COM in DR DOS) THEN set [bp+196h] so that eventually OR AX, 2000h succeeds ELSE some are zero (e.g., HIMEM.SYS in DR DOS) THEN don't set [bp+196h], so eventually OR AX, 2000h fails copy code jump to copied code copy and XOR code jump to copied and XORed code ;; the following crucial part was figured out by Geoff Chappell: IF a redirector is running (INT 2Fh AX=1100h) AND IF default upper-case map (INT 21h AH=38h) in DOS data segment (undocumented INT 2Fh AX=1203h) OR IF no redirector AND IF FCB-SFT header (SysVars[1Ah]) offset == 0 THEN DOS is considered okay ELSE (e.g., WIN.COM, SMARTDRV.EXE, etc. in DR DOS) THEN clear part of [bp+196h] so eventually OR AX, 2000h fails restore previous INT 1, 2, 3 jump back to saved return address Figure 4: The crucial AARD test for DOS legitimacy. IF redirector running (INT 2Fh AX=1100) AND IF default upper-case map (INT 21h AH=38h) in DOS data segment (INT 2Fh AX=1203h) OR IF no redirector AND IF FCB-SFT header at paragraph boundary (offset == 0) THEN DOS is considered okay Figure 5: The AARD test can be made to fail simply by changing the outward form of the pointers it examines. C:\UNDOC2\CHAP1>symdeb Microsoft Symbolic Debug Utility Windows Version 3.00 (C) Copyright Microsoft Corp 1984-1990 Processor is [80386] ;;; The first FCB-SFT is stored in this configuration at 0116:0040, ;;; so "denormalize" the pointer at that location, changing it from ;;; 05E4:0000 to 05E0:0040. This points to the same exact location, ;;; but since the offset isn't zero the AARD test fails. -dd 0116:0040 0040 0116:0040 05E4:0000 -ed 0116:0040 05E0:0040 ;;; Now normalize the pointer for the default case map. I had to ;;; disassemble the code for INT 21h AH=38h to find where this is ;;; located. The pointer is stored here at 0116:12A8. Below, the ;;; pointer is changed from 0116:0CF5 to 01E5:0005. This points ;;; to the same exact location, but the segment isn't 0116 (DOS data ;;; segment) anymore, so the AARD test fails. -dd 0116:12a8 12a8 0116:12A8 0116:0CF5 -ed 0116:12A8 01E5:0005 -q C:\WINB61>win Non-Fatal error detected: error #2726 Please contact Windows 3.1 beta support Press ENTER to exit or C to continue C:\UNDOC2\CHAP1>msdetect Default case map isn't in DOS data segment! Figure 6: Enabling a single byte in the Windows 3.1 retail version of WIN.COM resurrects the AARD code's non-fatal error message under DR DOS. C:\DRDOS6>debug win.com DEBUG v1.40 Program Debugger. Copyright (c) 1985,1992 Digital Research Inc. All rights reserved CPU type is [i486 in virtual 8086 mode] -d 16d4 16d4 2271:16D0 00 -e 16d4 1 -g Non-Fatal error detected: error #2726 Please contact Windows 3.1 beta support Press ENTER to exit or C to continue Program terminated. -q _EXAMINING THE WINDOWS AARD DETECTION CODE_ by Andrew Schulman [LISTING ONE] /* MSDETECT.C -- Build program with Microsoft C: cl msdetect.c. A replication of Microsoft's MS-DOS detection code from Windows 3.1 WIN.COM, SMARTDRV.EXE, HIMEM.SYS, SETUP.EXE. The original Microsoft code (with the initials "AARD") is heavily XOR encrypted and obfuscated. Here the encryptions and obfuscations have been removed. Andrew Schulman, May 1993, 617-868-9699, 76320.302@compuserve.com. Geoff Chappell (geoffc@cix.compulink.co.uk) deciphered the original code's tests (upper case map segment, FCB-SFT) in the case where the preliminary SysVars tests fail. Some of this material is discussed in Geoff's forthcoming book, "DOS Internals" (Addison-Wesley, 1993). The page numbers below are for the first edition of "Undocumented DOS" (Addison-Wesley, 1990). The second edition will be out in September 1993. */ #include <stdlib.h> #include <stdio.h> #include <string.h> #include <dos.h> typedef int BOOL; typedef unsigned char BYTE; typedef unsigned short WORD; typedef unsigned long DWORD; typedef void far *FP; BYTE far *_dos_getsysvars(void); FP _dos_getcasemap(void); WORD _dos_getdataseg(void); BOOL _dos_isredirector(void); void fail(const char *s) { puts(s); exit(1); } main() { BYTE far *sysvars; if ((sysvars = _dos_getsysvars()) == 0) fail("INT 21h AX=5200h returns 0!"); if (_osmajor >= 0x0a) fail("DOS version >= 10; this is OS/2 (or early NT beta!)"); #define SYSVARS(ofs) (*((FP far *) &sysvars[ofs])) #define SYSVARS_TEST(ofs, msg) if (! SYSVARS(ofs)) fail(msg) // these tests will pass under almost any DOS clone SYSVARS_TEST(0, "Disk Parameter Block (DPB) pointer in SysVars is 0!"); SYSVARS_TEST(4, "System File Table (SFT) pointer in SysVars is 0!"); SYSVARS_TEST(8, "CLOCK$ device pointer in SysVars is 0!"); SYSVARS_TEST(0x12, "buffers header pointer in SysVars is 0!"); SYSVARS_TEST(0x16, "Curr Directory Struct (CDS) ptr in SysVars is 0!"); SYSVARS_TEST(0x0C, "CON device pointer in SysVars is 0!"); SYSVARS_TEST(0x22, "Device chain pointer (from NUL) in SysVars is 0!"); // the following tests fail under DR DOS 5 and 6 (and beta of Novell DOS 7) if (_dos_isredirector()) { FP casemap = _dos_getcasemap(); if (FP_SEG(casemap) != _dos_getdataseg()) fail("Default case map isn't in DOS data segment!"); printf("case map @ %Fp\n", casemap); } else { if (FP_OFF(SYSVARS(0x1A)) != 0) // see Undocumented DOS, p. 519 fail("First FCB-SFT not located on paragraph boundary!"); printf("FCB-SFT ptr @ %Fp -> %Fp\n", sysvars+0x1a, SYSVARS(0x1A)); } // if get here, everything checks out puts("All tests check out: must be MS-DOS"); return 0; } // undocumented function: see "Undocumented DOS", pp. 518-541 BYTE far *_dos_getsysvars(void) { // could initialize ES:BX to 0:0 but the MS code doesn't do this _asm mov ax, 5200h _asm int 21h _asm mov dx, es _asm mov ax, bx // ES:BX retval moved into DX:AX } // see "Microsoft MS-DOS Programmer's Reference", p. 143 // formerly undocumented: see "Undocumented DOS", p. 599 BOOL _dos_isredirector(void) { BYTE retval; _asm mov ax, 1100h _asm int 2fh _asm mov retval, al return (retval == 0xFF); } // undocumented function: see "Undocumented DOS", p. 627 WORD _dos_getdataseg(void) { _asm push ds _asm mov ax, 1203h _asm int 2fh _asm mov ax, ds _asm pop ds // retval in AX } // get a far pointer to the default case map // see "Microsoft MS-DOS Programmer's Reference", pp. 272-3 FP _dos_getcasemap(void) { BYTE country_info[34]; FP fp = (FP) country_info; _asm push ds _asm mov ax, 3800h _asm lds dx, dword ptr fp _asm int 21h _asm pop ds return *((FP far *) &country_info[18]); }
http://www.drdobbs.com/windows/examining-the-windows-aard-detection-cod/184409070?pgno=4
CC-MAIN-2015-48
refinedweb
5,499
53.71
Why most Front End Dev interviews (#JavaScript #Typescript) are Shit Kristian Ivanov Originally published at hackernoon.com on ・12 min read I am sorry I din’t censored shit, but shit pretty much === s**t and everyone would know what I mean either way. First of all a bit of a back story I am changing my job ( wohoo for me and ohno for my team). I was dev, I became a TL, we were a dev team, then we became game team (dev, interface and animation) a while ago, then I decided to quit (if you want to know why, feel free to ask, it is npt the point of the article). A friend of mine, works in a company that works the same field as my current company and wouldn’t stop bothering me until I gave their company a shot. His company however was looking for a senior game/front end dev. Considering that I have worked at my current company for nearly 6 years, I decided to give a shot, if only to see firsthand how the current market for developers was, outside of my company. Second both companies should, in my opinion, stay unnamed. Just assume they are generic companies. Third Lets call them interview problems, seen from TL and dev perspective, when a TL tries a dev interview. It may sound weird or strange, but it is a nice perspective of a guy that has made interviews with devs in a company and the same guy trying an interview as a dev in another company. Fourth I will split this into two parts. You will know why in a bit. Fourth point one – human parts The interview was first conducted with the person, which represents the dev team and their manager. Their manager, of course focused on my career, what I have done, how much (as a percentage) did my job as a TL allowed me to write code, with what IDEs did I write my code, what were my side projects, why did I wrote them in those programming languages and so on. It was pretty understandable. However, I expected more of their dev TL. He had two questions, for about of an hour of interview. Question one – have you used Gulp; and question two – what is your approach in learning new technology or new API or new framework, etc. Apparently I did good enough impression to be asked to make a test at the company. I wasn’t sure whether or not it was the psychological test, the test to be sure how much info I will be allowed to know and not leek outside, or the tech test. I went there, talk with their manager. He was a pretty nice guy with understanding, reasonable motives and so on. I found out it was the technical test that I was about to do. Which was OK, it isn’t like I was going to study for it anyway. The following are my impressions of the test. Fourth point two – test Disappointment one It is only the test. You don’t talk with devs, their team lead, an architect or anyone else. It. Is. Just. The. Test. Disappointment two – the test itself These aren’t typos or writing mistakes. Everything is written as it was in the test. The test is a three pages piece. It is spitted in Java Script, Node.js & Gulp, TypeScript. Now to the questions. Some are omitted in order to avoid repetition. Questions about JavasScript - What is a potential pitfall with using typeof bar === ‘object’ to determine if bar is an object? how can this pitfall be avoided? Answer: Welcome to 2015–6 questions on forums. It is a reliable way, but typeof bar === ‘object’ returns true. Keep it in mind. You will rarely (never) use it a t work, but it is apparently a very common JS question for interview and tests. - What is NaN? What is its type? How can you reliably test if a value is equal to NaN? Answer: Again with the obvious old questions. NaN stand for Not A Number.It’s type however is number, because JavaScript is JavaScript. If you want to test if something is NaN, JavaScript has isNan method to do so. It is not intuitive or very common to use it, but it exists. - In what order will the numbers 1–4 be logged to the console when the code below is executed? why? (function(){ console.log(1); setTimeout( function(){ console.log(2)}, 1000); setTimeout( function(){ console.log(3)}, 0 ); console.log( 4 ); })(); The answer again is old. A variant of it can be found here with nice explanations. Basically – the browser interpreting the function will go like this: I have to write 1 I have to write 2 in a while I have to write 3 in while. The fact the timeout is set with 0ms doesn’t matter. If you have used JS event base management with setTimeout(0) you will know when to use this tremendously ugly fix (don’t ever use it!). I have to write 4 The function does not return anything so the browser will output “undefined” I had to write 3 in 0ms, so the “3” is logged I had to write 2 in a 1000ms, so the “2” is logged. Your whole answer is – 1,4 undefined, 3, 2 - What will the code bellow output? Explain your answer. console.log( 0.1 + 0.2 ); console.log( 0.1 + 0.2 == 0.3 ); What do you think 0.1 + 0.2 results in JavaScript? 0.3? Hell no! JS is famous for its float arithmetic problems. 0.1 = 0.2 results in 0.30000000000000004 (with more or less zeroes here and there). So, your answers are: 0.30000000000000004 and false - What will be the output of the following code: for( var i = 0; i < 5; i++){ setTimeout( function(){ console.log( i ); }, i * 1000 ); } Well, welcome to 2016–7 generic question about JS, setTimeouts and closures. It is the first question in this article. Yes, there are other articles, that summarize what people get asked on JS interview, this one is just a bit more detailed and written from the perspective on someone who made dev interviews, and someone who just went to a dev interview. I have seen this in so many articles it literally hurts. If anyone has read something about the language at any point he/she can answer without understanding it at all. A lot better question, in my opinion, is this – Explain what a closure is and why would you use it. Give at least 2 examples (one can be a lucky guess) - What would the following code output in the console? console.log( "0 || 1 = " + (0 || 1 )); console.log( "1 || 2 = " + (1 || 2 )); console.log( " 0 && 1 = " + ( 0 && 1 )); Pretty obvious… I am including it because it didn’t had the usual “please explain why”. If it had it, I would probably write – 1, 1, 0, because that is how || and && work. - What will the following output in the console: console.log((function f(n){return ((n > 1) ? n * f(n-1) : n)})(10)); Answer: It is pretty obvious function with recursion that calls itself with a simple ternary → 10 * 9 * 8 * 7 * 6 * 5 *4 *3 * 2 *1 I was honestly getting bored at this point. - Make a function ‘sum’ that returns a sum of two values. this function should be called as these examples: console.log( sum( 5, 3 )); //returns 8 console.log( sum(5)(3));// returns 8 Yupee!!! A challenge has come before us! This is awesome! Especially after 8 generic JavaScript questions that people should be able to answer even if they are half brain dead. Unfortunately it is both a challenge and it is not. At first glance it is intimidating and it is weird, and I quite actually like it. Unfortunately I have recently read on Medium several articles about currying written by Joel Thomas. They can be seen as a working use here in “Challenge: Program without variables #javascript” and as an explanations here in “Currying in JavaScript ES6”. So the question itself wasn’t very challenging. Of course there are very few people that I know, that are familiar with currying and I know even fewer (0) that have actually used it. Joel Thomas examples and description are tremendously useful and you should read them. - What is “callback hell” and how can it be avoided? Answer: there is a whole websitecallbackhell.com dedicated to this term. To put it in layman’s terms it is when developers write in JavaScript and create a bunch of functions that are triggered by one callback after another. This creates a pretty difficult environment to test and debug the product. If you want to fix it you can do the following things: Keep your code shallow – meaning, keep it small and clean(KISS) Modularize Handle every single error – there is a concept in Node.js called the error first callback. You can read more about it here. This will be mentioned later on as well. Questions about Node.js & Gulp Those somehow manage to be even more generic. I don’t know how. - What is Gulp? Anwer: (copied from their webpage) – gulp is a toolkit for automating painful or time-consuming tasks in your development workflow, so you can stop messing around and build something. Personal answer: I have mainly seen it use as a build base. - Is Gulp base on node.js? Answer: I actually wasn’t quite sure. Is there Gulp in Node.js – Yes. Is there Gulp in other environments – Yes. According to their website – Integrations are built into all major IDEs and people are using gulp with PHP, .NET, Node.js, Java, and other platforms. I will, however, be forced to say yes, because the people that made Gulp explain it as – Use npm modules to do anything you want + over 2000 curated plugins for streaming file transformations. I haven’t actually seen it used for anything JS as well. - What are modules in Node.js? - What is the purpose of the packaje.json file? (Yes, they have written it with a typo like that) - Explain npm in Node.js I will answer the above three as the same question, since they are. First of all what are modules in Node? – they are basically the way libraries, “classes” and so on are represented in Node. Some articles for the topic – w3schools. They (modules) can be written as AMD components and CommonJS components. Amd according to wikipedia. Amd and CommonJS modules comparison from RequireJS can be found in a few sections here. Secondly the packaGe.json file – its documentation can be found here. Roughly, the package.json allows you to specify which libraries a project needs, which libraries versions the project needs and it makes your build/install easily reproducable. As for what npm is, the abbreviation stands for Nodule Package Manager. Is it self explanatory? I believe it is. It is the thing that allows you to publish and use external libraries, update them, save them and manage them for a given project. if you use Node you should take look at the npm list of commands –  install , uninstall , update , ls and its flags – -g, – save, and so on. The npm docs can be found here however the difference between global install or install with – save can be found and memorized mainly trough using it. - What is error-first callback? Answer: I have mentioned it across the callbackhell earlier. It basically means put the error as the first argument of the function, success second. Questions about TypeScript - How Do You Implement Inheritance in TypeScript? Answer: using the extend keyword. TypeScript has it, JavaScript does as well. - How to Call Base Class Constructor from Child Class in TypeScript? Answer: super() or super( arghs ) - What are Modules in TypeScript? Are you bored? I am. The questions are generic and are seen throughout similar articles and questions in StackOverflow and etc. Anwer: A somehow thorough description can be found here. Roughly – they (the modules) are an extended version of the node ones. You can use external and internal modules, effectively creating namespaces. - Which Object Oriented Terms are Supported by TypeScript? Answer: Read it like – what OOP keywords and principles does TypeScript has that JavaScript does not? (classes, interfaces, extending, types, public/private/protected variables) Of course all of those things can be emulated in JS by using object.defineProperty() (of which I am a huge fan), js .extend, jQuery or Zepto extend and fn.extend. - What is tsconfig.json file? Answer: Read it as – have you ever used TypeScript? If not here is your summary – The presence of a tsconfig. json file in a directory indicates that the directory is the root of a TypeScript project. The tsconfig. json file specifies the rootfiles and the compiler options required to compile the project. Summary That was it. I realize that there are several articles out there, Medium included, that discuss JS related questions for job interviews. I just decided to share my own recent experience, which is a bit different, since it is from the perspective of both a guy that had interviewed developers and that has been just interviewed as a developer himself. All of those questions are generic to the point that the whole test took me roughly 20 minutes (reading and writing, by hand on a piece of paper included). If you ask me, I would strongly recommend that companies and TLs and tech guys take those kind of tests, smash them and throw them not in the nearest garbage can, but the one a few blocks away, so there is no chance for someone to see them and connect them to their authors. Putting your company header on the top of each page isn’t necessary as well… Don’t get me wrong, you can use this test to get someone’s understating tested. But it is in a very narrow way and the questions being generic enough, that I have seen literally 80% of the questions in the last week either on Medium, StakOverflow or another forum, means that someone can answer them quickly without actually understanding them. So, instead I would prefer to be both interviewed by, and to interview an actual human being, instead of doing a test or read a test. It is a lot easier to talk with someone and ask him some of those questions, followed by clarifying follow up questions which you can use to test if someone actually knows what he or she talks about or he or she has just memorized it after reading it a gazillion times. It is also a nice way to find a person that has gotten an answer wrong because of some pitfall, but is a good enough developer to learn it in the future and the only reason he or she hasn’t answered correctly on the first try is because they haven’t actually used it. You can also see how the people you interview think when you ask them those questions yourself and discuss the questions with them. Even if their answer is wrong, cause by something, if their thought process is correct I am willing to give them a shot and help them out. Most of those questions fall into two categories – you have either come across it and it was a painful enough experience to remember it (because it seemed really illogical and strange at the time or just because you weren’t familiar with JS well enough at the time (0.1 + 0.2 = 0.300000004 being a pretty good example of JS weirdness, or the fact that NaN is a number)) or things that are abstract enough that you have not yet encounter. Both of those categories can be memorized for tests. Memorizing them for tests does not mean an understanding of JavaScript ant does not mean the person can apply them correctly in his work. Which is again why I prefer human to human interaction or making a project/task and discussing it, instead of writing answers on a piece of paper, that is rated a week after. By the way – the test discussed above didn’t discussed any design patterns or anything more deep than understanding basic principles of the language which in my opinion is a bit strange, considering the fact that it was designed for senior developers. This is just my opinion of course. If anyone thinks otherwise on anything I have said/written or any of my answers or explanations seem incorrect in any way I am open for discussion in the comments :) I hope this was useful for anyone or at least made anyone think about those things. And I really hope that some TL, instead of inviting them, giving them a couple sheets of paper, leaving them in a conference room (in which you can open, laptop, computer or phone and just copy answers and don’t think about it. I actually know people that have passed tests like that and have gotten decent jobs because of it) will start making interviews with people and actually talk with them to see how well they are quipped to be able to work in the team. Technical interviews that focus on having a bullshit paper test are useless, I would personally refuse to take part in any and would walk out if insisted as they serve no purpose and focus on something like that tells me the people I'm interviewing for are not very serious. Any sensible person generally should answer to the idiotic trivia bs questionnaires with the words "I don't give a fuck, and if I really need to find out I'll either run the code or google". It's good to know about the common issues of the language you're working in, but to focus on those more than just by asking the question "are you familiar with how bad the typeof operator, floating point operations etc. are in JavaScript?" or "have you heard of the good parts of JavaScript" is ridiculous.. What interviews should focus on is practical experience, trying to find out if you get along, interest and experience in RELEVANT areas around the work they'll be doing. Basically what a technical interview should consist of is more along the lines of: ... and so on. When you ask someone e.g. "how well do you know SQL" and the answer is "quite well" you should feel secure that you can hire them without giving them a quiz about it, as you can always fire them if it turns out they lied. Also the question of knowledge of a specific topic rarely even matters when dealing with suitable people as they can learn the things they don't know. You generally build a team so people with varying levels of experience and different kinds of backgrounds can help each other out, and not so everyone knows the details of all the tools you are using. More diversity is better in this as well. Oh btw the test: Is also an immediate red flag that should tell you "NEVER WORK HERE" - no sane person would ever write a function with such obviously differing return values depending a bit on the number of arguments. Holy hell that would make life working with their codebase a pain in the ass, and JavaScript is already a big enough pain in the ass without artificially making it worse. IMO they just wanted to see if anyone knows what currying is. This being said there are two things Yes to all of your points! I would a few more points about asking them to show me some code samples from their project, which they believe to be good because they have found an interesting solution to an interesting problem. And briefly discuss it with them, to get a glimpse of how they think and how they approach problems. I stayed for the test, because it is the first time somebody handed me a test for technical interview. I did it way too fast, took pictures of it and decided to write the above rant. If even on TL or someone in charge change its ways from this to something better it's worth it. About the sum function one, what is the best way of doing that in ES5? I did like this, but I don't know if it would be the most appropriate way. function sum(x) { if (arguments[1] == null) { return (function(y) { return 'With currying: ' + (x + y); }) } else { return 'Without currying: ' + (arguments[0] + arguments[1]); } } Btw, nice article! Thanks! I like that you found it useful. Your solution is pretty straightforward and descriptive. I like it. I think you're right about having to painfully learn those concepts if you've been programming seriously for at most year. Anything JS-specific on the test can be solved with a few Google searches. Senior positions should require skills in more abstract domains such as scalability and iteration. Code exams don't really quantify things like that. I would be skeptical going anywhere with a company which spends time asking you about floating point! P.S. Questions about currying & "callbacks"? It's about continuations/futures and function literals these days IMO;) I couldn't agree more. I don't think anyone who is invited as for a senior developer position and get asked about a floating point problem will go to work there. I expected at the very least some design patterns, some better stuff that come from the "new" standard or literally anything more complicated than this. If I was give a more large code example and get asked to identify potential performance issues, or to find other problems, it would have been better in my opinion. I agree with your P.S as well ;)
https://dev.to/k_ivanow/why-most-front-end-dev-interviews-javascript-typescript-are-shit-2hc
CC-MAIN-2020-16
refinedweb
3,736
72.46
I was wondering how are layer weights and biases initialized by default? E.g. if I create the linear layer torch.nn.Linear(5,100) How are weights and biases for this layer initialized by default? Linear layers are initialized with stdv = 1. / math.sqrt(self.weight.size(1)) self.weight.data.uniform_(-stdv, stdv) if self.bias is not None: self.bias.data.uniform_(-stdv, stdv) Thanks! So it depends on the layer you use? The layers are initialized in some way after creation. E.g. the conv layer is initialized like this. However, it’s a good idea to use a suitable init function for your model. Have a look at the init functions. You can apply the weight inits like this: def weights_init(m): if isinstance(m, nn.Conv2d): xavier(m.weight.data) xavier(m.bias.data) model.apply(weights_init) So it won’t throw any error if I forget to initialize some conv layers? Yes, it won’t throw any errors. Depending on your problem, training could be trickier. Is there a way to alter this code for a situation where you have nn.Conv2d layers whos bias can be on or off depending on their position in the network? e.g. you have a first Conv2d with a bias term but then a later Conv2d with no bias term. As the following will return an error: if isinstance(m, nn.Conv2d(bias=True): xavier(m.weight.data) xavier(m.bias.data) You could use a condition to check, if bias was set: if isinstance(m, nn.Conv2d): torch.nn.init.xavier_uniform_(m.weight) if m.bias: torch.nn.init.xavier_uniform_(m.bias) If I try that I get the following error when using: def weight_init(m): if isinstance(m, torch.nn.Conv2d) or isinstance(m, torch.nn.Linear): torch.nn.init.xavier_uniform_(m.weight) if m.bias: torch.nn.init.xavier_uniform_(m.bias) RuntimeError: bool value of Tensor with more than one value is ambiguous Sorry for the misleading code. It should be: if m.bias is not None: Also, xavier_uniform will fail on bias, as it has less than 2 dimensions, so that fan_in and fan_out cannot be computed. if isinstance(m, nn.Conv2d): torch.nn.init.xavier_uniform_(m.weight) if m.bias is not None: torch.nn.init.zeros_(m.bias) No no not at all, I should have been able to work that out for myself Okay that seems to be okay except that I get the following error from the .zeros AttributeError: module 'torch.nn.init' has no attribute 'zeros_' Perhaps it’s an outdated atribute? I think it was introduced in the latest release, i.e. 0.4.1. I would recommend to update to it or in case it’s not possible at the moment due to whatever reason, you could use: with torch.no_grad(): m.bias.zero_() Ahh I’m using 0.4.0 so I will update to the newest version. That code you sent works on 0.4.0 though which is great, thanks for your help as always! Hi ptrblck, I just wanted to follow up on this: If you were to use nn.Conv2d( ... ,bias = True) presumably the weight would be zeroed would it not? Because True is != None in Python language… Therefore, you must either use bias = False, or don’t insert and bias information to nn.Conv2d? Does this sound right? If you don’t want to use the bias, you should set bias=False during the instantiation of the layer. Are you somehow referring to its initialization? In my example I set the bias to zeros, if it’s available. It’s still a learnable and used parameter in case you are wondering if the bias is useless afterwards. Sorry, my mistake I understand now. I meant bias = False in my first sentence above but I was concerned that because in Python False is not None, that it would somehow try to attribute some bias initialisation to the layer even if you set it to False. But I assume False trumps the weight initialisation so that you are left with no bias, which is what you want. Sorry for the confusion. In the construction of the conv layer you pass bias as a bool value (code). If it is set to True (or anything that returns True in the line of code), self.bias will be initialized to the nn.Parameter. It depends on the condition you are using. In the scripts I’ve posted in this thread, I’ve used if isinstance(m, nn.Conv2d), so it’ll be just used for nn.Conv2d layers. You can of course add more conditions to it for other layers/parameters etc.
https://discuss.pytorch.org/t/how-are-layer-weights-and-biases-initialized-by-default/13073
CC-MAIN-2022-33
refinedweb
789
68.67
REST API testing tool Project description DBGR Dbgr [read 'ˌdiːˈbʌɡər'] is a terminal tool to test and debug HTTP APIs. It is a alternative to Postman or Insomnia. DBGR strives to give you better control over the requests that and at the same time allow you to write your own python code to process the results. Content - Installation and dependencies - Project setup - Requests - Arguments - Return value - Environment - Recursive calls - Caching - Asserts - Autocomplete and History Installation and dependencies The easiest way to install DBGR is via pypi. pip install dbgr DBGR requires Python 3.7. Also, if you want to use terminal autocompletion, you need appropriet bash version or setup your shell. For alternative ways of installation, see CONTRIBUTORS.md. Project setup To setup a project create new directory and inside create .py file with this content: from dbgr import request @request async def get_example(env, session): await session.get('') This is your first request. Next you need an environment in which you can run this. Create another file called default.ini. It has to contain only one section called [DEFAULT], otherwise you can keep it empty: [DEFAULT] Now you can execute the request with $ dbgr request get_example or shorter $ dbgr r get_example. Requests DBGR request is a function decorated with @dbgr.request. In its simplest form it accepts two arguments. First agument is the environment it was executed in. The second argument is an instance of aiohttp.ClientSession. You don't have to return or log anything from your request. The ClientSession does all logging automatically. Names By default you execute your request with $ dbgr r <function_name>. Optinally you can change the name of the request with an argument: @requst(name='different_name') async def get_example(env, session): await session.get('') And then you'll execute it with its alternative name $ dbgr r different_name. Name of a request can contain only letters, numbers and/or underscores. Names are case sensitive. DBGR automatically loads requests from all .py files in working directory. This can lead to collisions in names. Therefore you can execute the endpoint with fully qualified name including module name: $ dbgr r module:function. Module name is simply the name of the file without extension. Arguments When defining your request, you can specify any number of arguments that it will accept (besides env and session). These arguments will be filled with values specified when you call your request. If you don't provide them in the terminal, DBGR will prompt you for value. You can also define default values for some or all arguments. @request async def many_arguments(env, session, arg1, arg2, arg3='foo'): pass When you call this request from terminal, you will be prompted for all 3 arguments. For arg3 you will be offered to use default value: $ dbgr r many_arguments arg1: arg2: arg3 [default: foo]: You can provide values when you execute your request with -a or --arg: $ dbgr r many_arguments -a arg1=foo arg2: arg3 [default: foo]: $ dbgr r many_arguments -a arg1=foo -a arg3=bar arg2: Arguments mentioned in command without value are assumed to be flags and will be resolve to True: $ dbgr r request -a arg1 # arg1 == True When you call DBGR with -d or --use-defaults swith, you will be prompted only for arguments without default values: $ dbgr r many_arguments -d arg1: arg2: And finally, you can combine everything together: $ dbgr r many_arguments -d -a arg1=foo arg2: Order of precedence of arguments This is an order in which argument values are resolved: - If you provide argument using -a/ --argswitch, it will always be used. You will not be prompted. Default value is ignored. - If you use -d/ --use-defaultsswitch, dbgr will use default values when possible. You will not be prompted for arguments with default values. - You will get prompted for arguments without default values. Hitting enter without any input will result in empty string being used. - You will get prompted for arguments with default values. Hitting enter without any input will use default value. Types annotations It is possible to annotate expected types of arguments in the request definition. DBGR will try to convert the input value into desired type. You can annotate as many arguments as you want. Arguments without annotation will be passed as strings. @request async def get_comment(env, session, comment_id: int): data = session.get('/comments', params={'id': comment_id}) $dbgr r get_comment comment_id [type: int]: # Input will be converted to integer You can also combine default values with annotation. @request async def get_comment(env, session, comment_id: int=1): data = session.get('/comments', params={'id': comment_id}) $dbgr r get_comment comment_id [default: 1, type: int]: If you use default value by pressing enter without any input, DBGR will not check type and will just pass the value as it is. DBGR currently supports these type: int, float, bool, str. Every other annotation type will be ignored. Booleans are handled in a special way. Values 0, f, false, n, no (and their variants in different case) will be converted to False, everything else will be converted to True. Return value Your request can return a value. This return value will be printed to the terminal when you execute a request. It also gets returned when you implement recursive calls. This can be usefull for example for authentication. The return value also get cached when cache is used. You can use type hinting with the same limitations as with arguments. DBGR will try to convert the return value into the specified type. @request async def count_comments(env, session) -> int: resp = session.get('/comments') return len(resp.json()) Environment Environments offer you different way to specify variables for your requests. Your default environment is placed in default.ini. This is a file in ini format using ExtendedInterpolation. You can change the environment that will be used with -e/ --env switch. DBGR searched for environments in current working directory in .ini files. Name of the environment is the name of the file without suffix. Recursive calls Sometimes you might need to make a different requests before executing what you really want to do. For example to download user data, you need to login first. You can do that by using coroutine dbgr.response. It accepts at least 3 arguments - name of the request to execute as a string (you can specify module the same as in terminal), environment and session. In most cases you'll call another requests with the session and environment your function received. But you can also modify them before calling response. from dbgr import request, response @request async def login(env, session): rv = session.post('/login', data={'username': env['login']}) return await rv.json() @request async def get_comments(env, session): auth = response('login', env, session) data = session.get('/comments', headers={'Authorization': f'Bearer {auth["token"}'}) DBGR doens't detect reccursion. Be carefull not to unintentionaly cause DDoS on your (or some elses) servers. Arguments As with the terminal execution, you can provide arguments for recursive calls. Simply add them as named arguments: @request async def login(env, session, username): rv = session.post('/login', data={'username': username}) return await rv.json() @request async def get_comments(env, session): auth = response('login', env, session, username='admin@example.com') data = session.get('/comments', headers={'Authorization': f'Bearer {auth["token"}'}) You can also specify you want to use default values wherever possible with use_defaults: @request async def list_comments(env, session, page=1): rv = session.get('/comments', params={page: page}) return await rv.json() @request async def export_comments(env, session): auth = response('list_comments', env, session, use_defaults=True) Order of precedence is the same as in terminal execution. You will still get promted for arguments witch don't have any value. Caching You can mark request to be cached. All subsequent calls of the same request will be suspended and the result will be taken from cache. This is usefull for example when you work with API that requires sign-in. You usually want to call the authentication endpoint only once at the beginning and then just re-use cached value. To enable caching call @request decorator with cache argument: @request(cache='session') async def login(env, session): ... There is only one supported cache type at this moment: session. This type stores the result in memory for the time the program is running. This is not very usefull when you execute requests one by one. But in interactive mode, the value is cached until you terminate DBGR. The cache key is constructed from the request and values of all arguments. If you call cached request with different arguments, it will get executed. Cache stores only last used value. If you call call request with cache=False while you already have a result in cache, the request will get executed and new value will be stored in cache. @request(cache='session') async def login(env, session): ... @request async def list_comments(env, session): auth = response('login', env, session, cache=False) # This will result in HTTP call ... Asserts DBGR supports assertions in requests. If an assert fails, it will get reported to the terminal. @request async def create_item(env, session): rv = session.post('/comments', data={...}) assert rv.status == 201 Autocomplete and History DBGR supports autocomplete for commands and requests. You need to install and setup argcomplete according to documentation. Interactive mode supports terminal history. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/dbgr/1.0.2/
CC-MAIN-2021-39
refinedweb
1,574
58.18
Integrating WebSockets and JMS with CDI Events in Java EE 7 By BrunoBorges-Oracle :-) Nice article! About the session bean; the @LocalBean annotation is not needed, this is already the default if no business interface is present. @LocalBean is only needed if you want both a no-interface view and either a local- or remote view. Just remove the annotation and test your code again; you'll see it makes no difference ;) Likewise, @Named does not make a bean part of a CDI context. Every bean in a CDI archive is already part of it. @Named is only needed if you want to refer to the bean via EL. You're not doing that here, so you can safely drop it as well. Posted by guest on May 03, 2013 at 01:55 AM PDT # The idea of using CDI events to process JMS messages was first introduced by Seam3 JMS module. Regarding the comments by guest from May 3, you should really use @ApplicationScoped to indicate the global nature of these objects. Right now they're @Dependent (technically if you read the CDI 1.1 spec these objects would not be registered since they have no scope). You make claims at the end of the article regarding clustered topics. I have no idea what a clustered topic is, we on the JMS EG spoke about it briefly however when the PAAS options for EE7 were dropped we dropped that discussion. I believe the use of a clustered topic will end up being platform specific. Posted by John A on May 05, 2013 at 06:31 PM PDT # @guest, your suggestions were already applied to the example project at github. And you are correct, most of my recommendations about CDI are not necessary. Thanks! @John, I was never a Seam3 developer, but this is very informative. Now, @ApplicationScoped is not really an issue here. WebSockets and CDI were not quite well aligned in the specification. There are some gaps (see GLASSFISH-20371 and JMS_SPEC-100) that will be fulfilled, hopefully, in the next version. Regardling my conclusion: what I did want to say is that with a clustered application (not a clustered topic), it will be possible to publish async data across all websocket clients, using a JMS Topic (which then, might vary per vendor, clustered/distribuited/whatever). My use case is this: an application deployed in a cluster, where users are connected to specific nodes (stick sessions) to a server endpoint websocket. The only way I can see to an application push data to all client endpoints, is by using a JMS Topic, and a Subscriber MDB that will, through CDI event, forward the data to clients. Is it better now? Posted by Bruno Borges on May 05, 2013 at 07:03 PM PDT # Is using this,we can implement video conferencing through web? Could you give some suggestion for implementing video conferencing through web using java, without using JMF? Posted by guest on May 08, 2013 at 08:24 AM PDT # Hi Bruno Yes, I agree, WebSocket didn't align much with any spec. Note that in response to the JMS spec issue you mentioned, I created to try to resolve the underlying problem - the WebSocket spec doesn't indicate active contexts. It's not the JMS spec's job to do this :-) Posted by John D. Ament on May 15, 2013 at 05:54 PM PDT # interrsting article Posted by guest on May 17, 2013 at 07:58 AM PDT # Hi, I am using java EE 7 with the new netbeans and I got some problems with the CDI on the ServerEndpoint Class. The inject of the session bean and the listening for the CDI event does not work. Does it work for you on the latest GlassFish Thanks Posted by Edwin Biemond on June 16, 2013 at 11:46 AM PDT # Hi, Does this example works on the latest netbeans with glassfish 4. My CDI bean inject and CDI events is not working on the websockets class. In my other beans and MDB CDI it working perfectly. Thanks Posted by Edwin Biemond on June 16, 2013 at 11:50 AM PDT # Hi Bruno, I am trying to see this example working on Glasfish 4.0 (b89). No luck on the Constructor-type Inject and CDI event Observer. Could you suggest any approach for these? Thanks Posted by guest on June 28, 2013 at 07:53 AM PDT # Hi, Thanks for a nice article, it is a good demonstration how websockets session can be scaled! Two points related to recent glassfish4: 1. To make source code work you will need to add jms/myQueue to JMS destinations through glassfish admin or change mapped name to "java:app/jms/myQueue" 2. senderBean in WebSocketEndpoint is not being injected (it is null). To make this work I made a WebSocketEndpoint as stateless EJB (added @Stateless). Posted by Dmytro Polivenok on July 23, 2013 at 08:56 AM PDT # Bruno, please do not assume websockets is only for browsers. Java clients need a good, robust connection to the java containers as much as browsers do. I will never be able to justify asking my internal users to try to get their work done with something as clumsy and crude as a browser interface. There is a lot of room for enterprise desktop Java applications but connectivity of Netbeans Platform Java clients with Java servers is a dismal affair currently. Websockets could put the fix on that! Posted by guest on September 07, 2013 at 07:02 PM PDT # I am trying to get the above code running, using Glassfish v4, and java ee 7, I am keep getting the following: error Exception while loading the app : EJB Container initialization error com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Invalid destination jms/myQueue for MDB: JNDI name not found Any idea? Posted by guest on December 07, 2013 at 05:57 PM PST # In case of "EJB Container initialization error com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Invalid destination jms/myQueue for MDB: JNDI name not found" try to add jms/myQueue to JMS destinations through glassfish admin or change mapped name to "java:app/jms/myQueue". Posted by Dmytro Polivenok on December 08, 2013 at 07:53 AM PST # Hai, The JMS 2.0 specificationtells that @OnMessage() fails ,because there is no request scope.(). Example @ServerEndpoint("/websocket") public class SampleWebSocket { @Resource(lookup="java:comp/DefaultJMSConnectionFactory") ConnectionFactory cf; @Resource(lookup = "jms/myQueue") Queue myQueue; @Inject private JMSContext jmsContext; @OnOpen public void onOpen(final Session session) { // works OK as there is a valid request scope here context.createProducer().send(myQueue, message); } @OnMessage public void onMessage(final String message, final Session client) { // fails as there is no valid request scope here context.createProducer().send(myQueue, message); } @OnClose public void onClose(final Session session) { // fails as there is no valid request scope here context.createProducer().send(myQueue, message); } So my doubt is how the solution described above will work,when JMSContext is created within @OnMessage() call. Posted by guest on January 06, 2014 at 08:00 AM PST # Thanks for the interesting post. The point "ability to scale server-pushed communication to WebSocket client sessions across a cluster using JMS Topics" could be a bit problematic as the the way how topics are delivered in a cluster is currently (JMS 2.0) not defined. (see:) The same problem arises with the given implementation in a clustered environment: You can not be sure, which MDB on which instance of the cluster receives the message. As WebSocket connections are physical, there is no way of sharing them between instances once they are established. So for now this is more a single instance solution. Posted by Thomas Schütt on April 09, 2014 at 10:37 PM PDT # It doesn't work on glassfish4 / jee7. Anybody has it working?, How did you solve? Posted by Ricardo Ascencio on August 19, 2014 at 03:25 PM PDT # Interesting post. Events on CDI are nice. There might be a problem with the onJMSMessage inside the Endpoint: there's a different instance of the Endpoint for each websocket connection so the CDI event would trigger a call to each one of them resulting on message overflowing. This is easly fixed by declaring onJMSMessage to be static. Posted by guest on October 28, 2014 at 01:47 PM PDT # Hey everyone, First of all wanted to give a shout out to the author Bruno -- this is a great tutorial and I'm glad I happened across it. Has solved some issues I was facing and is basically going to be the core engine behind a website I am developing. With that said, there were a few things here and there I found to make the project a little out of date and not work entirely correctly. After a little finagling, I managed to get it working with Netbeans 8.0.1 and Glassfish 4.1. If you'd like to check it out, find it on my github here: I'm still somewhat new to JMS/CDI, so some additional tips for anyone trying to get this to work: ===== You'll need to setup your glassfish server add the destination queue. To do this, start your server instance and navigate in a browser to. There, click on Resources > JMS Resources > Destination Resources. Once there, click new and create a new destination with the following settings (note: this is specific to the project I've linked above): JNDI Name: jms/msgQueue Physical Destination Name: msgQueue Resource Type: javax.jms.Queue * You may also (probably) need to download and add cdi-api-1.2.jar to the project ===== Hopefully someone out there finds this to be useful. Posted by Steve P on November 19, 2014 at 12:59 PM PST #
https://blogs.oracle.com/brunoborges/entry/integrating_websockets_and_jms_with
CC-MAIN-2016-07
refinedweb
1,635
60.95
Creating Unit Tests In Android, unit testing is based on JUnit, and plane use of JUnit is enough to test the features that are exclusively based on Java code. However, to test Android-specific functionality you need a bunch of wrapper classes built on top of JUnit. IntelliJ IDEA streamlines most of the tasks around the build of an Android test project. 1. Make sure your code is testable Unit testing requires that the source code is composed in such a way that dependencies between modules can be easily neutralized with mocks. In addition, unit testing requires that functions are well isolated from each other. As is, the code of the HelloDroid class is not easy to test. Let's first apply a quick refactoring before we proceed with unit tests. - Open the HelloDroidclass and select the portion of the code in the TapDroidmethod that refers to the production of the display message: - Rewrite the TapDroidmethod in such a way so that it calls into a newly created public helper method ( GetStringFor Display) as shown below: private void TapDroid() { counter++; String temp = getStringForDisplay(counter); message.setText(String.format("You touched the droid %s", temp)); } public String getStringForDisplay(int count) { String temp; switch(count) { case 1: temp = "once"; break; case 2: temp = "twice"; break; default: temp = String.format("%d times", count); } return temp; } The getStringForDisplay method is now much easier to test, and the body of the TapDroid method has been greatly simplified. 2. Create a test module Now let's create a new test module and set HelloDroid as the tested module. This ensures that the test module holds a reference onto the module that contains the source code you are going to test. - From the main menu, select File | New | Module to launch the New Module wizard. - On the first page of the wizard, select Android in the left pane, and Test Module on the right: - On the second page, specify the new module name, for example, Tests. Leave the other fields unchanged. The HelloDroidmodule is specified as the tested module automatically, as at this point, this is the only module in the project. A new node will be appended to the project named Tests. This module has its own manifest file and and src directory. The manifest file links against the android.test library in order to build test classes. The newly created module has a test file named HelloDroidTest in the src folder. You can add more test files simply by adding more Java classes as shown below: public class HelloDroidTest extends ActivityInstrumentationTestCase2<HelloDroid> { public HelloDroidTest() { super("com.example.HelloDroid", HelloDroid.class); } } The test class inherits from ActivityInstrumentationTestCase2<T> where T is the name of the activity you are going to test. Note that adding a constructor is required, as there is no default constructor defined for the parent class. 3. Add a test method In the editor, right-click the HelloDroid test class and click Generate (alternatively, click Alt+Insert). From the popup menu that opens, select Test Method: IntelliJ IDEA creates a new method stub named testName where you can easily change the Name suffix into something more meaningful in the context: The test prefix in the method name is required if you are using JUnit 3, the default testing framework in Android. With JUnit 4, you have to use method name annotations to indicate that a given method must be processed as a test method. 4. Write the logic for a test method Internally, the test method first gets a reference to the activity it is trying to test, then it calls the target method, and, finally, compares the effective results with the expected results. Add the following code to the test method: public void testStringForDisplay() throws Exception { int count = 1; HelloDroid activity = getActivity(); String result = activity.getStringForDisplay(count); Assert.assertEquals(result, "once"); } Assertions are implemented through the services of the JUnit framework and need to be properly referenced in the source file. Press Alt+Enter when the intention action pops up to reference it: 5. Create a run/debug configuration for tests In order to run tests, you need to create a dedicated run/debug configuration. A default configuration is created for you automatically when you set up a test module. To edit its settings, in the main menu select Run | Edit Configurations and select Tests under Android Tests in the left pane: You can select to run all tests in the module, or limit the test to the methods in a given class. 6. Run a test To run your tests, make sure the appropriate run/debug configuration is selected in the drop-down list in the top-right corner of the editor, and click the Run button next to it: Test results are displayed in the Test Runner tab of the Run tool window that is activated automatically. If a test is completed successfully, a green square icon appears in the top right corner of the editor. If there are warnings, the icon is yellow, and if a test fails - it is red. You can click the icon to get more details. You can export a test report to a variety of formats by clicking the Export Test Results icon in the Tests tab toolbar.
https://www.jetbrains.com/help/idea/2017.2/creating-unit-tests.html
CC-MAIN-2018-39
refinedweb
874
58.01
Introduction: How to Write a Simple C++ Program In this tutorial, you will learn how to write a program in the C++ programming language. This will go over 10 steps that will explain a simple C++ program. The topics that each step cover in this tutorial are often used in more complex C++ programs. This is only a simple tutorial designed for new C++ programmers and consequently only covers some of the basic topics in C++. C++ programming is one of the most popular languages and consequently, one of the most useful to know. The following code is the premise for which the tutorial is written on: // This is a line comment /*This is a multiline comment */ //This has libraries to use input and output #include //This has libraries for string operations #include //This has standard c libraries #include //This includes time libraries for c #include int main() { std::cout << "Do you want to guess a number from 0-9?(yes or no)" << std::endl; // prints to the console std::string response; // declares a string variable named response getline(std::cin, response); //gets a line from std::cin and stores it in response std::srand(std::time(0)); // needed once per program run int random_number = std::rand() % 10; // gets a random number from 0-9 int guess = -1; int number_of_guesses = 0; std::cout << random_number << std::endl; if (response.compare("yes") == 0) { while (guess != random_number) { std::cin >> guess; number_of_guesses = number_of_guesses + 1; } } else { std::cout << "You're no fun!" << std::endl; } if (guess != -1) { std::cout << "Number: " << random_number << std::endl; std::cout << "Number of Guesses: " << number_of_guesses << std::endl; std::cout << "Congratulations you guessed the number!" << std::endl; } system("pause"); return 0; } Step 1: Download and Install an IDE The first step to developing your C++ program will be to download an IDE (Integrated Developing Environment). An IDE often includes a compiler, text editor, and often includes a debugger. Using IDEs makes programming simpler. Three IDEs that I have used and would recommend are the following; Dev C++ Visual Studio Eclipse Here's a link to install DevC++ if you choose to use it: Step 2: Commenting Though comments in programming don't change how the code works, it is important in communicating what a program does to future developers. Commenting is more important in larger programs, but is also good to use for smaller programs to develop good habits. There are two basic ways to comment. The first is the line comment. Any line that starts with \\ is a comment. Also any code between /* and */ are comments. This is shown in the picture corresponding to this step. Step 3: #include Directives After comments, #include statements are written. These lines allow us to specify libraries, or to use code we have written in other files. In the example program, we include a library to use C++ input and output streams, a library to be able to use strings, the c standard library, and a time library. These libraries will enable us to use more operations further on in the program. Step 4: Main Function The main function line will be in almost any program you will write. This is where the program will start to run. The main function often is written in the form int main((int argc, char **argv)). This would allow us to pass arguments to our main function, but can be ignored for this program. Step 5: Variables and Variable Types In C++, depending on what type of data is being dealt with, different data types might be necessary. The data type used in the picture on this page shows two variables, guess and number_of_guesses, both of type int. They can hold any integer value as their name indicates. There are different other types of variables. Other basic kind of variables include float, double and char. A char can hold a single character, while a float and double can hold decimal values. An example of a char would be the character 'c'. A value that a float or double could store could be the value 1.5. The example program for this tutorial, in addition to using int, uses the type Std::string. This type can hold a sequence of characters. In the example the value -1 is stored in guess and 0 is stores in number_of_guesses. Step 6: Printing to Console In C++, text can be printed to the console by sending data to std::cout. This can include basic data types. The std::endl adds a new line to the output. This is the C++ way to do this. C++ supports most functionality from C. This includes the printf function. Instead of the following code, it could instead be written as the following: printf("Number: %d \n",random_number); printf("Number of Guesses: %d \n",number_of_guesses); printf("Congratulations you guessed the number! \n"); In the printf function, the text entered between the quotes is the text displayed. After the quotes and the comma, the variables printed out are listed. They are printed out in order and must correspond to a %d, %c or other sequence starting with the percent sign. The \n character displays a new line. Step 7: Reading From the Console In C++, text can be read from the console by sending data from std::cin and storing in a variable. The console waits until user input when the std::cin function is called. After the user types something in, the program will attempt to store it in guess. In this example, no error checking is done, so if something other than an integer were typed in, the program would likely crash. Step 8: Arithmetic Operations and Assignment Operator The assignment operator (= sign) assigns the value from the right side of the equals to the variable on the left side of the equals. For this to work properly, the left side must be a variable. Arithmetic operations allow mathematical operations to be performed on numbers. There are many operators that can be used to operate on numbers. They include addition (+ sign), subtraction ( - sign), multiplication (x sign) among others. In the line of code, the number_of_guesses gets assigned its previous value plus 1. Step 9: Conditional (if) Statements Conditional statements (if statements) change what code runs next depending on what is inside the parenthesis next to an if statement. First the inside of the parenthesis is evaluated. In this instance, if response.compare ( a function from the string library) return 0, the code following it is executed. The compare function returns 0 when the string calling it (response in this instance) is equivalent. Note that strings and basic types use different comparisons. If response is anything other than "yes", "You're no fun!" will be printed to the console. Case does matter. Step 10: Loops Loops are almost always used with if statements and run until a certain condition is met. Inside the parenthesis next to the while loop is essentially an if statement. If the statement is true, the program runs until the closing bracket of the while loop and the condition is evaluated again and if it is true, the program runs until the closing bracket of the while loop. This cycle continues until the condition (guess != random_number) isn't true anymore. This loop runs until guess doesn't equal the random_number variable. Step 11: Final Thoughts This tutorial left out a lot basics essential to programming in C++, but hopefully was useful in providing an example for some basic C++ programming. If you are more serious about programming, many other websites offer tutorials and can be found through a search through google or another search engine. 12 Comments Question 1 year ago on Step 6 What does std::endl means? Tip 2 years ago on Step 9 I made it Question 2 years ago Hey guys i need help solving this please 1. A program is required to compute the volume of a cube given by the formula; V = L*B*H*. Design this program using an algorithm of your choice; and convert the algorithm into a C++ Program 2. Ushirika Society pays 5% interest on shares exceeding Ksh 100,000 and 3% on shares that do not meet the target. However no interest is paid on deposits in the members bank account. Design a flowchart and a C++ source program that can prompt, calculate and display the shares, deposits, interests and total savings on the screen for a member 3. Kambo Company pays employee gratuity on retirement that can be computed using the formula: Gratuity = (Salary x Fixed Rate x Number of Years Worked) + One Month Salary. Formulate an Algorithm that can be used to calculate the gratuity and write a C++ program that can be used for the computation Question 3 years ago on Step 3 In which condition #imclude is used? Question 4 years ago on Step 4 what is the procedure to download an IDE? Question 4 years ago on Step 11 What is mean by function should return a value 4 years ago I need Help, please help...Write a program that will upper limit in the form of integer numbers. / sum all those numbers between upper limit and lower limit (including the lower and upper limits) which are NOT multiple of 4. the remaining integer numbers up until the upper limit is reached. aggregate sum of all numbers for the given range. Make sure that lower limit entered by the user should be greater than zero. Also the upper limit value entered by the user should be greater than the lower limit value. 6 years ago Great article here is another series for beginners C++ 7 years ago on Introduction Write a c++ program for hostel administrator. Assume that there r 50 rooms in a hostel ,room and student. Student will store registration number , batch and name of a student. Room will comprise of room no, status (free or reserved) and student. Hostel will use an array size 50 of room objects. Include necessary constructors and member functions to enter and display data in each class. Program will provide following menu options to the user. -Reserve a room -Cancel room reservation. -Display information of reserved room. -Display information of free rooms. -Search by reg no. -Search by student name. -Exit- 7 years ago What IDE did you use? I use Code::Blocks, and I had to make some major modifications to your source code. They were relatively quick, though, so props to you. 7 years ago on Introduction Write the C++ language program that should fulfill the following requirements: Program should contain two classes as “Student” and “Instructor”. Within main() function, create the objects of these two classes. Display your student ID and name by passing as attributes(student ID, name) while instantiating the object of the “Student” class. Using the object of the Instructor Class, call its member function Input() to take input for the height of the 8 instructors then call its member function print() to display the average height of the instructors. Declare data members by private and member functions by public access specifier for both classes. 8 years ago on Introduction This is a good instructable, but I would suggest refactoring a lot of this into separate functions, to illustrate good separation of responsibility, and to demonstrate functions. { { } } { } e.g. something like the following int main() { bool userWantsToPlay = GetUserWantsToPlay(); if (userWantsToPlay) SeedRandomNumberGenerator(); int randomNumber = GetRandomNumber(); int numberOfGuesses = 0; int userGuess = -1; while (userGuess != randomNumber) userGuess = GetUserGuess(); numberOfGuesses++; DisplayResults(randomNumber, numberOfGuesses); bool GetUserWantsToPlay() std::cout << "Do you want to guess a number from 0-9?(yes or no)" << std::endl; // prints to the console std::string response; // declares a string variable named response getline(std::cin, response); //gets a line from std::cin and stores it in response return (response.compare("yes") == 0) etc
https://www.instructables.com/How-to-Write-a-Simple-C-Program/
CC-MAIN-2022-21
refinedweb
1,969
63.8
This happens to be the first PDC for me. This year a lot of new technologies has been unveiled during the keynotes. One of the most focused is the Windows Azure. This happens to be the one stop shop for building and deploying your service oriented application from local to remote. We all know how slow it is to build service oriented application these days and Windows Azure is just the right tool that can make life easier over the course of time. There is a breakout session by Steve Mark that you might like to checkout if you have missed it. The session is a starting point for setting up your first Windows Azure app and it also shows a way to work with Asp.net MVC for things like uploading photos. Also, I found that you can use Azure for building async based queue service easily with asp.net. Secondly , with C# 4.0 one of the cool feature that I found is the dynamic keyword. Let's say you have the following simple class public class Math { public int Add (int x, int y) { return x + y; } public float Add (float x, float y) { return x + y; } } Add. One is with int and another with float. With the current C# 3.5 (for non generic classes) you need to do Math math = new Math(); Type mType = math.GetType(); object result = mType.InvokeMember("Add", BindingFlags.InvokeMethod, null, math, new object[] {10, 20}); object resultFloat = mType.InvokeMember("Add", BindingFlags.InvokeMethod, null, math, new object[] { 10.5f, 20f }); Console.WriteLine(result); Console.WriteLine(resultFloat); Which of course the only way to call float and int overload depending on value type with existing solution. But with C# 4.0 you can also do the following dynamic math= new Math(); Console.WriteLine (math.Add (10, 20)); Console.WriteLine (math.Add (10.5f, 20f)); As we can see that no invoke method or generics is ever required to call overloads with different type.You can find more interesting stuffs on C# 4.0 from the cool session by Anders Hejlsberg, if you at any chance missed it. PDC 2008 is the event for windows 7. It is supplied to all the attendees with the 160GB mobile hard-drive and of course as a DVC copy. The most cool thing is to be able to add vhds as logical drives and being able to set them as primary drive to boot in native mode. Most of the feature shown in the PDC keynotes are by default in protected mode, so you might need to do tweaks for it. Finally, "M" is the language for defining your domain model and "Quadrant" is a nice tool that harnesses the "M" to design and persist complex domain model visually. Together they are the "Oslo" project. Try the three parts session from microsoftPdc dot com. In the end, with Microsoft Surface setup in different places of the convention center with people participating in "Scavenger Hunt" challenge, cool product showcase by various players at Microsoft ecosystem and PDC dinner at Universal Studios really made things rocking. May be next PDC will be even more exciting with beta products coming out to life ;).
http://weblogs.asp.net/mehfuzh/archive/2008/11/09/retrospective-of-pdc-2008-from-my-view.aspx
crawl-003
refinedweb
533
73.68
This is the case where /cit/computer.yml contains the "Computer Name:" which has the value of the i5 TCP Line name instead of the DNS hostname. Can this name just be replaced by the DNS hostname instead? Concern is if later updating the scanner code will it again use the TCP Line name instead of the DNS hostname. How to make it always use the DNS host name? Answer by SilviuGligore (9653) | Jul 18, 2017 at 04:40 AM To address this scenario, do the following: 1) Decommission the existing computer from LMT UI using the button "Decomission Computer" on the Reports->Computers-> (i5 computer name) 2) Update the name in computer.yml file as desired, generate the new package and import it on LMT server side. The settings in computer.yml will be overwritten when running again the setup script, so when this is needed (for example scanner upgrade) the computer.yml file can be backed up and then replaced/merged with new file to keep the original changes. 113 people are following this question. ILMT 9.2 - Websphere Application Liberty ND 2 Answers Upgrading to ILMT 9.2.10.1 - Prequisites for "Install or Upgrade Scanner" on AIX systems 2 Answers IBM I-series software scan stops responding and no results 1 Answer Hide IP address on the Hardware Inventory report 2 Answers I need to know how to confirm iSeries software in the ILMT tool. It will not let me reassign the following to the OS: 1 Answer
https://developer.ibm.com/answers/questions/387324/can-we-rename-the-iseries-computer-name-that-is-in.html
CC-MAIN-2019-35
refinedweb
252
62.58
I am reading the new book, Windows Internals 5 about memory management (chapter 9 in Physical Address Extension PAE section). It is mentioned "A 32-bit system represents physical addresses internally with 24 bits" on page 770. I am totally confused, and I think x86 physical memory should be also addressed by 32-bit, and it is why the system is called 32-bit system. Any ideas what is wrong? Thanks in advance, George I think the confusion here may be as a result of physical addressing vs virtual addressing. On 32 bit Windows, a virtual memory address (i.e. that which is used by applications and other higher level processes) is 32 bit. Physical memory addresses are down at the level of talking directly to the hardware, which is (1) forbidden for pretty much anything outside of the kernel/HAL, and (2) an implementation detail, so what goes on there could certainly appear weird in more instances than just this. Clearly incorrect. For an accurate treatment of PAE, check out Wikipedia: link text I haven't read that exact passage in the book, but I think I can tell what it's getting at. Memory is broken up by the processor into "pages." Each page is, on an x86 or x64 processor 4K in size. (Pages can actually be larger in some cases, but let's leave out that complexity right now.) The operating system builds page tables which translate virtual addresses into physical addresses. The processor fetches these page table entries as it executes code so that the software can operate using just virtual addresses. The virtual addresses are the same as the physical address, for the lower 12 bits of the address, since 4K is 2 to the 12th power. In an x86 processor with PAE mode turned on, the physical address size is 36 bits. (This was later extended a little bit, and it can be 37 or 38 bits on some machines.) If you subtract the 12 bits that are not specified in the page table entry (because they are the same in the virtual and physical addresses) you are left with 24 bits. Those upper 24 bits of physical address in the page table entry replace the upper bits of the virtual address to make the actual physical address used by the processor. Note that the virtual address don't have 36 bits. It only has 32 bits. So those upper 24 bits replace the upper 20 bits of virtual address. This points out that, while PAE mode allows the machine to have more than 4GB of memory (4GB is 2 to the 32nd power) no single process can have more than 4GB of virtual memory. Perhaps this is referring to physical page numbers? With a 36 bit physical address bus, which is I think the maximum for a 32-bit x86 processor, and 4K pages, you'd need 24 bits to uniquely identify each physical page. #include <stdio.h> int main() { int total = 5; int *ptr = &total; /*Set ptr to the address of total*/ printf("%d",ptr); return 0; } Output: 2293620 Output: 2293620 Which is 0x22FF74, and that's 24 bits. Of course, I'm still hopelessly ignorant when it comes to Windows' memory, so I don't know what that means. :D The author should really kill himself against a wall. "represents physical addresses internally with 24 bits" means that there're only 16Mb available for addressing, so it would have been the maximum RAM available :))) The quote says nothing 'bout page addressing (like Stephen Veiss said), so this is not true. By posting your answer, you agree to the privacy policy and terms of service. asked 5 years ago viewed 442 times active 3 years ago
http://serverfault.com/questions/67074/windows-x86-physical-memory-is-24-bit
CC-MAIN-2015-32
refinedweb
625
60.04
Django Django is an open source Web development framework for the Python language that aims to automate as many processes as possible, allowing you to focus on developing software without worrying about reinventing the wheel. The framework is designed to be loosely coupled and tightly cohesive, meaning that different parts of the framework, while connected to one another, are not dependent on one another. This independence means you can use only the parts of Django you need, without worrying about dependency issues. Django makes writing Web applications faster, and it drastically cuts down the amount of code required, making it much easier to maintain the application going forward. Django strictly observes the Don't Repeat Yourself (DRY) principle, whereby every distinct piece of code or data lives in only one place. This means that when a change needs to be made, it only needs to be made in one place, leading to the process of changing software becoming much faster and much easier. Django was developed by a team of Web developers at the Lawrence Journal-World newspaper in 2003. Under pressure to release applications and enhancements under severe time constraints, they decided to create a Web framework that would save them time, allowing them to meet their difficult deadlines. The team released the framework as open source software in July 2005, and it is now developed by a community of thousands of developers across the world. The Django framework is released under the Berkeley Software Distribution (BSD) open source license, which permits redistribution and reuse of the source code and binary, with or without modification, so long as the copyright notice, license conditions, and disclaimer are retained in the redistributed package. These items must also be present in the redistributed software's documentation and supplementary materials where applicable. And the license specifies that neither the Django name nor the names of Django contributors can be used to endorse or promote derivative products without express written permission. Setting up a basic Django development environment Fortunately, installing Django is straightforward, so setting up a development environment is quick and easy. Django is written entirely in Python, so to install Django, you first need to install Python. If you're using Mac OS X or Linux®, it's likely that Python is already on your machine. Simply run python in your shell (use Terminal.app on a Mac) and you should see something like Listing 1. Listing 1. Make sure Python is running $ python Python 2.5.1 (r251:54863, Nov 11 2008, 17:46:48) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. Listing 1 - Checking for Python on Mac OS X As long as your system has a version of Python from 2.3 to 2.6, you will be able to install Django. If you are a Microsoft® Windows® user or if you need to upgrade to a newer version, download Python. A simple installation package is available for Windows users, so installing Python couldn't be much easier. When you have verified that Python is installed on your computer, you can proceed to install Django. There are three options: installing an official release, installing a distribution-specific installation package, or installing the latest "trunk" version from Subversion. For the sake of this article, I will only walk through the installation from an official release. For information about installing the trunk version, see the official documentation instructions (see Resources). The first step in installing the official Django release is to get the tarball from the Django download page. Once you have downloaded this file, extract its contents. In Linux, simply issue the following command at your shell prompt (be sure to navigate to the directory where you downloaded the package to). Please note that V1.0.2 was the latest release at the time of writing, so be sure to replace this file name with the exact filename of the package you downloaded: tar zxvf Django-1.0.2-final.tar.gz. In Mac OS X, it's likely that your Web browser automatically decompressed the package when it finished downloading, so the file will be Django-1.0.2-final.tar. To extract this file, simply use the following command: tar xvf Django-1.0.2-final.tar. If you are using Windows, you can use a utility such as 7-Zip to extract the tarball. Now that you have extracted the contents of the tarball (probably to a location like Django-1.0.2-final on your hard drive), navigate to that folder in your command prompt. To install Django, issue the following command (Mac OS X or Linux): sudo python setup.py install. Windows users, make sure your command prompt is opened with administrator privileges and issue the following command: setup.py install. Once you have done this, Django will have been installed into your Python installation's site-packages folder, and you are ready to start developing in Django. Before we move onto the anatomy of a Django application, we will test that our development environment is up and running correctly. First, we will check that Django is installed correctly. Open your shell or command prompt and start the Python interactive tool by issuing the command python. Now issue the commands shown in Listing 2 at the Python prompt (don't type the >>>s): Listing 2. Verify that Django is installed correctly >>> import django >>> django.VERSION If your installation was successful, you should see the text shown in Listing 3. Listing 3. Successful installation (1, 0, 2, 'final', 0) >>> Now that we have verified that Django is actually installed, we will test that the development server is working. To do this, we need to create a project. Create a directory to store your Django projects in (I use /home/joe/django on my Mac OS X system) and navigate to that directory. From there, issue the following command: django-admin.py startproject testproject. This will create a new directory within your projects directory called testproject. This directory contains four files: __init__.py, manage.py, settings.py, and urls.py. Don't worry about what these files do right now; we are going to jump ahead and run the project. Make sure you are in the project folder (use cd testproject at the prompt) and issue the following command: python manage.py runserver. You should see the output shown below. Listing 4. Running the Django development server Validating models... 0 errors found Django version 1.0.2 final, using settings 'testproject.settings' Development server is running at Quit the server with CONTROL-C. This message tells us that the development server is now running at the URL. Open your favorite Web browser and paste in this URL in the address bar. You should see a page like the one shown below. Figure 1. Welcome to Django Page You now have a working Django development environment up and running. It's worth mentioning that although you can run full-fledged Django applications in this environment, it is not suitable for use in a production environment. We will cover deploying Django applications for production use later in this article. The anatomy of a Django application Django's architecture is loosely based on the Model-View-Controller (MVC) pattern in that it separates application logic, user interface (UI), and data-access layers, with the goal of allowing each layer to be modified independently, without affecting the other layers. According to Django documentation, however, Django follows a similar pattern: what it refers to as a Model-Template-View (MTV) architecture. The Model can be seen as the data-access layer, where the application interacts with any databases and information sources. The Template is the layer that defines how the data should be presented to the user, whereas this is considered the View layer in an MVC pattern. In an MTV architecture, the View layer describes what data should be presented to the user. It does not define exactly how it should be presented; it delegates that task to the template layer. As for MVC's Controller layer, Django sees this as being the framework itself, as it determines the appropriate view to send requests to, as defined in the URL configuration. In addition to Models, Templates, and Views, Django offers some advanced features out of the box, such as URL configurations, an automatic administrative interface, caching, and more. Like Python, one of the key philosophies behind Django is the batteries-included approach, meaning that it comes with a large standard library of additional packages you can use in your applications without additional downloads. The model layer of a Django application is handled by Django's data-access layer. Within this layer, you will find everything related to the data: connection settings, validation parameters, relations, etc. Out of the box, Django includes support for PostgreSQL (the preferred database of the creators of Django), MySQL, SQLite, and Oracle. Which database to use is stored in a settings file, and the model layer is the same no matter what option you choose. Models in Django can be seen as descriptions of the database table schemas, represented in Python code. Django uses the model to generate and execute SQL statements in the database, which in turn return a result. Django then translates to a Python data structure, which can be used by your Django application. An obvious advantage here is that you can hot-swap among different databases systems (for example, change from MySQL to PostgreSQL) without having to change your models. The code in Listing 5 is an example of a model definition. This would generally be stored in a models.py file in a Django application's directory. Listing 5. Sample Django Model from django.db import models class Person(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=30) email = models.EmailField() date_of_birth = models.DateField() The Template layer of a Django application allows you to separate the UI or presentation layout of a Web application from its data. It uses placeholder variables and simple logic statements to define what data should be populated into the template. Templates usually produce HTML output, but can also produce XML or any other type of document. The idea behind the Template layer is that the presentation layer code is separate from the business-layer code. This means a Python developer can focus on developing the software, while leaving a Web designer to work on the templates. It also means that the developer and the designer can work on the same project at the same time, as the two components are completely separate from one another. It's important to note that Django's template system does not allow Python code to be executed directly from the template. It offers a rudimentary set of programming-style features, such as variables, logic statements ( if statements), and looping constructs ( for loops), which should offer more than enough logic required for the presentation of data. Listing 6 is an example of what a Django template would look like. Listing 6. Sample Django Template <html> <head> <title>Your message has been sent</title> </head> <body> <h1>Thank you!</h1> <p>Just wanted to say a quick thanks, {{ name }}, for the message you have just sent.</p> <p>It was sent on {{ sent_date|date:"j F Y" }}. We aim to respond within 48 hours.</p> <p>Thanks again!</p> </body> </html> Listing 7 shows how this template would be used in a Django application. Listing 7. Loading the sample Django Template in a view def send_message(request): name = "Joe Lennon" sent_date = datetime.datetime.now() return render_to_response('thankyou.html', locals()) View functions, or "views" as they are more commonly known, are basically Python functions that accept a request parameter and return a response. The request is typically a request from the Web server, and the view takes any parameters passed along with this request. The view then performs the logic required to determine the appropriate response before returning that response. Views can be stored anywhere in a Django application, but are usually stored in a file named views.py. Listing 5 is an example of a view function, named send_message. It accepts the request parameter in its definition and returns a rendered template (thankyou.html) as its response. You just read that views can be stored anywhere. If that is the case, how does Django know where to find them? This leads us on to URLconfs, which define what URLs point to what views. URLconfs are stored in a file called urls.py and basically map a URL to a view function. For example, the url /send_message/ might map to our send_message view, as described in Listing 7. In fact, the way URLconfs work allows them to be used for pretty URLs out of the box — in other words, instead of using query strings like myfile.php?param1=value1, your URL might be /myfile/value1/. The code is Listing 8 provides an example of a urls.py file, which connects the URL /send_message/ to our send_message view function, as defined in Listing 7. Listing 8. Sample URLconf from django.conf.urls.defaults import * from testproject.views import send_message urlpatterns = patterns('', ('^send_message/$', send_message), ) One of the most interesting and talked-about features of Django is its automatic administrative interface. Web application developers who have worked on projects that require back-end admin interfaces to be developed in addition to the front end may be able to relate to the frustration and boredom that comes from developing such interfaces. Admin interfaces are usually mind-numbing, don't require much skill, and don't flex your programming muscle in any way. Django's automatic admin interface feature will be of enormous value, as it takes this requirement out of the equation by automating the entire task. Once you have created the models for your application and set up your database settings, you can enable the admin interface for your application. Once you have done so, simply point your browser to and log in to manage the back end of your Django application. The interface is highly customizable and features excellent user and group-based authentication controls. A screenshot of it in action is shown below. Figure 2. Django automatic admin interface in action We have given a high-level overview of how a Django application is created and of how the MTV pattern it is based on works. We have looked at the concepts of models, templates, views, and URLconfs; and we have seen a glimpse of Django's brilliant automatic administration interface system. If you are looking for an in-depth guide to developing Django applications, visit the official Django project Web site and read the documentation there or read the Django Book (see Resources). Both offer an excellent assumption-free guide to all things Django and cover much more detail than we can get into here. Next, take a look at taking Django applications and deploying them to production servers. Readying your Django application for deployment As we have seen, the Django framework conveniently includes a development server, which is ideal for debugging and testing your Django application. Unfortunately, this server is only designed to run in a local environment and could not withstand the pressures of a production Web application used by many people concurrently. For that, you need to deploy Django to a production-grade Web server, such as Apache or lighttpd. Explore some steps that need to be taken to make your application production-ready, then learn what's involved in preparing your Web server for serving up your Django application. Before discussing how to set up your production environment for your Django application, there are a few things you need to do in your Django application's settings. It is important to make these changes because any vulnerabilities in your Django application may be made public, thanks to debug error messages, etc. Naturally, you won't want to change these settings in your development environment, as debug messages and errors are extremely useful when maintaining your application. To solve this, you could maintain two separate settings files: one for your development server and one for your production server. Alternatively, you could employ the following trick to keep them in the same file and tell Django to only use the development settings if it is in the development environment. To do this, you would lay out your settings.py file in the following way (obviously, replace joe-mac-mini with your development server's hostname, as shown in Listing 9). Listing 9. Separate settings for development and production environments import socket if socket.get_hostname() == 'joe-mac-mini': #Development Server Settings go here else: #Production Server Settings go here Now that we've looked at keeping separate settings for our two environments, let's examine the settings we need to change in our production environment. The two essential settings you must change in your production environment are DEBUG and TEMPLATE_DEBUG. These are set to True by default when you create your Django application using django-admin.py startproject. It is essential that you change this to False for your production environment. In your settings.py, in the Production section, this line should read as follows: DEBUG = TEMPLATE_DEBUG = False. By default, Django is set up to send an e-mail anytime an unhandled exception is raised in your Django application. To enable this feature, tell Django who it should send the e-mail to. This is done with the ADMINS setting in the settings.py file. Listing 10. Defining application administrators ADMINS = ( ('Joe Lennon', 'joe@joelennon.ie'), ) If you come across errors in your code when developing your Django application, you may have noticed the error pages Django generates, full of helpful information to assist you in finding the root of the problem. When you switch debug mode off, these nice error pages disappear, as they are a potential security threat. As a result, if someone comes across an error (for example, a 404 Page Not Found, 403 Forbidden, or 500 Internal Server Error), he will only see an ugly error code page. To rectify this, it is advised to create nice, explanatory error template pages and put them in your application's template folder. Each template should be named according to the error code it represents. For Page Not Found, you should name the file 404.html; for Internal Server Error, use 500.html; etc. Now that we have configured our settings for use in a production environment, we will show you how this environment should be set up to house your Django application. Although it is possible to run Django using FastCGI and lighttpd, a more typical setup is Apache and mod_python. We will now look at how to deploy to a server running Apache and mod_python. Then we'll take a brief look at deploying to a shared Web-hosting environment, where access to httpd.conf is forbidden. Deploying Django applications to Apache with mod_python According to the Django documentation, a setup of the Apache Web server running mod_python is the recommended option for deploying Django applications. Django supports setups with at least Apache HTTP Server V2.0 and mod_python V3.0 and later. mod_python is an Apache module that integrates support for the Python programming language into the Web server. It is much faster than the traditional CGI method of executing Python scripts. To load the mod_python module into Apache, add the following line to your server's httpd.conf file: LoadModule python_module /usr/lib/apache2/modules/mod_python.so. In addition to loading the mod_python module, you also need to set up a Location directive that tells Apache what URL to associate with your Django application. For the sake of example, the settings here are what would apply to the testproject project created earlier. Listing 11. testproject Location directive <Location "/testproject"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE testproject.settings PythonDebug Off </Location> This tells Apache that your Django testproject project is accessible via the /testproject URL. For example, if your server's domain name is example.com, your application would be accessible via. To load these new settings into Apache, simply restart the Apache server. Django's developers highly recommend that you do not serve media files (such as images, video, audio, etc.) from the same Web server as your Web application, but in many cases, that is not an option — at least initially. In order to set up an area of your Web site where media files can be served, you can add the following directive to your httpd.conf file. Listing 12. Telling Apache not to use mod_python for media files <LocationMatch "\.(png|gif|jpg|mov|mp3|avi|wav)$"> SetHandler None </LocationMatch> That's all there is to setting up Apache and mod_python for deploying Django to a production Web server. Next, we'll take a look at a common deployment scenario where someone is deploying to a shared Web-hosting server where they are not allowed to modify httpd.conf. Deploying Django applications to a shared Web-hosting environment Unfortunately, dedicated servers and virtual private servers tend to be quite expensive and, as a result, these are not viable options for deployment for everyone. It is common that you will first deploy a Web application on a shared hosting environment, upgrading to dedicated solutions as the application grows in popularity. Luckily, most shared Web-hosting providers include Python support, so it is possible to deploy Django applications in this scenario. Unlike a dedicated environment, end users typically do not have the option of running a separate server process or editing the httpd.conf configuration file. This means they cannot make the changes outlined in the previous section, so they cannot get Django up and running in this manner. Fortunately, it is possible to deploy Django to shared hosting environments, using Web server-spawned processes that execute a FastCGI program. Create a file called .htaccess and place it in the same directory you are deploying your Django application to. Listing 13. .htaccess file AddHandler fastcgi-script .fcgi RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ testproject.fcgi/$1 [QSA,L] Then create a small Python script that informs Apache of the various settings for your Django project and executes the FastCGI program. The name of the file is not important, but it must be the same as the file name in the RewriteRule line of .htaccess. In Listing 14, we used the file name testproject.fcgi, so that's what I will call my script. Listing 14. testproject.fcgi file #!/usr/bin/python import sys, os sys.path.insert(0, "/home/joelennon/python") os.environ['DJANGO_SETTINGS_MODULE'] = "testproject.settings" from django.core.servers.fastcgi import runfastcgi runfastcgi(method="threaded", daemonize="false") Be sure to make the file executable. If you have shell access to your shared hosting server, log in and change to the directory the file is contained in and run chmod 755 testproject.fcgi. If you do not have shell access, you can change file permissions with most decent FTP clients. Anytime you change your application's code, you need to change the timestamp on this file. This tells Apache that the application has been updated, and it will continue to restart your Django application. If you have shell access, this is as simple as running touch testproject.fcgi. If you do not have shell access, you can update the timestamp of the file by reuploading it or by editing it and resaving it. If you prefer not to get your hands dirty with these configuration files, you could always avail of a hosting service designed to support Django applications out of the box. The popular MediaTemple hosting provider offers a Django GridContainer add-on to its GridService offering, starting from $20/month for 256 MB of RAM. The GridContainer runs on a pre-tuned lighttpd/FastCGI setup, and the amount of RAM can be increased to scale along with your application. Scaling your Django deployment If your Django application is successful, it is likely that you will need your deployment to be as scalable as possible. It's common for Web applications to work fine under average loads, but phenomena like the Digg effect can forward so much traffic to the application that it buckles under the surge in load. Fortunately, Django and Python are highly scalable by nature, but there are other things you need to consider as your application grows. If you are running your Django application on a shared hosting environment and are starting to feel that it is outgrowing the limited resources available to it in such a space, your obvious first port of call is the move to a dedicated machine. If cost is an issue, virtual private servers offer an inexpensive intermediary between shared hosting and a dedicated server. As your application grows, even the resources of a dedicated server can become sparse very quickly. The following are some remedies that may ease the burden on your server: - Turn off any unused processes or daemons, such as mail servers, streaming servers, gaming servers or any other unnecessary processes, that are hogging precious CPU time and RAM. - Farm off your media files to a cloud platform, such as Amazon S3. This will allow you to only use your Web server for Django requests and keep your media on a separate server. - Turn off Apache's Keep-Alive option. Keep-Alive is an extension to the HTTP protocol, which allows persistent connections over HTTP. This, in turn, allows multiple requests to be sent down over the same TCP connection, dramatically speeding up the serving of static HTML documents and images. Unfortunately, this extension can have a negative effect on the performance of a Django application. Please note that you should only turn this option off if you have moved your media files to a different server. To turn off Keep-Alive, find the relevant line in your httpd.conf file and change it to Keep-Alive Off. - Use Django's built-in cache framework. It is powered by memcached, a popular distributed memory-object caching system. Effective caching can drastically enhance the performance of your Django applications. - Upgrade your server. Dedicate as much RAM as possible, and if disk space is an issue, consider adding a new disk. If your server is struggling, it's more than likely that it's due to the RAM being used up. Don't waste your money upgrading processors — spend the money on RAM, instead. - Buy another server. There may come a time when your server just can't handle the load from your Django application on its own. As soon as your single server setup starts to give way, add another server. You should run your Web server on one machine and your database server on the other. Be sure to use the machine with more RAM for your database server. If required, upgrade the new machine with more RAM and disk space when necessary. - Use database replication. If your database server is running out of resources, you can ease the burden by replicating it across more than one server. Once replication is in place, you can add servers when required to provide additional resources. - Add redundancy. With large-scale applications, having a single point of failure for your Web server or database server is a disaster waiting to happen. You should add redundant servers where possible, which will take over in the event of the primary server failing. Additionally, using load-balancing hardware or software, such as mod_proxy, to distribute the traffic across your servers can drastically increase the performance of your application. It is important to consider a scaling path for your Django application as early as possible. This allows you to put a plan of action in place that covers every possible scenario. So, if the Digg effect has your application on its knees, you can simply kick the next stage of your scaling plan into action and embrace your new users with open arms — and more importantly, a fast-running application. Summary We have looked at the Django framework from both ends of the spectrum — from a developer with no prior experience using the framework starting out to someone who has a fully ready Django application waiting to be deployed looking for a guide on how to tackle deploying the application to the Web. We've also examined what to consider in terms of scalability for the future. We looked at what a Django application is made up of and learned about the Model-Template-View (MTV) pattern it is based on. We have seen that Django is a lightweight, easy-to-install, and easy-to-learn framework with excellent documentation and a vibrant community surrounding it, making it a great framework for your next Web application. Resources Learn - Check out the Django documentation for a plethora of tutorials and articles about Django and how to use it to develop powerful Web applications. - Django Book: Read a free online version of an excellent book about Django, covering everything from introductory concepts to deployment, internationalization and security. - Dive Into Python: Read a free online version of this comprehensive book on the Python programming language. - Read the Wikipedia entry for Django. - Visit DjangoSnippets.org for a multitude of reusable, user-contributed Django code snippets you can freely use in your own projects. - "Develop for the Web with Django and Python" offers an in-depth guide to the fundamentals of developing with the Django framework. - developerWorks Web development zone: Expand your site development skills with articles and tutorials that specialize in Web technologies. - - Visit the Django Project to download the Django framework and subscribe to the official Django blog. -.
http://www.ibm.com/developerworks/opensource/library/os-django/index.html
CC-MAIN-2013-48
refinedweb
4,972
54.32
It is very difficult to find a convincing practical example that uses attributes that is general enough to convey the principles and ideas. If you look at the way attributes are used in C# and in the Framework you will soon discover that they have a limited application without extending the way that they behave to affect the compiler.Of course using custom attributes you can’t do this. Some built-in attributes do give you a good idea of what they are good at without extra tricks. For example, the whole p/invoke and com interop marshaling of parameters is a perfect example. You apply marshaling attribute tags to the parameters of external methods you want to call and when you use them the Framework/CLR uses these attributes to decide how to transform managed data into the correct unmanaged data and vice versa. What makes this a good use of attributes is that the marshaling has to be applied at the class level and once applied isn’t going to be changed at the instance level, and the code that processes the instances can keep all of the mechanisms that load and respond to the attributes applied hidden from view. As far as the user of the attributes is concerned it’s a matter of applying the attribute, then ignoring it in all the code they write but when they use some supplied machinery the attribute makes it work correctly. Consider for a moment the problem of assigning a format to the fields of a struct using a new attribute - Formattable. Implementing a rough sketch of such a facility reveals quite a lot about the difficulties inherent in doing a “good job” using attributes. The idea is simple, the implementation quickly becomes complicated. First we need a new attribute: [AttributeUsage(AttributeTargets.Field, AllowMultiple = false, Inherited = false)]public class Formattable : Attribute{ public string format; public Formatable(string _format) { format = _format; }} There is nothing new here but it is worth saying that it would be better to implement the format string as a get/set property. This attribute can now be applied to any field in any struct or class. Notice that you can restrict its application to say a field in a struct rather than a class. Now that we have an attribute we can apply it: public struct MyData{ public int notacost; [Formatable("Money")] public int cost;} Here we have a simple struct with two int fields one of which, cost, will hold a currency value and hence is tagged with the Formattable attribute set to Money. Immediately we have a problem in that the user can enter any string they care to into the new attribute – not just the few we want to allow. One way of restricting the input to an attribute is to define an enum. For example: public enum FormatType{Money,Percentage}; With this enum the attribute definition changes to: [AttributeUsage(AttributeTargets.Field, AllowMultiple = false, Inherited = false)]public class Formattable : Attribute{ public FormatType format; public Formatable(FormatType _format) { format = _format ; }} and the struct becomes: public struct MyData{ public int notacost; [Formatable(FormatType.Money)] public int cost;} Now the user really can only enter values that you provide. The next task is to create some machinery to process the attribute. In this case we can assume that some sort of method has to be provided that displays the struct, taking notice of the attributes. The simplest way of providing this machinery is to introduce a static class – after all why should we need to create instances of this machinery? public static class Formatter{ The first problem is what to pass to the method that does the formatting? <ASIN:1430229799> <ASIN:0262201755> <ASIN:0596800959> <ASIN:047043452X> <ASIN:193435645X>
http://www.i-programmer.info/programming/c/972-custom-attributes.html?start=4
CC-MAIN-2013-48
refinedweb
622
56.69
Re: counting nils in a list - From: nick_keighley_nospam@xxxxxxxxxxx - Date: Mon, 16 Mar 2009 07:58:36 -0700 (PDT) On 14 Mar, 19:18, p...@xxxxxxxxxxxxxxxxx (Pascal J. Bourguignon) wrote: david <not...@xxxxxxxxx> writes: i was inspired by this code from graham's acl: [...] to write a little program to generate forsythe edward notations. i changed it a bit to make it work for my program. now i try to write my own compress function to just handle my particular case. thinking this will be easy since i believe i understand the code above. and also that will be simpler than a general compress function. i also want to learn to design the recursive function with car, cdr, and cons. so given a list like '(nil nil B nil nil nil |k| nil) i want it to return (2B3k1) my usual approach of typing at the repl until something works has failed me. i need a better design methodology. all thoughts, comments, flames appreciated. thanks, david You can start from the result you want: <some result> What is it? How can you get it? It is <some description> We can get it using the function <some funcs> DONE. Yes, really! I can see you don't believe me. Check it: <some test> We just need now to compute value of the symbol <some result> What is it? How can you get it? <snip> So all we have to do now is code up Pascal's algorithm as a macro and we'll never have to work again... In conclusion: it's really quite simple, you only have say what result you need, to define and describe it precisely, then to determine how you can get it from some intermediary results and simple operation, and to repeat until the intermediary results are actually the input parameters. During all the process, you can check the chunks of code you have so far work, applying them on a few example of intermediary results. It is also possible to start from the input parameters, and step by step try to produce the end results, but it is often easier to do the reverse. way cool! . - Follow-Ups: - Re: counting nils in a list - From: Pascal J. Bourguignon - References: - counting nils in a list - From: david - Prev by Date: CMUCL 19f released - Next by Date: Re: counting nils in a list - Previous by thread: Re: counting nils in a list - Next by thread: Re: counting nils in a list - Index(es):
http://coding.derkeiler.com/Archive/Lisp/comp.lang.lisp/2009-03/msg00671.html
CC-MAIN-2015-35
refinedweb
416
72.05
Counting uppercase? Part 3 Stacey Johnson Ranch Hand Joined: Jan 11, 2004 Posts: 55 posted Feb 14, 2004 11:41:00 0 Hi out there, Okay I'm slowly figuring things out, my biggest problem was the "isUpperCase()" method, and I don't know if I've fully fixed it. My program does compile now but when I enter a string into the command line for example NoOn, I expect the output to be: "N appears 1 time" "O appears 1 time" But instead I get the following error as an output: Exception in thread "main" java.main.ArrayIndex.OutOfBoundsException: -19 at Exercise7_7.countLetters (Exercise7_7.java:123) at Exercise7_7.main (Exercise7_7.java:102) My updated code is as follows: import javax.swing.JOptionPane; public class Exercise7_7 { /** Main method */ public static void main(String[] args) { //Initializes word variable String word = args[0]; //Check command-line argument if (args.length !=1) { JOptionPane.showMessageDialog(null, "You did not enter a string", "Exercise 7.7, TME 2", JOptionPane.INFORMATION_MESSAGE); System.exit(0); } // Invoke the countLetters method to count each uppercase letter int[] count = countLetters(word); // Display results for (int i = 0; i < count.length; i++) { if (count[i] != 0) System.out.print((char)('a' + i) + " appears " + count[i] + ((count[i] == 1) ? " time\n" : " times\n")); } //Ends for loop }//Ends main method // Count each letter in the string public static int[] countLetters(String word) { int[] count = new int[26]; for (int i = 0; i < word.length(); i++) { if (Character.isUpperCase(word.charAt(i))) { } count[(int)(word.charAt(i) - 'a')]++; } return count; }//Ends countLetters method } //Ends public class If anyone can give me some insite as to what the problem is, I would appreciate it. Thank you in advance Stacey Nils Hofmann Greenhorn Joined: Jan 10, 2004 Posts: 9 posted Feb 14, 2004 12:20:00 0 Hi Stacey, well, 'A' i at position 65, whereas 'a' is at postion 97. So if you run your programm with the Parameter NoOn, the following happened at the line count[(int)(word.charAt(i) - 'a')]++; word.charAt(i) returns 78, the position of 'N'. 78-97=-19 and that causes the Exception, because negative array indices are not allowed. So, you have to replace the 'a' with and 'A', further the curly brace for the if-statement should go 2 lines below, that this works correctly. that's all. Nils Mark Vedder Ranch Hand Joined: Dec 17, 2003 Posts: 624 I like... posted Feb 14, 2004 13:14:00 0 Hi Stacey, A couple of things I've noticed. First off, take a look at this if statement inside your countLetters method: for (int i = 0; i < word.length(); i++) { if (Character.isUpperCase(word.charAt(i))) { } count[(int)(word.charAt(i) - 'a')]++; } If you fix your indentation to line up nicely, you will probably see it better: for ( int i = 0; i < word.length(); i++ ) { if ( Character.isUpperCase(word.charAt(i)) ) { } count[(int) (word.charAt(i) - 'a')]++; } What code is the if statement ececuting right now? Is that what you want to happen? I think probably not Using proper indentation will helps you (and any programmer) a lot in spotting these type of errors. Take a look at that a minute, and I have some other comments for you that I'll post in a few moments... Mark Vedder Ranch Hand Joined: Dec 17, 2003 Posts: 624 I like... posted Feb 14, 2004 13:59:00 0 Stacey, Something that will help you as you progress in your studies is to understand what an Exception means so you know what to look at in your code. It will also tell you (basically) where to look for the problem in your code. A java.main.ArrayIndex.OutOfBoundsException says that your code is trying to access an array element at an index, but that index is "out of bounds" � that is it's either too small (i.e. less than 0 since arrays start at index 0) or too large (i.e. greater than array.length � 1 since arrays are 0 based the largest index value is one less than its length) Therefore, if I have an array with three elements at indexes 0, 1, and 2; the bounds of my array are 0 and 2. So if my code ever tries to access an element outside those bounds (like array[3] or array[100] or array[-1]) that piece of code will throw an OutOfBoundsException. The message will tell you what the index value was that you tried to access: Exception in thread "main" java.main.ArrayIndex.OutOfBoundsException: -19 The "-19" tells us that your code tried to access an index of -19 in an array. The next lines are what is called a Stack Trace � these lines trace through the Stack of the JVM to show where the error occurred. In this case, it is a nice short Stack Trace: at Exercise7_7.countLetters (Exercise7_7.java:123) at Exercise7_7.main (Exercise7_7.java:102) It tells us that in the Class Excersice7_7, the countLetters method threw the exception. In particular the code on line 123 caused the exception. The countLetters method was called by line 102 of the main method of the Exercise7_7. Using a Stack Trace can help you trace through your code to figure out where the error occurred. The problem will not always be right at the line of code that the Stack Trace lists. The problem could be in previous code that sets a variable or a condition that manifests itself latter and causes the exception. For example, in this code: 1 public class FindIt 2 { 3 public static void main(String[] args)0 4 { 5 doSomething(); 5 } 7 8 public static void doSomething() 9 { 10 String[] anArray = {"Alpha", "Delta", "Gamma", "Zulu"}; 11 int index = someMethodThatReturnsAnInt(); 12 //some code... 13 14 //some more code... 15 16 String valueNeeded = anArray[index]; 17 System.out.println(valueNeeded); 18 } 19 20 public static int someMethodThatReturnsAnInt() 21 { 22 //Some Fancy code that declares 23 //retVal as an int and calculates it to 24 // be 10000 for this run of the program. 25 return retVal; 26 } 27} When I run this, I get the following exception: Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException : 10000 at FindIt.doSomething(FindIt.java:16) at FindIt.main(FindIt.java:5) So this tells me that line 5 of the main method of the FindIt class called The doSomethingMethod, and line 16 of the doSomething method tried to access index 10000 of an array, but that 10000 was outside the array's bounds. So line 16 of my code is causing the exception to be thrown, but the real "problem" is that the someMethodThatReturnsAnInt() method is returning a value that is unrealistic for my array. Therefore, I either need to fix that method, or catch and handle the exception in my doSomething method (I am not sure if you have gotten to Exception Handling and try/catch blocks yet in your studies, so I will not mention anything further at this point.) I hope that all that will help you learn how to track down the problems in your code. Nils pointed out the specific issue as to why you are getting the exception with your code, but I think it will help you find problems if you understand how to read Exception messages. You also have one other problem with your code. See my next post... [ February 14, 2004: Message edited by: Mark Vender ] Mark Vedder Ranch Hand Joined: Dec 17, 2003 Posts: 624 I like... posted Feb 14, 2004 14:16:00 0 Ok, one last problem your program... Try running your program without a word at the command line. In other words, try running it as: java Exercise7_7 You�ll notice you are getting another IndexOutOfBounds exception thrown. See if you can figure out why. I�ll give you a big hint You do not need to add or remove any code to fix it. In fact, previous versions of your program did not have this issue. Just think about what your code is doing and when it is doing it. See if you can figure it out... If not, reply to this post and I or someone else can help you find it. Stacey Johnson Ranch Hand Joined: Jan 11, 2004 Posts: 55 posted Feb 14, 2004 17:27:00 0 Thanks you all so much for your help. As special thanks to Mark for not giving me the answer right out but making me think about it. I appreciate it. This beginner Java course is alot of work but I'm slowly catching on. This is a great sight for help. I don't know what I would of done if I didn't know about it. Again thanks so much for your help, it's kept me sane!! I'm going to switch hear for a few hours and turn to my operating systems course. Stacey I agree. Here's the link: subject: Counting uppercase? Part 3 Similar Threads Simple Palindrome Program Counting uppercase? Need help extending a program. Return number of characters in a Word Counting uppercase? Part 2! All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/395574/java/java/Counting-uppercase-Part
CC-MAIN-2015-35
refinedweb
1,537
72.26
Hi Jim, > From: Jim Meyering > > Is Visual Studio C++ 6.0 an important (by some measure) > development platform? Let's just say the story of Microsoft, Windows and their tools is a well known and in some circles beloved story. > Isn't there a more-nearly POSIX-conforming > development environment that you can use? Strictly speaking yes, I've identified 2 gcc Windows ports but resources constrain getting the Windows CVS build there. > I hate to spend time > worrying about systems that don't even come close to providing > POSIX conformance. Microsoft Windows becoming POSIX compliant reminds me of very cold days in hell. I agree a large commitment of time is not appropriate here. > In any case, I hope we can do something cleaner than including > <winsock.h>. If I were to change nanosleep.c to do this: > > #if HAVE_SYS_SELECT_H > # include <sys/select.h> > #endif > > your system could get by with a definition like this in config.h > > #define HAVE_SYS_SELECT_H 1 > > and then a package-supplied sys/select.h file containing the > required declarations in a directory searched by the compiler, > specified via `-I dir' or whatever the equivalent is. > > That approach is more in line with the goal of keeping the > code from becoming encumbered with hacks designed solely to > accommodate non-conforming systems. Your suggestion looks entirely reasonable to me however I won't be doing most of the work since I'm not a CVS project committer. I hope Derek Price and Jim Hyslop can add definitive opinions as both have been active on this topic. Many thanks, > Jim Conrad
http://lists.gnu.org/archive/html/bug-cvs/2004-05/msg00186.html
CC-MAIN-2014-10
refinedweb
264
57.57
When you request any page into MVC Application, it will go through the Routing Architecture. Your Routing system will decide your URL pattern. The default routing algorithm is like {controller}/ {action}/ {id} patterns. But it can be possible to change that pattern using Custom Routes. {controller}/ {action}/ {id} In this article, we will show how to Create Custom Routes, Debugging Routes and Router Constraints. You can configure Route into the Global.aspx's Application Start Event. When you are creating an MVC application, it will already define Default route. Default route can be configured using the following code: routes.MapRoute( “Default”, //Route Name “{controller}/{action}/{id}” //Url With Parameters , new { controller = “Home”, action = “Index”, id = ““} // Parameter Defaults ); It is possible to develop an MVC application without creating a Custom Route. But sometimes, there is a situation to change the pattern of URL in which it makes sense to create Custom Routes. Now, we take a simple example of the custom routes. Imagine that you want to create a blog application and you want to route a request look like: /Archive/12-25-2010. The first step is to create an empty MVC application. And Add Controller File named BlogController.cs and Add Actions named Archive and insert that code will look like below: public class BlogController : Controller { // // GET: /Blog/ public string Archive(DateTime? entryDate) { return "You wants Blog Entry on Date:=" + entryDate.ToString(); } public ActionResult Insert() { ViewData["Message"] = "Call From the Insert Action When Get Http Occurs"; return View(); } } Now go the Global.asax file and add the following code into the Register Routes method. //Simple Custom Route with out any Constraints routes.MapRoute( "BlogWithoutConstraint", "Archive/{entrydate}", new { Controller = "Blog", action = "Archive", }); This code used to add Custom routes and it will match the URL like, Archive/12-25-2010. When you create a Custom Route, you can also include route constraints. These constraints are used to restrict the requests to match routes. There are three basic types of constraints: HttpMethod CatchAll You can use regular expression constraints to check the pattern of the Route and prevent invalid requests. Regular expression is used to check any pattern like Currency, Date, time, etc. In the Above BlogArchive Example, Url likes Archive/12-25-2010 and Archive/mybirthday. It will allow. Now we have to prevent parameter which is not a date. So it is possible using regular expression constraints. For that, add the following code into Global.aspx file Register Route method. //Custom Route With Regular Expression Constraints routes.MapRoute( "Archive/{entrydate}", new { Controller = "Blog", action = "Archive" }, new { entryDate = @"\d{2}-\d{2}-\d{4}" }); The above code matches the entry date must match pattern of two decimals followed by dash followed by two decimals followed by dash followed by four decimals. You can match route with any type of the HTTP operation like POST, GET, etc. you want to prevent user to access particular URL when GET operation occurs but not POST operation occurs. You can also used AcceptVerbs instead of HttpMethod constraints. GET GET AcceptVerbs For example, BlogArchive example inserts Action only executes when HTTP GET Operation performs. For that, you have to add the following code into Global.asax file Register Route method. inserts Action //Custom Route With HttpMethod Constraint routes.MapRoute( "HttpMethodConstraintRoute", "Blog/Insert", new { Controller = "Blog", action = "Insert" }, new { method = new HttpMethodConstraint("GET") }); Generally, your URL must match with number of segments, but you want to match your URL, regardless of the number of the segments for that, we have to create Catch-All Parameter. Now take a simple example of the Catch All Routes for that add one controller named CatchAllController and add the following code into the controller. CatchAllController public class CatchAllController : Controller { // // GET: /Sort/ public string Index(string AllValues) { string[] PassedValue = AllValues.Split('/'); return "You are Passing Values:" + String.Join(",", PassedValue); } } Now to catch all parameters, add the following code into the global.asax file’s register route method. //Custom Route Catch-All Routes routes.MapRoute( "CatchAllRoute", "CatchAll/{*AllValues}", new { Controller = "CatchAll", action = "Index" }); Now it will match the following URL like, CatchAll/index/1/2/3/4/5/6/7 and CatchAll/a/b/c/d/e/f/g/h/. A catch-all parameter must appear as the last parameter. Think of a catch-all parameter as a parameter array. After creating Custom Routes, you have to check whether your routes work perfectly with your given URL or not. For that, you have to debug your routes. To Debug Routes, you have to download one DLL from the and add a reference into your MVC Application. Now debug route you have to write the following line of the code into the Global.asax Application Start Event. RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes); And run your application. You will see the below screen in your browser: You can see all the custom routes into All Routes Table and it will show that your browser URL matches with which Routes. If it will match any Routes, it will show true into the Match Current Request Routes, otherwise it will show false. You can check with another URL, for that you have to write URL into the Address Bar. true false Now write URL like, CatchAll/1/2/3/4/5/6/. It will match with the Catchall CustomRoutes. Catchall CustomRoutes In this way, you can check with other Routes too. It is possible that your URL matches with more than one Route, at that time the first match Routes Executes. The goal of this article is to demonstrate the flexibility of the routing system provided by ASP.NET MVC. Hope this helps! This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Math Primers for Programmers
http://www.codeproject.com/Articles/299531/Custom-routes-for-MVC-Application
CC-MAIN-2013-20
refinedweb
969
66.13
LPC4088 Experiment Base Board - Projects LPC4088 Experiment Bundle This page lists a large number of projects for the LPC4088 Experiment Base Board, which together with the LPC4088 QuickStart Board forms the LPC4088 Experiment Bundle. Below is a picture of this bundle. List of projects - Demo application (contains examples for all peripherals on the board) - Graphical demos for 4.3/5 inch LCD (several demos) - Audio framework - ePaper demo - Graphical demos for displays from - Demos with Cellular and Positioning shield - IoT example with Cellular and Positioning shield Demo Application This application contains examples for all peripherals on the board. It is used during production test and can also be used for troubleshooting. Import programlpc4088_ebb_ptp Production Test Program (PTP) for the LPC4088 Experiment Base Board The Demo Application contains one class for each test/demo like this: Graphical Demos for 4.3/5 inch LCD The projects listed below all works with both the 4.3" and 5" displays but requires configuration in main.cpp to choose which one to use. Audio Framework Displays from DisplayModule These examples all use displays from. This example is for the 1.8" display (DM-TFT18-101) using the 40-pin connector on the Experiment Base Board. Note that this connector does not come pre-soldered. Import programlpc4088_ebb_dm_40pin_spi Example of using the DM-TFT18-101 display This example will work on all of the displays shown here. Note that only the DM-TFT18-101 display can be connected to the 40-pin connector - all other displays must use the arduino compatible connector on the LPC4088 Experiment Base Board. For the displays without direct arduino pinning this requires an adapter. For more information see the overview. Import programlpc4088_ebb_dm_bubbles Example showing the use of displays from DisplayModule.com with the LPC4088 Experiment Base Board This example is a basic calculator to show the use of the touch controller. It will work on all of the displays shown here which have touch support. All the touch enabled displays use the arduino compatible connector on the LPC4088 Experiment Base Board. For the displays without direct arduino pinning this requires an adapter. For more information see the overview. This shows the LPC4088 Experiment Base Board with the DM-TFT28-105 display mounted: Import programlpc4088_ebb_dm_calc Basic calculator example to show the LPC4088 Experiment Base Board with a touch LCD from DisplayModule ePaper Display This example is for the 2.7" ePaper display from Embedded Artists. The picture below shows the ePaper display connected to the Serial Expansion Connector. Import programlpc4088_ebb_epaper Example with the 2.7" epaper display EM027BS013 Character LCD This example requires a standard character LCD, of the type presented here: Text-LCD. The picture below shows the ePaper display connected to the LPC4088 Experiment Base Board. Import programlpc4088_ebb_textlcd Example of using a Text LCD with the LPC4088 Experiment Base Board Cellular and Positioning shield Below is a picture with the Cellular and Positioning shield mounted. The shield is built around u-blox high-quality cellular and positioning modules. Very simple, yet working perfect, UART pass-through application, when running u-blox m-center PC application. The cellular modem UART channel is tunneled to the mbed HDK virtual COM port (which the m-center application can connect to). UART pass-through application #include "mbed.h" Serial pc(USBTX, USBRX); // tx, rx Serial ublox(D1, D0); // tx, rx int main() { DigitalOut resetPin(D4, 1); // SARA-U2/LISA-U2 50..80us resetPin = 0; ::wait_us(50); resetPin = 1; ::wait_ms(10); // SARA-G35 >5ms, LISA-C2 > 150ms, LEON-G2 >5ms resetPin = 0; ::wait_ms(150); resetPin = 1; ::wait_ms(100); pc.baud(115200); ublox.baud(115200); while(1) { if(pc.readable()) { ublox.putc(pc.getc()); } if(ublox.readable()) { pc.putc(ublox.getc()); } } } All the GSM/GPS examples require the following to work: - A working SIM card for the GSM network you want to connect to - The APN (Access Point Name) for the SIM. The APN is the server that allows the modem to access the internet. - An external power source for the GSM/GPS module. It is recommended to use a phone charger with a micro USB connector and not one of the port on the computer. Each example must be configured with the APN information above before use. Lookup the following block in main.cpp: GSM/GPS module configuration //------------------------------------------------------------------------------------ // You need to configure these cellular modem / SIM parameters. // These parameters are ignored for LISA-C200 variants and can be left NULL. //------------------------------------------------------------------------------------ #include "MDM.h" //! Set your secret SIM pin here (e.g. "1234"). Check your SIM manual. #define SIMPIN NULL /*! The APN of your network operator SIM, sometimes it is "internet" check your contract with the network operator. You can also try to look-up your settings in google: */ #define APN "online.telia.se" //! Set the user name for your APN, or NULL if not needed #define USERNAME NULL //! Set the password for your APN, or NULL if not needed #define PASSWORD NULL //------------------------------------------------------------------------------------ This example will try to fetch and post some information using HTTP requests. Only the fetching will actually work as the posting requires permissions. but the code is still good as a reference. Import programlpc4088_ebb_ublox_Cellular_HTTPClient_HelloWorld Example showing the ublox Cellular GPS/GNSS module with the HTTPClient library to fetch and upload web pages. This is a comprehensive example that will read GPS data, use TCP sockets and respond to SMS messages. For more information look at the C027_SupportTest support page. Import programlpc4088_ebb_ublox_Cellular_GPS_SMS Example showing how to use the ublox Cellular GPS/GNSS module. The program will use GPS to get location and time, setup TCP connections and finally send and receive SMS messages IoT example with Cellular and Positioning shield The following two examples use the PubNub data stream network to publish/subscribe to data. Note that some work is required to get this example working: - Update the program code with the publish/subscription keys received for the new account. - Read the PubNub documentation in order to understand the system - Understand that the free account has some limitations Import programlpc4088_ebb_ublox_Cellular_PubNubDemo Example showing the ublox Cellular GPS/GNSS module with the online PubNub service on an LPC4088 Experiment Base Board Import programlpc4088_ebb_ublox_Cellular_PubNubDemo_rtos Example showing the ublox Cellular GPS/GNSS module with the online PubNub service on an LPC4088 Experiment Base Board. This example uses an RTOS. Please log in to post comments.
https://os.mbed.com/users/embeddedartists/notebook/lpc4088-experiment-base-board---projects/
CC-MAIN-2017-43
refinedweb
1,048
55.84
Introduction to PHP OOP Interview Questions and Answers PHP (Hypertext Pre-processor) is a server-side scripting language used in web development which is based on Object-Oriented Programming (OOP) language. It was designed primarily based on the C++ programming language. It is also used as a general-purpose programming language. It was originally developed by Rasmus Lerdorf in the year 1994. This PHP code can be embedded into HTML code in web development. PHP is a dynamically typed programming language which supports different Operating Systems such as Windows, Unix, etc. PHP standards, specifications, and implementations are provided by The PHP group. PHP stands for Personal Home Page. Now, if you are looking for a job which is related to PHP OOP then you need to prepare for the PHP OOP Interview Questions. It is true that every interview is different as per the different job profiles but still to clear the interview you need to have a good and clear knowledge of PHP OOP. Here, we have prepared the important PHP OOP Interview Questions and Answers which will help you get success in your interview. Below are the 10 important PHP OOP Interview Questions and Answers that are frequently asked in an interview. these questions are divided into two parts are as follows: Part 1 – PHP OOP Interview Questions (Basic) This first part covers basic PHP OOP Interview Questions and Answers 1. What is a session in PHP? Answer: A session in a PHP is used to store the information or data across multiple web pages on a web browser which is not stored on the local computer’s memory. A session in PHP is started using the function session_start() and the session variables can be set using the global variable called $_SESSION. In the process of creating or starting a session in PHP, the ideal scenario will be a unique identifier will be created during the start of a session, then a cookie will be sent to the user’s machine to identify the unique session and this unique id is stored in the remote server being called to identify the user during the life cycle of that session. Once the session is timed out, then the entire user session data will be deleted. The default session timeout period is 24 minutes in PHP. 2. What are constructor and destructor in PHP? Answer: This is the basic PHP OOP Interview Questions which is asked in an interview. The Constructor in a PHP is used to create an object of its class type which is used to initialize the properties of the object using a method called _construct(). The constructor in PHP can be overloaded by passing the different arguments by calling the same method with the different number of arguments inside the same class declaration. The Destructor in a PHP is used to call when the lifecycle of the object is going to be ended to release the memory occupied by it or to garbage collect the same. The destructor can be called using the method _destrcut(). 4.5 (3,251 ratings) View Course 3. What are the general uses of PHP? Answer: The general uses of PHP are – It can perform system function operations such as create, read, open, close and write them to and from the computer memory. It can access cookies variables and set the values in the browser. It can also provide a restriction to access some web pages that provide security. The data can also be encrypted using PHP. Let us move to the next PHP OOP Interview Questions And Answer. 4. What is an Interface in PHP? Answer: An interface is a contract to provide the implementation of different functions as per the user’s requirement. This can also be called as the blueprint of the required structure. An interface can be used and implemented many ways inside the implementation class. 5. What are the different data types in PHP? Answer: The different data types in PHP are String, Floating point numbers such as double, Integer, Boolean, Array, Object, NULL, and Resource. A PHP String is an array of characters. An integer is a number without decimal points. A Boolean can be true or false. An Object is a type which can be explicitly declared to store different types of data in it by encapsulating it. An array is an identical list of data types stored in a sequence which can be accessed based on an index. A NULL value is also a data type which indicates no value and having a single value NULL. Part 2 –PHP OOP Interview Questions (Advanced) Let us now have a look at the advanced PHP OOP Interview Questions. 6. What is OOP and how PHP is an object-oriented language? Answer: An Object-Oriented programming (OOP) language will have object-oriented features such as Abstraction, Encapsulation, Inheritance, polymorphism etc., PHP was primarily based on C++ programming language which is an object-oriented programming language. A PHP class contains properties and methods which are used to create an object of that class and each object will have its own properties and values. 7. What is a Namespace in PHP? Answer: A Namespace in PHP is used to encapsulate the items which are similar to that of abstraction in Object Oriented programming concepts. Namespaces are used to establish a relationship among classes, functions, and constants. A Namespace can be defined using the keyword The namespace keyword is reserved in PHP for its internal use to avoid conflict with the user created identifiers. Let us move to the next PHP OOP Interview Questions And Answer. 8. What is a final keyword in PHP and when it is used? Answer: The final keyword in PHP is used to mark either a class or a function as final. If a class is marked as final, it cannot be extended to use its properties or methods. It prevents its child classes from overriding a method. If only a function is marked as final, then it can’t be extended. The final keyword was introduced in PHP 5 version. A final declaration can be done by prefixing the class name or function name with the final keyword. 9. What is echo and print in PHP? Answer: The echo and print are two basic ways of getting output in PHP. The functionality of echo and print are identical in PHP to get the output. The differences between echo and print are: echo doesn’t return any value whereas print returns a value of 1 which can be used in expressions. Echo takes multiple arguments whereas print takes a single argument. Echo can be used by calling echo or echo() whereas print can be called by using a print keyword. 10. What is require and include in PHP? Answer: This is the advanced PHP OOP Interview Questions which is asked in an interview. The require and include statements in PHP are available from PHP 4 and above versions which are identical with some difference. Include statement includes the specified file and evaluates it whereas the requirement is also having the same feature but it produces a fatal error upon failure with an error E_COMPILE_ERROR level error which is a compilation error. In this case, it stops the execution of the script and halts the run-time flow. The files will be included based on the file path given in the script. Recommended Article This has been a guide to List Of PHP OOP Interview Questions and Answers so that the candidate can crackdown these PHP OOP Interview Questions easily. Here in this post, we have studied about top PHP OOP Interview Questions which are often asked in interviews. You may also look at the following articles to learn more – - Some Important Cheat sheet for C++ Programming Language (Basics) - Jenkins Interview Questions - HTML5 Interview Questions And Answers - Amazing Python Interview Questions - Linux Interview Questions And Answers - Jenkins vs Bamboo: Benefits - Jenkins vs Travis CI: Benefits - Jenkins vs TeamCity - ACCA vs CIMA: What are the Differences
https://www.educba.com/php-oop-interview-questions/
CC-MAIN-2020-16
refinedweb
1,340
62.27
TL;DR – A Python queue is a linear data structure, similar to a stack. Contents How to use a queue in Python To start building Python queues, you need to import the queue Python module first: import queue Python 1.4 and all the newer versions have this module available for use. It allows you to implement Python multithreading queues: - To add an element to the queue, use put(). This is called an enqueue operation. - To remove an element from the queue, use get(). This is called a dequeue operation. - The FIFO (First In, First Out) principle means the first element you insert will also be the first to be removed. By using the maxsize argument, you can choose to restrict the number of elements in a queue. To leave the size unlimited, define it as 0 or a negative number. > Python queue examples: four types To create a basic Python queue, use queue.Queue(): BitDegree = queue.Queue(maxsize=0) BitDegree.put("B") BitDegree.put("i") BitDegree.put("t") print (BitDegree.get()) You can also create a queue in Python that follows the LIFO principle (Last In, First Out). In such a case, the first element to remove will be the one that got added last. To do that, use queue.LifoQueue(): import queue BitDegree = queue.LifoQueue(maxsize=0) BitDegree.put("B") BitDegree.put("i") BitDegree.put("t") print (BitDegree.get()) If you need to sort your elements in a specific way, you can also create a priority queue. For that, you would use queue.PriorityQueue(). In such a case, the first element to remove will be the one that has the lowest value: BitDegree = queue.PriorityQueue(maxsize=0) BitDegree.put((3, "B")) BitDegree.put((1, "i")) BitDegree.put((2, "t")) print (BitDegree.get()) If you're using Python 3.7 or a newer version, you can also use queue.SimpleQueue() to create simple Python queues with less functionalities – e.g, they will not allow task tracking: BitDegree = queue.SimpleQueue(maxsize=0) BitDegree.put("B") BitDegree.put("i") BitDegree.put("t") Python queue: useful tips - To simplify working with priority queues, follow the number, elementpattern and use the numberto define priority. - To create Python multiprocessing queues (as opposed to multithreading), use multiprocessing.Queue()function for the multiprocessingmodule.
https://www.bitdegree.org/learn/python-queue
CC-MAIN-2022-40
refinedweb
377
60.82
Calling an object from another class and printing out result - any help? celine scarlett Ranch Hand Joined: Nov 06, 2005 Posts: 93 posted Nov 19, 2005 17:08:00 0 Hi, I have written a class called Title, which contains information about films, including the title, lead actors, director etc. This information is stored in a text file, which is then called from the class Title. To save me from having to rewrite this information in the second class, called Video, I want to be able to call the information in the Title class in the toString method within the Video class and print it out along with the additional information contained within the Video class. The problem is that at the moment, the toString method within the Video class is only returning 'null' in place of the information from the Title class. Any ideas or help would be appreciated. My code so far is as follows, 1. The Video Class, package videos; import simplejava.*; public class Video { private int days, referenceNumber; private static double dailyPrice; private double totalPrice; private Title title; public boolean available; public Video(int refNum, int d, double dp, Title title) { referenceNumber = refNum; days = d; dailyPrice = dp; this.title = title; } public Video(SimpleReader in) { referenceNumber = in.readInt(); } public int getReferenceNumber() { return referenceNumber; } public void setReferenceNumber(int refNum) { referenceNumber = refNum; } public int getDays() { return days; } public void setDays(int d) { days = d; } public double getDailyPrice() { return dailyPrice; } public void setDailyPrice(double dp) { dailyPrice = dp; } public void getTotalPrice() { totalPrice = days*dailyPrice; } public boolean isAvailable() { if (available) return true; else return false; } public String toString() { return referenceNumber + " " + title + " " + totalPrice + " " + available; } } 2. The Title class, package String toString() { return titleCode+":"+filmName+":"+leadActor1+":"+leadActor2+":"+director; } } Also, the toString method in the Title class works fine by itself, I'm just having a problem trying to call this information from the Video class. Any ideas how to achieve this? Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24199 34 I like... posted Nov 19, 2005 18:12:00 0 Your Video class has two constructors. If you use the first one, and pass in a Title object, then you shouldn't see this null; but if you use the second one, then the "title" member is null because it's never assigned anywhere, and you will see "null" in toString(). Each of your constructors needs to fully initialize the class. [Jess in Action] [AskingGoodQuestions] celine scarlett Ranch Hand Joined: Nov 06, 2005 Posts: 93 posted Nov 19, 2005 18:29:00 0 Hi, I think I understand what you mean in this case. However, if the information for the Title class is contained in a text file, which is currently called from the tester main class using the following code, SimpleReader inFile = new SimpleReader("title_test.txt"); SimpleReader inFile2 = new SimpleReader("video_test.txt"); SimpleWriter screen = new SimpleWriter(); Title test = new Title(inFile); Video tester = new Video(inFile); screen.println(test); screen.println(tester); I don't understand how I can call the Title information in the Video class, and still get it to print it all out in the current tester main class. I must be missing something here. If I can call this information in the Title class, I don't understand why this same information can't be added to the Video class as well, and then be printed out using a main class. Any help or advice!! Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24199 34 I like... posted Nov 19, 2005 20:00:00 0 In the last bit of code you showed me, you created a Title object and a Video object, but the Video object doesn't "know" about that Title object. You could just slightly modify the Video constructor: public Video(SimpleReader in, Title t) { referenceNumber = in.readInt(); title = t; } Then create the Video like this: Title test = new Title(inFile); Video tester = new Video(inFile, test); Now, there are other ways to do it, and I wouldn't call this the best way -- far from it, actually, as this arrangement with multiple objects reading their data from a single file, depending on being constructed in a certain order, is pretty nasty. A somewhat better thing to do might actually be public Video(SimpleReader in) { referenceNumber = in.readInt(); title = new Title(in); } because at least there is less potential for error. But that's up to you. celine scarlett Ranch Hand Joined: Nov 06, 2005 Posts: 93 posted Nov 20, 2005 03:15:00 0 Hi, Thank you so much for your help. I see exactly what you mean now. Another Java puzzle solved at last. Have a great weekend. I agree. Here's the link: subject: Calling an object from another class and printing out result - any help? Similar Threads Problem when reading a CSVs file as separate words - any help!! Boolean vlaue for a video store program - please help!! Creating a pointer/reference to another object - any help!! Video store program - persisting problems. Any advice !! Reading strings from a line problem _ any help!! All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/378392/java/java/Calling-object-class-printing-result
CC-MAIN-2015-48
refinedweb
866
60.75
I want select an area and rotate is content. But this content mustn't be scaled (it's why FitInside and FitOuside are not good) and without black zone (or zone is outside of the picture). If you prefer i want : 1- select an area, 2- rotate picture with area center as transform center, 3- crop the rotated picture Edit: yes, use 3 corner is a solution. Last edited by yan_; 2013-08-19 at 19:46. I've begin my second article. You can find the explanation in beta version about the problem and how i create static IFilter CreateCropFilter(Rect cropArea, double rotationAngle) As far as image filters are concerned this SDK serves the purpose. But IMHO, I will not call it as Imaging SDK as it works in a part of imaging domain(feature detection, HDR etc). I would call it Image filter SDK. But this is not important. Extension Points: The important thing is that, AFAIK, SDK has no extension points. If I need to create custom filters which are not covered by SDK, I have no way. I have no problem with static, sealed FilterFactory class which cannot be extended in traditional ways as it is sealed nor I can write extension methods to it as it static. I tried to implement IFilter interface, I got image from editing session in Apply method but I have no way to pass it back to the session after applying filter algorithm. This is a 'must have' feature for the SDK. Last edited by Rumnha; 2013-08-25 at 13:20. Why rendering time increase with scale value between the input and the output (zoom out). In this case, i don't thinks you use a simple interpolation. Maybe it's could be interesting to specify which interpolation SDK must use? Hello, this is my rather larger feedback to the Nokia Imaging SDK First of all, I’ve just checked most of wiki pages related to this SDK, downloaded provided samples and checked some SDK methods and I’m really surprised how this SDK is simple to use and how many filters it provides. I was expecting something really complicated, undocumented with ~15 average filters, but the actual SDK is the exact opposite, thanks for that. Well but nothing is ideal, here are my ideas, how to make this SDK even better. The Nokia Wiki related to the Imaging SDK Overall the Wiki contains all I could expect regarding the Imaging SDK – I appreciate the step by step guide for creating my first app using this SDK and also the Core concepts page describing the usual workflow. - It would be great to have some samples and wiki pages for C++ developers as well. I guess 4/5 of all WP8 developers are using C#, but having direct tutorials for C++ devs could get you definitely more audience. The pages describing created samples are fine, but: - The File Explorer sample page has some minor errors in Architecture overview – there is no “WPF” in Windows Phone, the author probably meant XAML. Another idea that occurred to me, what I’m missing, is: - One wiki page with big Filter overview, where user can find all available filters including basic documentation what every parameter does and including preview images with applied filters, ideally for each available filter parameter. This would save us lot of time, if anyone wants to create simple “Instagram” clone with only about 5-10 filters, so he won't have to tediously try each of them, how do they look. The provided Samples Just some code review regarding the samples: - I’d personaly use much shorter and readable implementation for properties implementing INotifyPropertyChanged, for instance StreamItemModel.Filter: Code:public FilterModel Filter { get { return filter; } set { Set(ref filter, value); } } private FilterModel filter; #region INotifyPropertyChanged implementation private const string FakePropertyName = @"A property name must be specified if not using C# 5/VS2012"; protected bool Set<TName>(ref TName field, TName newValue, [CallerMemberName] string propertyName = FakePropertyName) { if (EqualityComparer<TName>.Default.Equals(field, newValue)) { return false; } field = newValue; RaisePropertyChanged(propertyName); return true; } public virtual void RaisePropertyChanged([CallerMemberName] string propertyName = FakePropertyName) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } #endregion - As I described in “one big filter overview page”, it would be nice to have similar sandbox sample app for testing any available filter ideally with options to set filter properties to any supported values API feedback The API seems solid, there is only few things to point out regarding Filter parameter properties: - CreateAutoEnhanceFilter needs better documentation. AutoEnhanceConfiguration – methods on this class are quite strangely designed: ApplyLocalBoost, ApplyLocalBoostOff, ApplyAutomaticLocalBoost, etc. I’m not sure which ones works with or against other ones. - CreateBlurLevel, CreateDespeckleFilter, CreateNoiseFilter, CreateSharpnessFilter and maybe other filers are using enum values similar to “Low, Medium, High”. I’d probably prefer float parameter with available value between 0.0 – 1.0 to be allowed to create more types of blur/noise filters. - CreateCartoonFilter – instead of vague bool parameter ‘bool distinctEdges’ I’d rather use Enum for better readability and possible future adding of another cartoon filter option. - There is a huge variability of used options types for filters – int, uint, double, float, byte… maybe using just int on all places, where uint is used, and float on all place where double is used could lead to more clean API design, but that’s just my point of view. - CreateSolarizeFilter and some other filters, here I’d add better documentation/xml comment to tell the user, what actually mean [0..1] – what effect has 0 and what 1. It’s possible some users might be confused, how strong the resulting effect will be if 0.25 is used vs 0.75? - Maybe each filter type factory method should be available without any mandatory parameters as well, for instance I want to quickly create app with applied Lomo filter, so I’d just use FilterFactory.CreateLomoFilter() instead of guessing, if “0.5, 0.5, LomoVignetting.Medium, LomoStyle.Neutral” are fine parameters for generic Lomo filter. FilterGroup – nice idea how to create filter “batches”, but: - I’ve found a bug in there – there is no validation when calling new FilterGroup(params) for null values in the array, imagine this: Code:IFilter[] fga1 = new IFilter[2]; fga1[0] = FilterFactory.CreateMilkyFilter(); FilterGroup fg1 = new FilterGroup(fga1); - Also it would be completely valid and even more descriptive, if the constructor parameter of FilterGroup will be just IEnumerable<IFilter>, no need for IFilter[] Another question is regarding the IFilter interface: - Is it possible from dev point of view to somehow implement own IFilter? For Instance I want to create own filter, that applies two specific existing filters at once, like this - is this a valid class that could be used? Code:public class DoubleMilkyFilter : IFilter { readonly IFilter fil = FilterFactory.CreateMilkyFilter(); public void Apply(IEditingSession editingSession) { fil.Apply(editingSession); fil.Apply(editingSession); } public void Undo(IEditingSession editingSession) { fil.Undo(editingSession); fil.Undo(editingSession); } } - The problem here is the Apply and Undo method on IFilter has in comment “Do not call this method directly, use IEditingSession::Apply/Undo”, but the thing is, there is no Apply/Undo method on IEditingSesstion , at least no public method to use. So I guess this needs to be fixed either in API, or in documentation at least., at least no public method to use. So I guess this needs to be fixed either in API, or in documentation at least. Feature requests Although the library is nicely stable and quite feature complete, I got some feature request as well: - Please create port of this library for Windows 8 apps as well! My guess is it should not be that hard, the WinPRT is really similar to WinRT and I cannot think of any WP-specific API you might’ve used in the current library, that would block the porting. - I know, this is a long shot, but having some kind of light version of this SDK for WP7.5 would be nice too (note it’s possible to use async/await in WP7.5 projects using Microsoft NuGet package) - Right now there are no extensibility points in the app for creating own filters, only batch filters. Actually all these filters are from dev point of view only “black boxes”. - As some devs said before me, there is only limited option, how to modify the processing queue or filter parameters – no option for inserting, deleting specific filters, etc. - Maybe I’ve missed it somewhere, but I’d love to have option for directly saving the processed image as png including transparency? I’ve seen only method for saving to Jpeg, WritableBitmap and Image? Maybe it’s possible right now from the WriteableBitmap. - I’d appreciate some kind of “Write text filter”, so I can add text to any image with the option of custom font, font size, font style, position Rect… - Is it possible to use this library in background task for generating live tile images from set of original source png files? I’m already doing something similar in one of my apps, putting image + text + badge on a tile and saving it as transparent png + displaying it on a live tile/lock screen. Wish list I’d really love to have more in Nokia Imaging SDK than just “lot of filters + cropping/resizing features”. This last paragraph is mostly about such features, that goes beyond the initial feature set: - Full support for reading, changing or deleting jpg EXIF data - Possibility to create animated gifs programmatically just like in Cinemagraph app. - Simple OCR feature just for returning strings contained within the image. - Face or movement detection, that is used in various webcam security apps. - API for creating panorama photos. - Go Open Source? That's all for now, I might add some other minor comments that might come to my mind while testing these samples more thoroughly. When zooming at the pixel level, decoding/processing one macroblock will provide 64 pixels of the final image (a macroblock is 8x8 pixels). As you zoom out, the pixels needed to create the final image are further and further apart : you need to decode more and more macroblocks to fetch the pixels that will be on the final image, and while each macroblock that must decoded is still a 8x8 macroblock, only a few of the pixels of the block will be used in the final image, others are discarded. The increase in the amount of macroblocks that must be decoded explains the increase in the rendering time. When you zoom out far enough, you may end up in a situation where you only need one pixel per macroblock. Through some special tricks, it's really fast to get that single pixel : rendering becomes real fast! You will find here a nice overview of the JPEG compression process Trying to briefly answer some of the issues raised... I love the idea of having one application that shows all the filters and allow changing the parameters. The "filter Explorer" and the "Filter effect" sample applications are a good step towards that direction, but as you noticed they do not cover each of the 50+ filters. We wanted to keep the samples relatively simple, thus restricted ourselves to only a few filters. It would be great if one of these days a user of the SDK would create a full blown filter demo! For your png and exif needs, you should take a look at other libraries: and. Right now implementing your own IFilter is not possible. It's really high on backlog for the next release. Harmonizing the filter parameters and Windows 8 support is also really high on that list. Other wishes have been noted, hopefully Santa will deliver ;-) The typos and the documentation errors you're reporting are real good catches, we will fix them. Again, thanks for the good feedback, -Bert Hi, I just want to agree with everything said before. My wishlist include the ability to create my own filters since alpha blending is not possible with the current set and a few additional filters are missing. Also, can you split filters in separate NuGet package? This way the pipeline and the classes are separate from the actual filters involved. Can I publish source code used for the competition as public on GitHub? It is easier for long term maintenance and people could get the latest changes I make. This means that my sample applications will become open source. I hope that there is nothing against that. However, that means that the wiki article will get out of sync with the latest version in terms of UI, capabilities and screenshots. Just an update to my previous feedback regarding custom filters and sample app for testing all available filters including available options. - It should be possible to create filters using standard object instantiation rather than only using Factory methods. - Each filter should have parameterless constructor and optionally constructor with available parameters. - Note there shouldn't be more different constructor types with different parameters, that's currently issue of some filters, BlurEffect for instance. - It should be possible to modify filter properties once the filter is created. What would be even better, if the filter Properties used some kind of attributes to describe allowed value ranges and default values:What would be even better, if the filter Properties used some kind of attributes to describe allowed value ranges and default values:Code:IFilter lomoFilter = new LomoFilter(); IFilter lomoFilter2 = new LomoFilter(0.5, 0.5, LomoVignetting.Medium, LomoStyle.Neutral); lomoFilter.Vignetting = LomoVignetting.Low; ... and so on for other filters.... and so on for other filters.Code:public sealed class LomoFilter : IFilter { [FilterValueRange(0.0, 1.0, 0.5)] // min, max, default value public double Brightness { get; set; } [FilterValueRange(0.0, 1.0, 0.5)] // min, max, default value public double Saturation { get; set; } [FilterValue(LomoVignetting.Medium)] // default value public LomoVignetting Vignetting { get; set; } [FilterValue(LomoStyle.Neutral)] // default value public LomoStyle Style { get; set; } public LomoFilter(){ ... } public LomoFilter(double brightness, double saturation, LomoVignetting vignetting, LomoStyle style){ ... } public void Apply(IEditingSession editingSession) { // internal filter implementation } public void Undo(IEditingSession editingSession) { // internal filter implementation } } This way it will be really simple to create some kind of "All filter overview" app, that could generate filter editor pages automatically by gathering filter Property attributes and automatically creating Slider/ComboBox user controls for each editable property value. Sliders for ranges, ComboBoxes for enums, etc. Another idea about the Nokia Wiki that came to my mind, Id appreciate if the wiki contained information how some of these filters are actually implemented, from mathematical point of view. I'd definitely read that! Thanks again and I'm looking forward to interesting updates of this SDK Hi Martin, I just started implementing a layer on top of the current FilterFactory that would allow just that. I wrote my own classes to wrap existing filters inside. Once I have something to show, I will put it on Github. Maybe you would be interested in that? I second your request for the mathematical implementation. The best solution would be to open source the SDK. That way we can port all filters to other platforms which would guarantee consistency across platforms. I thought about some kind of wrapper too, that would just use the Factory methods internally. Let us know, once you have it
http://developer.nokia.com/community/discussion/showthread.php/241537-Nokia-Imaging-SDK-Feedback-(Competition)?p=924405&viewfull=1
CC-MAIN-2014-42
refinedweb
2,558
51.78