Shuffling the deck: an interview experience

Here is a story about an interesting interview question and how I approached it. The company in question wasn't interested in actually looking at my code, since I apparently tried to answer the wrong question.

Given a deck of n unique cards, cut the deck c cards from the top and perform a perfect shuffle. A perfect shuffle is where you put down the bottom card from the top portion of the deck followed by the bottom card from the bottom portion of the deck. This is repeated until one portion is used up. The remaining cards go on top. Determine the number of perfect shuffles before the deck returns to its original order. This can be done in any language. A successful solution will solve the problem for 1002 cards and a cut size of 101 in under a second even on a slow machine.

I looked at that and did what they tell you to do for interviews, and coding in general, especially when you don't know where to start: start with the naive, simple approach.

Step 1.

def make_deck(n):
    cards = [x for x in range(1, n + 1)]
    return cards

Step 2.

from collections import deque

def shuffle(cards, c):
    """
    :param cards: the deck (list)
    :param c: where to cut the deck (int)
    """
    top = cards[0:c]
    bottom = cards[c:]
    stopping_criteria = min(len(top), len(bottom))
    newstack = deque()
    for i in range(stopping_criteria):
        newstack.append(top.pop())
        newstack.append(bottom.pop())
    if (len(top) == 0) and (len(bottom) == 0):
        return newstack
    elif len(top) > 0:
        newstack.extendleft(top)
    elif len(bottom) > 0:
        newstack.extendleft(bottom)
    return newstack

Step 3.

def shuffle_recursive(cards, c, shuffle_count):
    """
    Shuffle until the original order is restored, and count as you go.
    Assuming for now that the original order is sequential and the first card is always 1.
    :param cards: deck to pass to the shuffle function (list)
    :param c: cut size to pass to the shuffle function (int)
    :param shuffle_count: running count of shuffles performed (int)
    :return: shuffle_count (int)
    >>> shuffle_recursive([1,2,3,4,5], 3, 0)
    4
    """
    newstack = shuffle(cards, c)
    shuffle_count += 1
    if list(newstack) == [x for x in range(1, len(cards) + 1)]:  # stopping criteria
        return shuffle_count
    else:
        return shuffle_recursive(list(newstack), c, shuffle_count)

So I did that, and was surprised to get a recursion depth error. Then I realized it only works up to the max recursion depth of 999. Also, it was obviously too slow. So I did some profiling, and found that the majority of the time was spent in these 3 lines:

    for i in range(stopping_criteria):
        newstack.append(top.pop())
        newstack.append(bottom.pop())

And that kind of surprised me, since I thought the whole point of deque() is that it's supposed to be faster.

So then I spent some time thinking about how I could possibly make the code go faster. Ultimately I ended up directly creating the interleaved bottom part of the deck, and then added the top. I noticed that the tricky part was dealing with the leftover cards. I also noticed that it took a lot fewer iterations to get back to the starting order if I reversed the top cards before I put them back. Then I hooked that up to run iteratively, so I could control the number of times it ran, for debugging, etc. The code is here if you want to see what I did.

I wrote a bunch of tests while I was doing this, like I always do, and I couldn't help noticing that there were some weird edge cases that never worked. I tried to read some advanced math articles, which led me to understand that the weird edge cases I was seeing were probably harmonics.
Then, because I'm really a data scientist at heart, I wanted to see what that looked like. I wrote a couple of methods to help me visualize the results.

Overall, I'd say it was a great coding challenge, really interesting, and I learned a lot. However. When I went to turn in my work, the response was less than encouraging. I wrote:

I came up with a simple, very slow (10 second+ run-time) solution fairly quickly, and then spent 3-4x more time coming up with a 10x faster solution. What I have right now meets the requirement for 1002 cards with cut size 101 in under a second on my mac laptop (see below - not sure what you define as a "slow machine"?).

And the reply came back:

What answer did your solution arrive at for the test case? Is it 790034? That's not correct, so if that's the case you should take another look. It should only take a tenth of a second or so.

Apparently I was so annoyed at the way this exchange ended that I deleted both my response (let's consider it redacted) and theirs. I said something about how if the point was that it was a coding exercise, maybe they'd want to see my code even if I got a different answer (I did)? They said I should have known I wasn't supposed to try to actually make the decks, based on how the question was worded. I did not know that. I'm not sure why it's so hard to just ask a straightforward question instead of including, as part of the challenge, that I should be able to read your mind. Anyway, they did not want to see my code.

Shortly thereafter, I asked a friend who is more of an algorithms person and he said "Oh yeah, all you do is write the equation for a single card to get back to its original position, and then you have the answer." Of course, I found that confusing, because based on what I did, I don't think it's really that simple. I think it depends on how you do the shuffling, e.g. whether you reverse the top half when you add it back on. Which the original question said nothing about. And some cards (as the edge cases show) will take a much longer time to get back to their original position, depending on where you cut the deck and how many shuffles you do.

So, my shuffles might be imperfect, and my ability to read interviewers' minds hasn't improved much. But hey, those harmonics are pretty interesting.
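Here is a rough sketch of the cycle-counting idea the friend described (a sketch only, not the solution that was submitted): one pass of any fixed shuffle is just a permutation of the deck, and the number of passes needed to restore the original order is the order of that permutation, i.e. the least common multiple of its cycle lengths. The shuffle_once argument below is a stand-in for whichever shuffle routine is being measured, for example a wrapper around the shuffle function above, so the count still depends on exactly how the shuffle is done, which matches the point about reversing the top half.

from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def shuffles_to_restore(n, c, shuffle_once):
    """Return how many applications of shuffle_once(deck, c) it takes for a
    deck of n cards to come back to its starting order, assuming the shuffle
    is the same permutation every time."""
    # Apply the shuffle once to the identity deck to read off the permutation:
    # after one shuffle, position j holds the card that started at perm[j].
    perm = list(shuffle_once(list(range(n)), c))
    seen = [False] * n
    cycle_lengths = []
    for start in range(n):
        if seen[start]:
            continue
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        cycle_lengths.append(length)
    # The order of the permutation is the lcm of its cycle lengths.
    return reduce(lcm, cycle_lengths, 1)

# Hypothetical usage, wrapping the shuffle function from Step 2:
# shuffles_to_restore(1002, 101, lambda deck, cut: list(shuffle(deck, cut)))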
https://szeitlin.github.io/posts/engineering/shuffling-the-deck-an-interview-question/
CC-MAIN-2022-27
refinedweb
1,074
69.52
Member 35 Points Apr 25, 2020 04:44 PM|hamed_1983|LINK

Hi, my problem is that when I disable my input control as follows:

<div class="form-group">
    <label asp-for="OrderId"></label>
    <input asp-for="OrderId" class="form-control" disabled />
    <span asp-validation-for="OrderId"></span>
</div>

I'm facing this error during update:

Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See for information on understanding and handling optimistic concurrency exceptions.

But when I remove the disabled attribute from my input element, it works correctly!!! What's the problem & how do I solve it?

Note: the given input field (OrderId) is the primary key for my Orders table.

Thanks in advance

Contributor 4923 Points Apr 25, 2020 06:38 PM|DA924|LINK

Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See for information on understanding and handling optimistic concurrency exceptions.

What the error message means is that you sent an entity to EF to be updated on the database for a table record and the update didn't happen; 0 rows were affected. You can't send the EF entity to be updated without the primary-key property for the EF entity having a valid value. The ID should have been saved to a hidden field pointing back to a viewmodel. You should learn how to use a viewmodel. An EF query maps an entity to a VM and the VM is sent into the view. On data persistence with EF, the VM populates the EF entity that is sent for add or update.

Member 35 Points Apr 25, 2020 06:43 PM|hamed_1983|LINK

Thanks for the reply, but as I told in my first post, it exists in my view, just disabled! I don't know why my update fails when it's disabled and works correctly when it's enabled!

Contributor 4923 Points Apr 25, 2020 08:45 PM|DA924|LINK

hamed_1983: Thanks for the reply, but as I told in my first post, it exists in my view, just disabled! I don't know why my update fails when it's disabled and works correctly when it's enabled!

No, that's a clunky way of doing it, and I don't think it's working. You can validate and see what is in the object at the time you're trying to persist it to the database by using Quickwatch. If the ID is jacked, then EF is not going to find the record to update. Myself, I would never use an input tag in the manner that you're trying to do for a primary-key ID property.
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace PublishingCompany.Models
{
    public class ArticleVM
    {
        public class Article
        {
            public int ArticleId { get; set; }
            public int AuthorId { get; set; }
            public string AuthorName { get; set; }
            [Required(ErrorMessage = "Title is required")]
            [StringLength(50)]
            public string Title { get; set; }
            public string Body { get; set; }
        }

        public List<Article> Articles { get; set; } = new List<Article>();
    }
}

@model ArticleVM.Article

<!DOCTYPE html>
<style type="text/css">
    .editor-field > label {
        float: left;
        width: 150px;
    }

    txtbox {
        font-family: Arial, Helvetica, sans-serif;
        font-size: 12px;
        background: white;
        color: black;
        cursor: text;
        border-bottom: 1px solid #104A7B;
        border-right: 1px solid #104A7B;
        border-left: 1px solid #104A7B;
        border-top: 1px solid #104A7B;
        padding-top: 10px;
    }

    textarea {
        width: 500px;
        height: 75px;
    }
</style>
<html>
<head>
    <title>Edit</title>
</head>
<body>
    <h1>Article</h1>
    @using (Html.BeginForm())
    {
        @Html.ValidationSummary(false, "", new { @
        @Html.Label("Title:")
        @Html.TextBoxFor(model => model.Title, new { @
        @Html.Label("Body:")
        @Html.TextAreaFor(model => model.Body)
        </div>
        <br />
        <br />
        <p>
            <input type="submit" name="submit" value="Save" />
            <input type="submit" name="submit" value="Cancel" />
        </p>
    </fieldset>
    }
</body>
</html>

Apr 28, 2020 06:48 AM|Rena Ni|LINK

Hi hamed_1983,

Browsers do not support the disabled property to submit with forms by default. You need to change disabled to readonly:

<input asp-for="OrderId" class="form-control" value="@ViewData["_orderID"]" readonly />

Best Regards,
Rena

All-Star 48490 Points Apr 28, 2020 07:21 AM|PatriceSc|LINK

Hi,

The problem is that disabled (and, if I remember, readonly) form fields are not posted. Note that using F12 it would be easy to still change this value. More likely you should check on the server side that the user is allowed to change this line (assuming you are trying to avoid tampering).

5 replies Last post Apr 28, 2020 07:21 AM by PatriceSc
https://forums.asp.net/t/2166318.aspx?Facing+error+Microsoft+EntityFrameworkCore+DbUpdateConcurrencyException+when+input+control+disabled+
CC-MAIN-2021-10
refinedweb
755
54.42
How to implement question scoring/weightage on Qualtrics and push it back to a service – Qualtrics Technical Part 2

Introduction

Qualtrics is a great tool for building and distributing surveys related to different operations. In the previous blog post we saw how to integrate Qualtrics surveys with our back end services. Now, let's discuss a new use case: what if we have to calculate the score of a survey response and send it back as well? These scores are calculated based on custom weightage assigned to the different answers of a question. In this blog post we will explore the Qualtrics scoring feature and see how to get the score back. This blog post is a part of my Qualtrics technical series.

Requirements

- Qualtrics Production account. A trial account doesn't provide the actions and triggers which will be used in this case.
- A simple back end service to test the integration. I have deployed a Python flask service that just prints the data received from Qualtrics.
- Basic knowledge of Qualtrics, like creating surveys, survey flows etc.

Process

So, with all that defined, let's get started with the actual thing. There are four main steps that we have to perform to calculate the score of a survey and push the data. But before that, let's check the demo service controller, which is as follows.

from flask import Flask, request, jsonify
from waitress import serve
import json
import os

app = Flask(__name__)
port = int(os.getenv("VCAP_APP_PORT", 9000))

@app.route('/')
def check_health():
    return jsonify(
        status=200,
        message="Up"
    )

@app.route('/showSurveyData', methods=['POST'])
def show_survey_data():
    data = json.loads(request.get_data())
    print(data)
    return jsonify(
        status=200,
        results=data
    )

serve(app, port=port)

As you can see, we just have two endpoints: one for a health check and the other one for displaying the survey data pushed back to the service. You can use your own service here to do more with the data. One thing to mention: the Qualtrics instance we are using is on the public cloud, so it can only access services that are deployed on the public cloud. That's why I have deployed this service on a hanatrial account for now.

Survey

In this demo, we are using the same survey from the previous blog post, which contains only 2 questions for simplicity, as follows.

Scoring

Now, let's assign some weightage to the answers of these questions. To access the scoring portal, click the gear icon under any question and click on "Scoring" as follows. In the scoring portal we can easily assign weights to each answer. In this case I have assigned '1' to each of the sliders, because in the case of a slider the formula to calculate the weight is:

weight_of_answer = weight_of_slider * slider_value

So in this case, as the weight of the slider is '1', the calculated weight from the slider will just be equal to the slider value. The weights will look as follows. Once the weights are assigned, we can click on the "Back to Editor" button on top to go back to the survey layout.

Embedding Weight

Once the weights are assigned, Qualtrics will automatically calculate the weight for each answer and add them together at the end to get the total score of the survey. Now, we need to define an embedded variable on Qualtrics which will store this calculated score, so that we can access it later on. For this we have to create an "Embedded Data" block where we will create one embedded variable. To add an "Embedded Data" block, click on the "Survey Flow" button in the Survey tab and add one embedded data block as follows.
In this block, I have specified an embedded variable called "weight" which stores the "Score" metric calculated by Qualtrics, as follows. Once done, we can save the survey flow and get back to the Survey tab. The Embedded Data block should look like this now.

Actions

We have an embedded variable which will store the score for each response. Now, we can also push this score along with the other survey data using an action, as shown in the previous blog post. This time we will add another item called "score" and assign the embedded field "weight" to the score as follows. Once added, the task will look as follows. Now we can save the task, and, most importantly, we have to publish this survey by going back to the Survey tab.

Testing

To test this application, we can simply open the preview mode and complete the survey. Now, if we check the logs of our service, we should be able to see the score along with the rest of the survey data as follows. And this concludes the topic. If you really liked it, make sure to leave your feedback or suggestions down below.

Important Links

Next Blog Post: How to perform sentiment analysis on Qualtrics and push sentiment data back to a service – Qualtrics Technical Part 3
Previous Blog Post: How to push Qualtrics Survey Responses to back end service – Qualtrics Technical Part 1
Entire Series: Qualtrics Technical Blog Series
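As a rough illustration of what the receiving side can do with the pushed score, here is a minimal variation of the demo Flask service above that pulls the score out of the payload. The "score" key is an assumption: it matches the field name added in the Actions step, but the exact JSON layout of the push depends on how the task is configured.

from flask import Flask, request, jsonify
from waitress import serve
import json

app = Flask(__name__)

@app.route('/showSurveyScore', methods=['POST'])
def show_survey_score():
    data = json.loads(request.get_data())
    # "score" is assumed to be the field name defined in the Qualtrics task,
    # i.e. the one the embedded variable "weight" was mapped onto.
    score = data.get('score')
    print("Received survey score:", score)
    return jsonify(status=200, score=score)

serve(app, port=9000)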
https://blogs.sap.com/2019/11/12/how-to-implement-question-scoring-weightage-on-qualtrics-and-push-it-back-to-a-service-qualtrics-technical-part-2/
CC-MAIN-2022-27
refinedweb
837
71.75
I have found two primary libraries for programmatically manipulating PDF files: PdfBox and iText. These are both Java libraries, but I needed something I could use with C#. Well, as it turns out there is an implementation of each of these libraries for .NET, each with its own strengths and weaknesses:

Some Navigation Links:
- Example: Extract Text from PDF File
- Example: Split PDF - Split Specific Pages and Merge Into New File
- Links to Resources Mentioned in this Article

PdfBox – .Net version

The .NET implementation of PdfBox is not a direct port – rather, it uses IKVM to run the Java version inter-operably with .NET. IKVM features an actual .net implementation of a Java Virtual Machine, and a .net implementation of the Java Class Libraries, along with tools which enable Java and .Net interoperability. PdfBox's dependency on IKVM incurs a lot of baggage in performance terms. When the IKVM libraries load, and (I am assuming) the "'Virtual' Java Virtual Machine" spins up, things slow way down until the load is complete. On the other hand, for some of the more common things one might want to do with a PDF programmatically, the API is (relatively) straightforward, and well documented. When you run a project which uses PdfBox, you WILL notice a lag the first time PdfBox and IKVM are loaded. After that, things seem to perform sufficiently, at least for what I needed to do.

Side Note: iTextSharp

iTextSharp is a direct port of the Java library to .Net. iTextSharp looks to be the more robust library in terms of fine-grained control, and is extensively documented in a book by one of the authors of the library, iText in Action (Second Edition). However, the learning curve was a little steeper for iText, and I needed to get a project out the door. I will examine iTextSharp in another post, because it looks really cool, and supposedly does not suffer the performance limitations of PdfBox.

Getting started with PdfBox

Before you can use PdfBox, you need to either build the project from source, or download the ready-to-use binaries. I just downloaded the binaries for version 1.2.1 from this helpful gentleman's site, which, since they depend on IKVM, also includes the IKVM binaries. However, there are detailed instructions for building from source on the PdfBox site. Personally, I would start with the downloaded binaries to see if PdfBox is what you want to use first. Important to note here: apparently, the PdfBox binaries are dependent upon the exact dependent DLL's used to build them. See the notes on the PdfBox .Net Version page.

Once you have built or downloaded the binaries, you will need to set references to PdfBox and ALL the included IKVM binaries in your Visual Studio project. Create a new Visual Studio project named "PdfBoxExamples" and add references to ALL the PdfBox and IKVM binaries. There are a LOT. Deal with it. Your project references folder will look like the picture to the right when you are done.

The PdfBox API is quite dense, but there is a handy reference at the Apache Pdfbox site. The PDF file format is complex, to say the least, so when you first take a gander at the available classes and methods presented by the PdfBox API, it can be difficult to know where to begin. Also, there is the small issue that what you are looking at is a Java API, so some of the naming conventions are a little different. Also, the PdfBox API often returns what appear to be Java classes. This comes back to that .Net implementation of the Java Class libraries I mentioned earlier.
Things to Do with PdfBox

It seems like there are three common things I often want to do with PDF files: extract text into a string or text file, split the document into one or more parts, or merge pages or documents together. To get started with using PdfBox we will look at extracting text first, since the set up for this is pretty straightforward, and there isn't any real Java/.Net weirdness here.

Extracting Text from a PDF File

To do this, we will call upon two PdfBox namespaces ("packages" in Java, loosely), and two classes: the namespace org.apache.pdfbox.pdmodel gives us access to the PDDocument class, and the namespace org.apache.pdfbox.util gives us the PDFTextStripper class. In your new PdfBoxExamples project, add a new class, name it "PdfTextExtractor," and add the following code:

The PdfTextExtractor Class

using System;
using org.apache.pdfbox.pdmodel;
using org.apache.pdfbox.util;

namespace PdfBoxExamples
{
    public class pdfTextExtractor
    {
        public static String PDFText(String PDFFilePath)
        {
            PDDocument doc = PDDocument.load(PDFFilePath);
            PDFTextStripper stripper = new PDFTextStripper();
            return stripper.getText(doc);
        }
    }
}

As you can see, we use the PDDocument class (from the org.apache.pdfbox.pdmodel namespace) and initialize it using the static .load method defined as a class member on PDDocument. As long as we pass it a valid file path, the .load method will return an instance of PDDocument, ready for us to work with. Once we have the PDDocument instance, we need an instance of the PDFTextStripper class, from the namespace org.apache.pdfbox.util. We pass our instance of PDDocument in as a parameter, and get back a string representing the text contained in the original PDF file.

Be prepared. PDF documents can employ some strange layouts, especially when there are tables and/or form fields involved. The text you get back will tend not to retain the formatting from the document, and in some cases can be bizarre. However, the ability to strip text in this manner can be very useful. For example, I recently needed to download an individual PDF file for each county in the state of Missouri, and strip some tabular data out of each one. I hacked together an iterator/downloader to pull down the files, and then, using a modified version of the text stripping tool illustrated above and some rather painful Regex, I was able to get what I needed.

Splitting the Pages of a PDF File

At the simplest level, suppose you had a PDF file and you wanted to split it into individual pages. We can use the Splitter class, again from the org.apache.pdfbox.util namespace. Add another class to your project, named PDFFileSplitter, and copy the following code into the editor:

The PdfFileSplitter Class

using org.apache.pdfbox.pdmodel;
using org.apache.pdfbox.util;

namespace PdfBoxExamples
{
    public class PDFFileSplitter
    {
        public static java.util.List SplitPDFFile(string SourcePath, int splitPageQty = 1)
        {
            var doc = PDDocument.load(SourcePath);
            var splitter = new Splitter();
            splitter.setSplitAtPage(splitPageQty);
            return (java.util.List)splitter.split(doc);
        }
    }
}

Notice anything strange in the code above? That's right. We have declared a static method with a return type of java.util.List. WHAT? This is where working with PdfBox and, more importantly, IKVM becomes weird/cool. Cool, because I am using a direct Java class implementation in Visual Studio, in my C# code. Weird, because my method returns a bizarre type (from a C# perspective, anyway) that I was unsure what to do with.
I would probably add to the above class so that the splitter persisted the split documents to disk, or change the return type of my method to object[], and use the .ToArray() method, like so:

The PdfFileSplitter Class (improved?)

public static object[] SplitPDFFile(string SourcePath, int splitPageQty = 1)
{
    var doc = PDDocument.load(SourcePath);
    var splitter = new Splitter();
    splitter.setSplitAtPage(splitPageQty);
    return (object[])splitter.split(doc).toArray();
}

In any case, the code in either example loads up the specified PDF file into a PDDocument instance, which is then passed to the org.apache.pdfbox.Splitter, along with an int parameter. The output in the example above is a Java ArrayList containing a single page from your original document in each element. Your original document is not altered by this process, by the way. The int parameter is telling the Splitter how many pages should be in each split section. In other words, if you start with a six-page PDF file, the output will be three two-page files. If you started with a 5-page file, the output would be two two-page files and one single-page file. You get the idea.

Extract Multiple Pages from a PDF Into a New File

Something slightly more useful might be a method which accepts an array of integers as a parameter, with each integer representing a page number within a group to be extracted into a new, composite document. For example, say I needed pages 1, 6, and 7 from a 44 page PDF pulled out and merged into a new document (in reality, I needed to do this for pages 1, 6, and 7 for each of about 200 individual documents). We might add a method to our PdfFileSplitter Class as follows:

The ExtractToSingleFile Method

public static void ExtractToSingleFile(int[] PageNumbers, string sourceFilePath, string outputFilePath)
{
    var originalDocument = PDDocument.load(sourceFilePath);
    var originalCatalog = originalDocument.getDocumentCatalog();
    java.util.List sourceDocumentPages = originalCatalog.getAllPages();
    var newDocument = new PDDocument();

    foreach (var pageNumber in PageNumbers)
    {
        // Page numbers are 1-based, but PDPages are contained in a zero-based array:
        int pageIndex = pageNumber - 1;
        newDocument.addPage((PDPage)sourceDocumentPages.get(pageIndex));
    }

    newDocument.save(outputFilePath);
}

Below is a simple example to illustrate how we might call this method from a client:

Calling the ExtractToSingleFile Method:

public void ExtractAndMergePages()
{
    string sourcePath = @"C:\SomeDirectory\YourFile.pdf";
    string outputPath = @"C:\SomeDirectory\YourNewFile.pdf";
    int[] pageNumbers = { 1, 6, 7 };
    PDFFileSplitter.ExtractToSingleFile(pageNumbers, sourcePath, outputPath);
}

Limit Class Dependency on PdfBox

It is always good to limit dependencies within a project. In this case, especially, I would want to keep those odd Java class references constrained to the highest degree possible. In other words, where possible, I would attempt to either return standard .net types from my classes which consume the PdfBox API, or otherwise complete execution so that client code calling upon this class doesn't need to be aware of IKVM, or funky C#/Java hybrid types. Or, I would build out my own "PdfUtilities" library project, within which objects are free to depend upon and intermix this Java hybrid. However, I would make sure public methods defined within the library itself accepted and returned only standard C# types. In fact, that is precisely what I am doing, and I'll look at that in a following post.
Links to resources:
- PdfBox.Net version on Apache
- IKVM.net Home Page
- Pdfbox .net binaries version 1.2.1 (includes required IKVM binaries)
- PdfBox API Overview

Author Ahmad: Have you tried to extract/split pdf by Bookmarks using PDFBox?

Author John Atten: I have not, to this point. Did you find a workable solution?

Author John Atten: Hi, and thanks for reading. I will try to put together a VB version of the code in the article, and I'll shoot you an email.

Author Weng Fu: Could you please also include the Visual Basic source for this PDF soft please?
http://johnatten.com/2013/01/30/working-with-pdf-files-in-c-using-pdfbox-and-ikvm/
CC-MAIN-2022-27
refinedweb
1,843
55.34
Thanks Prescott for bringing this up again :)

Paul, the problem is no one can really know what it would take until they have deep dived into the work, and even then decisions could and will change. I will strongly advise against starting from scratch; although I do think a lot in the current structure should change, it's going to be much easier to change it in place, refactoring where needed, than starting all over again. Once we kick this off I personally will be happy with you taking the analysis part of Lucene and porting it, it's pretty much self-contained.

Re 3.6.2 work - you can just do that on a fork and send us PRs, it's much more straightforward than the v4 upgrade.

Marcos, porting class by class is the fastest way to do this, we can then concentrate on .NETifying and optimizing using .NET constructs.

That said, I think the way to go is create a branch out of the current git master HEAD and label it "3.x", and start working on master towards a 4.3 compatible version. The actual port should be using a process that ensures all Java classes are ported with their tests, and that those tests pass - but I'm against committing any Java code to our repositories. The process should probably be working on classes in some order (alphabetically, or core classes first) and getting each class to compile before moving forward. I don't mind about the project not being compilable for a month or two. Once this is done a process of .NETifying and proofing the code can be started, at which point we will already have a working v4 version so it will be easier to keep up with the Java project.

The first step IMO is to stabilize the test suite so tests could more or less be copied and pasted (e.g. implementing Java-like assertEquals methods etc; I find xUnit to be much easier to work with than NUnit). I already did some work there but there's still a lot to do.

Unfortunately I can't dedicate time myself at this point, but I should be back in business in August, at which point I can dedicate several hours a week. Until then I'll be happy to watch closely and even coordinate the work to some extent.

On Thu, Jun 6, 2013 at 10:41 PM, Marcos Lima <marcoslimagon@gmail.com> wrote:

> It really sounds good to me, this is a kick start =). I haven't contributed
> anything
> yet, but I would like to help you all to get this job done.
>
> I'm completely agree with Paul and Prescott.
>
> I know that there is a high commitment for keep the retrocompatibility on
> Lucene. Does Java Lucene API gets big changes every release?
>
> Is the Lucene.Net a port from a stable version from a Lucene version,
> right? If the Lucene API gets only minor changes every stable release (or
> keep most of its source-code), we could compare the versions, check the
> differences and bring them to Lucene.Net. Again, I haven't contributed with
> yet, so I don't know how it is (just an idea).
>
> What's your approach for implement simple performance improvements (without
> adding references or changing methods signatures)? Does it could be done,
> or "follow the java version only"?
>
>
>
> 2013/6/6 Paul Irwin <pirwin@feature23.com>
>
> > This is just an "outsider" suggestion as I haven't contributed anything
> > yet, although I will definitely help with the 4.x work as I have a vested
> > interest in seeing that get completed. I think there should be one person
> > (or perhaps two) guiding what the structure and workflow will look like
> to
> > avoid chaos.
If the 4.x work is going to be starting from scratch as a > > fresh port, that person should set up the project, get everything going > in > > source control, create the folder structure, maybe stub out some classes, > > etc. Then divide and conquer with anyone interested in contributing, > > perhaps by namespace. > > > > I like the idea of throwing the java code in there so it's easy to > > reference. > > > > Again, I can work on Lucene.Net.Documents, Lucene.Net.Analysis, or > > Lucene.Net.Store -- or any others, those are just the ones I'm most > > familiar with the inner workings. Tell me what to do and I'll get started > > on my fork. > > > > Paul Irwin > > > > > > On Thu, Jun 6, 2013 at 2:38 PM, Prescott Nasser <geobmx540@hotmail.com > > >wrote: > > > > > Hey guys - > > > I know I've been MIA a little while. We have a board report due soon - > I > > > think it prudent that we advise them that we seem to have stalled > > somewhat. > > > We've got a few low hanging items out of of jira and have been > responsive > > > on the mailing list to community questions, but I don't think we've > done > > > anything to move forward with 4.0. > > > To that end - I'd like to *try* and start us back up moving forward. > What > > > is the best way to accomplish this? If we took the java lucene 4.0 code > > and > > > committed that java to our branch and then just let people pull that > down > > > and being changing / modifying is that one way to maybe make some > forward > > > progress? > > > ~P > > > > > > -- > -- > Marcos Lima > Software Developer/Tech Lead > marcoslimagon@gmail.com > tel: +55 (19) 9798-9335 >
http://mail-archives.eu.apache.org/mod_mbox/lucenenet-dev/201306.mbox/%3CCAHTr4ZvfXKxo8aQPK0vLC6H2EeggDT+85TMLipsNAagqSt-jJQ@mail.gmail.com%3E
CC-MAIN-2019-39
refinedweb
906
71.75
I have my main mxml file and then I have a module that I have developed... I want to be able to call a function in that module from my main application... This is what I have done, and it keeps giving an error saying this:

TypeError: Error #1009: Cannot access a property or method of a null object reference.

Here is my code in a nutshell:

Main.mxml

<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:mods="mods.*">
    <mx:Script source="general.as"/>
    <mods:MyObject id="MyObj"/>
    <mx:Button click="LoadObjData()"/>
</mx:WindowedApplication>

general.as

import mods.*;

private function LoadObjData():void {
    MyObj.CustomFunction("Test Data");
}

mods/MyObject.mxml

<mx:Script>
    <![CDATA[
        public function CustomFunction(CustData:String):void {
            MyLabel.text = CustData;
        }
    ]]>
</mx:Script>

<mx:Label id="MyLabel"/>

Can anyone explain to me what I am doing wrong? By the way, my entire script works, minus the part when you click on the button (<mx:Button>). When the button is clicked it returns that error that I posted above!

Is Module the root tag of MyObject.mxml? If so, you are not using it as a module; instead you are trying to use it as a custom component. For more info on modules, take a look at . If it is actually a custom component, post the full code for MyObject.mxml so we can look.

Thanks,
Gaurav Jain
Flex SDK Team

I am not sure what you mean by the root tag... but... I did create a custom component, and I wanted to know how I can execute a function that is embedded on that component.

Nevermind... I figured out what my issue was.... It wasn't even my code. Sorry for wasting everyone's time.

Spread some love; what was the issue? If we know what the problem was, we can help the next person with a similar issue.
https://forums.adobe.com/thread/440997
CC-MAIN-2017-30
refinedweb
292
67.55
Oscommerce admin access deniedJobs Hey guys, We need somebody who can create contents for our SMM website. Our website is ready with design & all we need to re-write some contents o...the content. Don't bargain with me. You must write the Word "SMM500". I will only give this job if you write the word on your job letter. Otherwise, your proposal will be denied. Keep bidding, Regards! Hey, you need to import a wordpress mysql database - now we get the error "#1044 - Access denied for user' ". I'll send you the database file and give you access to phpmyadmin. Thank you, Pierre Hello, I am using .net blog ([log ind for at se URL]). I have put the engine in sub category [log ind for at se. ...the command line on Matlab like this: 1- the camera start recording system(‘start [log ind for at se URL] [log ind for at se URL] pi 123456’) 2- but if I made it with variables, I get a « assess denied » system(sprintf(‘start [log ind for at se URL] %s %s %s’,MyIp,MyLogin,MyPassword)) I tried lotof thing but I don’t know why it doesn&... ...but will be available for any research of cv candidate management and sourcing: manage candidate pipeline: inbox, screening, call interview, person’s interview, confirmed, denied add notes to each profile cv modifications for company’s needs VMS: For operations personnel at staffing agencies, a closed job order signals that work is just beginning: I have a gitlab runner running on my server but it's not working when I try to use mysql image. I keep getting the following error: ContainerStart: Error response fro.../runner-237f18d2-project-30-concurrent-0-mysql-0-wait-for-service/service mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied i could not access to my digital digitalocean server through my ssh. the terminal showing me an error message says Permission denied (publickey). can any one help via AnyDesk. ...is wriien in C I guess We have around 20 different files to change. The total firmware combines the BLE chip with an RFID chip to read cards and to give and denied access. It is part of an access control. After re-coding, we need to fix some bugs. Hopefully this task is easier in the new code language. Right now, the chip is crashing sometimes and the I have created a small website justgotnew(dot)com on laravel freamwork. Create pin is working from website. But i don't have pinterest developer app approval for publ...a small website justgotnew(dot)com on laravel freamwork. Create pin is working from website. But i don't have pinterest developer app approval for public use. Pinterest has denied app.. .. the demo Vær venlig at Tilmelde dig eller Log ind for at se detaljer. ...reports and custom workflow systems that you have developed previously web based that help providers access their claims, authorizations and submitting them. This includes workflow of authorizing auth to be submitted reviewed by agency and approved or denied. This workflow includes providers being able to submit claims review claim status their eligibility Vær venlig at Tilmelde dig eller Log ind for at se detaljer. Need AWS who can make my ssh key (.ppk file ) . password protected of root file i was doing and getting the error and i don't have password even. -bash: /etc/ssh/sshd_conf...can make my ssh key (.ppk file ) . password protected of root file i was doing and getting the error and i don't have password even. 
-bash: /etc/ssh/sshd_config: Permission denied ...through guarantee the access permission in a smart home environment for users and things, ensuring entity identification supportive architecture, and Embedded secure and authentic connection of heterogeneous devices and users. using IRIS print scanner. (Comparison between User Iris print and Preserved Iris code; if Matching then Access to data set is Granted Need someone to build me a really easy website, I dont have time for it. I need you to recreate the following website exactly. [log ind for at se URL] I want everything o...to me, I will pay you and it will conclude the job. If you have any questions you may ask me. The budget for a simple project like this is $15. Anything higher will be denied. I have small php function that send https request to server and get html, for some reason i'm getting access denied, don't know if it's because user agent or something else.. budget is $10, don't bid more.. .. be demonstrated Could not connect to the database Access denied for user 'nullgphi_wp49280'@'localhost' (using password: YES) The password and the database are correct I'm having an issue with the fasthttp library with golang. The request is returning access denied however, if I call the request via an API tester or simply somewhere else then it works fine. I will screenshare via [log ind for at se URL] and allow control if need be. ...issue. Basically, we believe this is an error caused by calling to the old MYSQL database. The website is down right now. This is the error: ERROR: SQLSTATE[HY000] [1045] Access denied for user 'openhous'@'localhost' (using password: YES) Fatal error: Call to a member function prepare() on null in /home/openhous/public_html/application/models/[log ind for at ... ...deployed on Heroku. So we have the same code I assume running on ours devices. I assume there is something wrong with the recording permission which then cause a permission denied bug. (see the attached file) I am looking for an experienced React developer who can fix this issue asap. We can discuss this job in more details during the interview process Hi, Need urgent to fix 2 things in prestashop 1.7: 1. Even there is a plugin activated and set up to pay ...prestashop 1.7: 1. Even there is a plugin activated and set up to pay products with credit card, users can't pay with credit card, that option doesn't appear. 2. In admin, the Orders tab is access denied, need to fix that. Thanks in advance create a wordpress plugin for tokbox 1- video,.. (in admin panel) Hi, Its a small 10 minute task and if you expect more than 1.. ... SSH Issue in Google Cloud - Compute Engine - Ubuntu 18.0.4 In compute engine i cant connect ssh using the browser method, i tried to ...- Compute Engine - Ubuntu 18.0.4 In compute engine i cant connect ssh using the browser method, i tried to setup ssh key and connect using terminal but it give permission denied. server was connected to [log ind for at se URL] Im looking to start a Google adsense account to create more revenue i tried to do it my self but got denied are you able to help set it up and get it up and running Hi! I am looking for someone who have great knowledge of rsync. I have just 1-2 command issue which need resolved. I am getting permission denied error. Fixed budget 200 INR and more work in future. If you have read and ready to do the task, Please bid with word called "rsync" so I can talk and hire you. Thanks ...to accept the potential users. 
Allow new users to create/submit a new dating account which requires approval prior to enter the dating app portfolio. If the user account is denied the user is unable to join the dating portfolio. Requirements to join as a new member with Verification prior to approval. Allow picture and video uploads once new user is ...filters, as well as modeling and simulation of inertial measurement units and GPS receivers. • Research and/or applied experience in advanced navigation system design for GPS-denied or GPS degraded conditions • Familiarity with embedded software development and hardware/software integration Culture fit: • Self¬-directed: We work best with people that Hi Gurukripa Singh C.,Hope all is well with you, my website is playing up. For some reason the maps now does not .. site is [log ind for at ind for at se URL] file so that isnt the issue. I now give up. I will not pay more than $20 for this job as I know its easy. ... Warning: mysqli_connect(): (28000/1045): Access denied for user 'database'@'localhost' (using password: YES) in /home/localhost/[log ind for at se URL] on line 136 Sorry record not found .. ... ...onerror([log ind for at se URL], fullname, sys.exc_info()) File "C:ProgramDataAnaconda3lib[log ind for at se URL]", line 389, in _rmtree_unsafe [log ind for at se URL](fullname) PermissionError: [WinError 5] Access is denied: 'C:aaaINPUT20180425__=mc50111-280AVF_INFO[log ind for at se URL]' (base) C:aaa>A so-far working (on Windows 7)... ...Authentication server API. 4- If the user is correctly authenticated and the token is valid then will gain access. If not then access will not be granted at Web APP. 5- Of-course if a user tried to directly access Web APP his/her request should be denied. Let me know if you have previous experience implementing similar solutions and if you have any questions I need to fix the following error on our ecommerce website urgently: Warning: f...following error on our ecommerce website urgently: Warning: file_put_contents(/home/www/public_html/var/cache//mage-tags/mage---81c_MAGE): failed to open stream: Permission denied in /home/www/public_html/vendor/colinmollenhour/cache-backend-file/[log ind for at se URL] on line 663 Hi, I want to fix 3 problems in prestashop: 1. Mobile menu doesn't...mobile menu (now doesn't appears). Attached image "[log ind for at se URL]" 3. Even logged in with superadmin permissions, i can't go to Shop parameters -> Contact -> Stores. "Acces denied" Seems have to fix permissions in phpmyadmin? Attached image "[log ind for at se URL]&q... I need QB Online Accountant , Book Keeper and Pro Connect Tax Preparer for my ...there is client confidential data, must have proven USA experience, certifications, background checks etc. If you are South Asian, based in USA, without EAD/work visa/H1B/Denied/ Refusal / Renewal etc, please contact ASAP. Please DONT QUOTE for more than $2/hr. Thanks Getting this error when I try to run website There is a duplicate '[log ind for at se URL]' sectio...type="[log ind for at se URL], [log ind for at se URL], Version=1.0.61025.0, Culture=neutral, Public When I don't get this error, I get an access denied error based on the location. for each ...query to [log ind for at se URL] with the card information and amount. It should then take the return (approved or denied) and store it in the batch log. When it's approved, it should tag that item in the batch as "Completed". If one card in the batch is denied it should note that the batch is not complete. 
The user can then update the card on file for the custo... ...When customer clicks in Accept+30 email: Accept: The owner gets info that reservation with new time has been accepted. Deny: The owner gets info that the reservation is denied. Backend: The webpage does need a backend-page with login, where the owner can set blackout-dates (dates on which no reservation is possible and also to set the close-day ...group - the creation of the Key "KeyName" with access granted only to the user "UserName" - the Key "KeyName" have to be updated with the value "new value" by the function UpdateRegKey() The result is just this but this have to be done impersonating the created user and only that user can have access to that Key. There are only a few funct.... .../////////////////////////////////////// Please fix my email issues .............................................................................. ........... Warning: mysql_connect(): Access denied for user 'globbiz_tkt'@'localhost' (using password: YES) in /home/globbiz/public_html/marilyn_email/[log ind for at se URL] on line 5 Unable to select database FI... need an error fixed in my email program I will upload thefile and the error message to you ERROR Warning: mysql_connect(): Access denied for user 'globbiz_tkt'@'localhost' (using password: YES) in /home/globbiz/public_html/marilyn_email/[log ind for at se URL] on line 5 Unable to select database FILEIS UPLOADED
https://www.dk.freelancer.com/job-search/oscommerce-admin-access-denied/
CC-MAIN-2019-09
refinedweb
2,063
64.2
NAME

s390_pci_mmio_write, s390_pci_mmio_read - transfer data to/from PCI MMIO memory page

SYNOPSIS

#include <asm/unistd.h>

int s390_pci_mmio_write(unsigned long mmio_addr, void *user_buffer, size_t length);
int s390_pci_mmio_read(unsigned long mmio_addr, void *user_buffer, size_t length);

DESCRIPTION

The s390_pci_mmio_write() system call writes length bytes of data from the user-space buffer user_buffer to the PCI MMIO memory location specified by mmio_addr. The s390_pci_mmio_read() system call reads length bytes of data from the PCI MMIO memory location specified by mmio_addr into the user-space buffer user_buffer.

RETURN VALUE

On success, s390_pci_mmio_write() and s390_pci_mmio_read() return 0. On error, -1 is returned and errno is set to one of the error codes listed below.

ERRORS

- EFAULT - The address in mmio_addr is invalid.
- EFAULT - user_buffer does not point to a valid location in the caller's address space.
- EINVAL - Invalid length argument.
- ENODEV - PCI support is not enabled.
- ENOMEM - Insufficient memory.
https://manpages.debian.org/buster/manpages-dev/s390_pci_mmio_read.2
CC-MAIN-2022-21
refinedweb
120
54.59
Hi guys, I am trying to write a program that takes in a file, which has the last name, first name, and grade of a student. I then need to sort the grades from highest score to lowest with the corresponding names, and then calculate the average grade. I have successfully opened the file and calculated the average but cannot figure out the sorting. This is a homework assignment so I would really just like pointers on how to do this. Thank you for the help in advance!

#include "stdio.h"
#include "stdlib.h"
#include "string.h"

main(){
    FILE *data;
    char line[35];
    char grade[35];
    const int a = 34;
    int i, j, l, t;
    float average, sum, g;

    data = fopen("/home/varnes/hw06/hw6_grades.txt", "r");
    if ( data == NULL ){
        printf("Error: Cannot open file\n");
        exit(1);
    }

    for ( j = 0; j < a; j++){
        while (fgets (line, sizeof(line), data) != NULL){
            printf(line);
            for ( l = 0; l < 1; l++){
                strncpy(grade, line+21, 26);
                g = atof(grade);
                sum += g;
            }
            for(i = 0; i < 33; i++){
                if((grade[i]) > (grade[i+1])){
                    t = grade[i];
                    grade[i] = grade[i+1];
                    grade[i+1] = t;
                }
            }
        }
        printf("\n%c ", grade[i]);
    }

    average = sum/a;
    printf("\nSum is: %f", sum);
    printf("\nAverage grade: %f\n", average);
}
https://www.daniweb.com/programming/software-development/threads/435508/sorting-and-reading-text-from-file
CC-MAIN-2022-27
refinedweb
212
78.99
XML Offload and Acceleration with Cell Broadband Engine

Abstract

XML processing is becoming performance critical for an increasingly wide range of applications, occupying up to 90% of the CPU time. Novel approaches to XML processing can thus have a decisive impact on market competitiveness. Inspired by a novel state-machine based approach to XML processing, we have developed a high-performance XML parser that runs efficiently both on Intel processors and the new Cell Broadband Engine processor. It offers an industry-standard Simple API for XML interface and is, on an Intel Pentium M processor, 2.3 times as fast as libxml, although it offers the same interface. With eight parallel parse threads running on a single Cell Broadband Engine processor, the total parse throughput is estimated to be 3.0 times that of libxml on an Intel Pentium M processor. At this point, our parser processes XML 1.0 documents encoded in UTF-8, but can easily be extended to support XML 1.1 and other encodings.

Introduction

The high-performance XML parser described in this paper is part of the system architecture for XML offload and acceleration presented in Cell Processor-Based Workstation for XML Offload System Architecture and Design [1] and System Architecture for XML Offload to a Cell Processor-Based Workstation [2]. This system architecture aims to exploit the novel Cell Broadband Engine (Cell BE) processor architecture for XML processing and consists of two elements: a basic offload infrastructure that delegates XML processing tasks from Java-based application systems to designated offload systems, and a high-performance XML parser on these offload systems. While our paper at the XML 2005 Conference [2] focused on the basic offload infrastructure and the technology underlying the XML accelerator parser, this paper gives more details about the parser's design and implementation.

The challenge of parsing XML

XML is a World Wide Web Consortium (W3C) recommended document format [4], [5] and widely used even in places that it was not originally designed for, e.g., inter-process communication using the Simple Object Access Protocol (SOAP) [5]. Unfortunately, XML parsing is inherently inefficient, a problem that is ultimately due to XML's design goal of being human-readable, and that, coupled with the wide use of XML, generates a bottleneck in an increasing number of applications. It is not unusual for an application to spend 30% of its time just parsing XML, and the situation is much worse for some database applications that process very large XML files.

Cell BE as an XML accelerator

Our project goal was to employ the unmatched cost/performance ratio of the new Cell BE processor architecture implemented in the new Cell BE processor to offer XML parsing services on a dedicated processor and thus free up resources on more costly computer systems, e.g., mainframe computers. The Cell BE processor features nine cores on a single chip: one Power Processor Element (PPE) running the operating system and eight independent Synergistic Processing Elements (SPEs) running the application threads [6]. Figure 1 illustrates this structure. While the PPE is an almost fully-fledged IBM Power Architecture processor, the SPEs are in essence vector units optimized for Single Instruction Multiple Data (SIMD) operations such as parallel arithmetic: for example, each SPE can execute eight 16-bit additions in parallel in a single instruction.
The SPEs' instruction set supports the execution of self-contained programs; however, due to the size restrictions (since the Cell BE processor has nine cores on a single chip, each core has a limited chip area and a limited number of transistors at its disposal), the SPEs do not offer advanced control structures such as branch prediction, out-of-order speculative execution, or multiple instruction issue.

Figure 1: Structure of the Cell BE processor

As XML parsing is inherently serial in nature, it appears practically infeasible to make use of all eight SPEs in a single parse thread. Instead, our approach is to parse eight documents in parallel, which is just as good in many applications, especially in an offload environment.

Low-memory state-machine approach

Each of the Cell BE processor's eight SPEs has only 256KBytes of local store, a fast memory in SRAM technology. The only access to the much larger main memory is through Direct Memory Access (DMA), an asynchronous protocol for memory transfers. Due to the latency of DMA requests, the program itself and all immediately accessible data must reside in the local store. The Zürich XML Accelerator (ZuXA) state machine approach developed by IBM researchers features a very low memory footprint and is thus ideally suited for the limited memory available on the Cell BE processor.

Cell BE XML parser architecture

Our approach to employing the parallelism of the Cell BE processor architecture is to use each SPE as a stand-alone parsing engine; thus eight XML documents can be parsed in parallel. This approach assumes, of course, that there are always enough parsing requests at a time in order to use the Cell BE processor's full potential. As the parser is targeted at a dedicated XML offload environment, this should not be an issue in practice.

The parser consists of a front-end, which is located on the PPE core, and eight independent SPE parsing units that form the parser's back-end. Figure 2 shows this design.

Figure 2: Design of the parser (SAXy)

The front-end acts as a controller that delegates incoming parsing requests to idle SPE parsing units. This mechanism is implemented by a job stack onto which incoming parse requests are pushed. Each SPE thread has a corresponding PPE thread that is responsible for fetching an outstanding parse job from the job stack, controlling data streaming back and forth between PPE and SPE through DMA operations, and invoking Simple API for XML (SAX) event callbacks on the registered parsing client functions. SPE and PPE communicate through the Cell BE processor's intrinsic mailbox mechanism.

As stated before, XML parsing is done by a program loaded onto each SPE. The parsing process itself can be seen as a transformation from plain XML into a normalized internal stream of binary records, each representing a SAX event. Because these binary event streams directly encode SAX events, the controller threads on the PPE can efficiently extract the corresponding arguments and trigger SAX events by calling the appropriate registered callback functions. The SAX interface provided by the front-end makes it easy to integrate our XML parser with new as well as existing applications. If so desired, reading the binary event stream directly would eliminate the overhead of creating SAX events and thus might offer a slight performance advantage.

Another important fact is that we developed our parser both on the Cell BE platform and on a conventional single-processor platform.
Front-end and back-end parts of our parser can be compiled into a single executable by setting a compiler flag. This allows our parser to be employed even beyond the Cell BE platform it was targeted at in the first place. We were surprised about how well our parser performed on a conventional processor. For more details on our performance metrics see the section Performance measurements. The next section will give a brief introduction to the distinctive challenges of XML parsing on the Cell BE processor and the parsing algorithm we implemented.

XML-parsing state machine

As mentioned before, ZuXA is a state-machine based approach to XML processing. It has a very small memory footprint and is thus ideally suited for the Cell BE processor's SPEs, each of which has only 256KBytes of local memory. It is based on a novel finite state machine (FSM) technology that is described in [2], [7] and [8], and consists of:

- A set of states.
- A collection of internal data structures such as a state stack and storage area for namespace declarations, element names, and attributes.
- A set of transitions between the states, which take the state machine from one state to another when they fire. The firing of each transition depends on various aspects of the state machine's state and on the input character, and upon firing a transition executes a sequence of instructions from a predefined instruction set.

Thus the state machine can be modeled as a graph with one node for each state, and directed arcs representing the transitions. Each arc is then labeled with the conditions and instructions associated with that transition. Having internal data structures such as stacks renders the state machine much more powerful than a traditional finite state machine: a normal FSM could not detect a^nb^n and similar languages (which can easily be proved by the Pumping Lemma), let alone parse XML.

A refined set of states and transitions has been developed to implement the logic of XML parsing as described by [3] and [4], respectively. The resulting state diagram is specified in an abstract format from which central parts of the parser's source code, namely rule selection and instruction handling, are generated in an automated fashion.

Compiling the state machine

Only the main loop and the instruction set implementation, along with the data structures they work on, have been implemented by hand. The remaining parser code was generated from an abstract specification of the parser using a short Perl script.

Internal data structures

As said before, a number of basic data structures in the parser have been written by hand. Among these are: a memory containing the keywords that are defined by the W3C's XML recommendation [3], e.g., DOCTYPE and CDATA; a stack of strings that stores the element names as the FSM descends into deeper nesting levels of the document structure; a table containing the names of attributes, their namespace prefixes as well as the namespace Uniform Resource Identifiers (URIs); and a tree storing the namespace declarations as they occur.

Performance tuning

We employed several techniques for speeding up the parsing process. Most of these are generic and speed up parsing on any processor, i.e., they are not specific to the Cell BE processor.

Grouping transitions into buckets

From each state originates a set of up to seven state transitions, each consisting of a destination state, a condition, and an associated sequence (or bundle) of instructions.
The transitions originating from a state are given in decreasing order of priority, i.e., the first transition in this sequence whose condition matches will fire. To avoid having to check up to seven conditions until we find the first matching one, we divide the transitions originating from a state into 16 groups, or buckets. These groups overlap, i.e., a transition can occur in more than one or even in all 16 buckets. The buckets are indexed by four bits from the UTF-8 input character's first byte. Although a state can contain up to seven outgoing transitions, through careful selection of these bits we can guarantee that each bucket holds at most three transitions. A transition is placed in every bucket whose four index bits correspond to input values under which it could potentially fire. Thus, e.g., a transition matching only one specific character such as < will be in only one bucket, while transitions whose conditions do not depend on the input will end up in all 16 buckets. When we then read an input character, we can calculate the number of the corresponding bucket from the state number and four bits of that character. The bucket then contains a reduced set of transitions that can potentially fire (at most three, compared to the maximum of seven rules in a state).

Compiling rule selection logic

We use seven bits to identify a state. These seven bits together with the four bits from the input can take 2^11 = 2048 values, hence there are 2048 buckets in total. Many of the 2048 buckets are identical, i.e., they contain the same conditional instruction bundle (CIB). The CIB is defined as the sequence of state transitions without the destination states, i.e., a sequence of (up to three) pairs of a condition and a sequence of instructions. We assign a unique identifier to each CIB and then map each of the 2048 buckets to one of the CIBs. We have to keep a separate array of size 2048 * 3 for the destination states, because there are up to three transitions per bucket and because the CIBs do not contain the destination state. The small number of CIBs then allows us to compile the rule selection within each CIB directly into C code. Remember that the transitions for a state (and thus within a bucket) are sorted in decreasing order of priority, and we have to identify and execute the first transition whose condition matches. We thus generate a conditional instruction bundle handler that contains a large switch statement. Each case of the switch statement corresponds to one CIB and contains up to three transitions. Every bucket that maps to a specific CIB thus contains these transitions in the given order, even if they may point to different destination states. Using the index of the transition that fired, the corresponding destination state for each of the 2048 buckets is looked up in the 2048 * 3 array.

Efficient selection of transition rules

One of the most performance-critical steps is the selection of the state-machine transition that will fire, depending on the current internal state (the state number and the internal data structures) and the input character, as this step is in the main loop and has to be executed for every single character. (Actually, at some points, e.g., within content, where several cycles are spent in the same state because a transition loops back onto it, it is possible to extend the chunk of input processed at once by the state machine beyond a single character.)
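The article describes this lookup scheme without showing code. The following C sketch (names and bit choices invented; only the sizes follow the description: 7 state bits plus 4 input bits give 2048 buckets of at most 3 transitions) illustrates how the generated rule selection might be wired together:

/* Sketch only: run_cib() stands in for the generated switch over CIBs. */
#include <stdint.h>

#define NUM_BUCKETS     2048
#define MAX_TRANSITIONS 3

extern uint16_t bucket_to_cib[NUM_BUCKETS];               /* bucket -> CIB id          */
extern uint8_t  dest_state[NUM_BUCKETS][MAX_TRANSITIONS]; /* bucket x fired transition */

/* Generated handler: one switch case per CIB; returns the index (0..2)
   of the first transition whose condition matched. */
int run_cib(uint16_t cib_id, uint8_t input_byte);

static inline uint16_t bucket_index(uint8_t state, uint8_t input_byte)
{
    /* 7 state bits combined with 4 bits taken from the input character's
       first byte; which 4 bits are chosen is a design decision not given here. */
    return (uint16_t)((state & 0x7F) << 4) | ((input_byte >> 2) & 0x0F);
}

uint8_t step(uint8_t state, uint8_t input_byte)
{
    uint16_t bucket = bucket_index(state, input_byte);
    int      fired  = run_cib(bucket_to_cib[bucket], input_byte);
    return dest_state[bucket][fired];
}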
Limitations and caveats

Given the relatively short time in which the parser was developed, it is obvious that some features had to be neglected in order to provide a fast and stable implementation. This section describes these limitations and enables the reader to judge their impact on the performance metrics and the effort that would have to be undertaken to overcome them. At this point, our parser can only handle XML 1.0 documents encoded in UTF-8. A further limitation is that our parser does not parse internal (or external) DTDs but skips them instead. Given that, it is clear that no validity checking can be performed. Skipping a DTD has some impact on well-formedness checking, too. Documents that contain DTDs and make use of parameter entities might be wrongly considered well-formed, as entity references cannot be resolved by our parser. Further minor limitations are: event stream records cannot exceed four KBytes, and multi-byte characters that do not conform to UTF-8 might remain undetected. We think that all these limitations, apart from the validation issue, can be overcome with some very justifiable effort. However, as the parser already uses most of the SPE's 256KBytes of local store, we see no way to add validation to the feature list, as far as the Cell BE processor is concerned. As various parts of the implementation rely on UTF-8 as character encoding, the most straightforward way to overcome this limitation would be to introduce a character-encoding transformation layer. This would of course imply some penalty on the end-to-end latency. We think that DTD/Schema validation and support for different character encodings are the only features that could negatively impact performance.

Support for XML 1.1

It should be possible, with not too much work, to extend the parser in such a way that it supports XML 1.1. The main difference between XML 1.0 and 1.1 is that some character classes have changed, and thus of course the character classifier has to be changed accordingly. Classification during the execution of the state machine is handled by the character classifier, which is generated by a Perl script; thus the latter has to be extended to respect the new XML 1.1 character classes. Remember that a rule has to be in a bucket if it is at all possible that this rule will fire when the selected four bits of the first input byte have the corresponding value. Thus, if new characters were added to a class, the bit selection might have to change. If the parser were to support both XML 1.0 and 1.1, a rule would have to be in a bucket if it could fire in that bucket for XML 1.0 or for 1.1, because the bucketing is done before compile time, and thus long before we know what kind of document we are to process.

Performance measurements

The following performance measurements, performed under Linux, compare our parser with the libxml parser [9], a widely used validating SAX parser. The benchmark workload consisted of ten files taken from a collection of XML documents from IBM DB2 customers, ranging from two KBytes to multiple MBytes. During each benchmark run, the respective parser was called a few hundred times, each time parsing a document one hundred times in a row. Figure 3 shows the average results of these benchmark runs using a normalized scale. On an Intel Pentium M processor, our parser (SAXy) is 2.3 times as fast as libxml.
On a Cell BE processor, with eight independent parse threads running in parallel, the total parse throughput is estimated to be 3.0 times that of libxml on an Intel Pentium M processor.

Figure 3: Results of the performance measurements

Summary

Inspired by a novel finite state machine technology, we were able to successfully implement a non-validating high-performance XML parser that fits into the limited local store of the Cell BE processor's SPEs. The parser consists of a front-end that provides a SAX application interface, and eight back-end parse threads on the SPEs. Based on the hand-written implementation of the main loop, the data structures, and the instruction set, we developed a process to automatically generate the remaining parser code from a given abstract specification. Currently, our parser is restricted to XML 1.0 documents encoded in UTF-8 and does not parse DTDs. Both limitations could be resolved, but would impose a certain penalty on the overall performance. Validation does not seem to be feasible because of the SPEs' limited memory. After multiple iterations of performance tuning, our parser is 2.3 times as fast as libxml on an Intel Pentium M processor and, with eight parallel parse threads on a Cell BE processor, is expected to achieve 3.0 times the throughput of libxml on an Intel Pentium M processor.

Acknowledgements

We thank Jan van Lunteren for his continuous contributions to our project. We thank Joseph Bostian for his valuable input and support.

Trademark notices

IBM, Power Architecture, and DB2 are trademarks of International Business Machines in the United States, other countries, or both. Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc. Intel and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

Bibliography

[1] S. Letz, Cell Processor-Based Workstation for XML Offload: System Architecture and Design, University of Leipzig, Department of Computer Science, Leipzig, Germany, May 2005.
[2] S. Letz, R. Seiffert, J. van Lunteren, and P. Herrmann, System Architecture for XML Offload to a Cell Processor-Based Workstation, Proceedings of the XML 2005 Conference (XML2005), Atlanta, GA, USA, November 2005.
[3] Extensible Markup Language (XML) 1.0, W3C Recommendation.
[4] Extensible Markup Language (XML) 1.1, W3C Recommendation.
[5] Simple Object Access Protocol (SOAP) 1.2, W3C Recommendation.
[6] D. Pham et al., The Design and Implementation of a First-Generation CELL Processor, Proceedings of the 2005 IEEE International Solid State Circuits Conference (ISSCC), San Francisco, CA, USA, February 2005.
[7] J. van Lunteren and A.P.J. Engbersen, A High-Performance Pattern-Matching Engine for Intrusion Detection, Hot Chips 17, Stanford University, Palo Alto, CA, USA, August 2005.
[8] J. van Lunteren and A.P.J. Engbersen, XML Accelerator Engine, First International Workshop on High Performance XML Processing, in conjunction with the 13th International World Wide Web Conference (WWW2004), New York, NY, USA, May 2004.
[9] libxml: The XML C parser and toolkit for Gnome.
Frequently Asked Questions

Here are some answers to frequently asked questions. If you have a question, you can ask it by opening an issue on the Storybook Repository.

How can I opt-out of Angular Ivy?

In case you are having trouble with Angular Ivy, you can deactivate it in your main.js; one way to do this is sketched at the end of this section. Please report any issues related to Ivy in our GitHub Issue Tracker, as support for View Engine will be dropped in a future release of Angular.

How can I run coverage tests with Create React App and leave out stories?

Create React App does not allow providing options to Jest in your package.json; however, you can run Jest with command-line arguments that exclude story files from coverage.

I see ReferenceError: React is not defined when using Storybook with Next.js

Next automatically defines React for all of your files via a babel plugin. In Storybook, you can solve this either by:

- Adding import React from 'react' to your component files.
- Adding a .babelrc that includes babel-plugin-react-require

How do I setup Storybook to share Webpack configuration with Next.js?

You can generally reuse webpack rules by placing them in a file that is require()-ed from both your next.config.js and your .storybook/main.js files.

How do I setup React Fast Refresh with Storybook?

Fast refresh is an opt-in feature that can be used in Storybook React. There are two ways that you can enable it; go ahead and pick one:

- You can set a FAST_REFRESH environment variable in your .env file.
- Or you can set the corresponding property in your .storybook/main.js file.

Both options are sketched at the end of this section.

How do I setup the new React Context Root API with Storybook?

If your installed React version is 18.0.0 or higher, the new React Root API is automatically used and the newest React concurrent features can be used. You can opt out of the new React Root API by setting a property in your .storybook/main.js file (see the sketch at the end of this section).

Why is there no addons channel?

A common error is that an addon tries to access the "channel", but the channel is not set. It can happen in a few different cases:

- You're trying to access the addon channel (e.g., by calling setOptions) in a non-browser environment like Jest. You may need to add a channel mock (sketched at the end of this section).
- In React Native, it's a special case documented in #1192.

Why aren't Controls visible in the Canvas panel but visible in the Docs panel?

If you're adding Storybook's dependencies manually, make sure you include the @storybook/addon-controls dependency in your project and reference it in the addons list of your .storybook/main.js (see the sketch at the end of this section).

Why aren't the addons working in a composed Storybook?

Composition is a new feature that we released with version 6.0, and there are still some limitations to it. For now, the addons you're using in a composed Storybook will not work. We're working on overcoming this limitation, and soon you'll be able to use them as if you are working with a non-composed Storybook.

Which community addons are compatible with the latest version of Storybook?

Starting with Storybook version 6.0, we've introduced some great features aimed at streamlining your development workflow. With this, we would like to point out that if you plan on using addons created by our fantastic community, you need to consider that some of those addons might be working with an outdated version of Storybook. We're actively working to provide a better way to address this situation, but in the meantime, we would ask for a bit of caution on your end so that you don't run into unexpected problems.
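Several of the answers above describe options set in .storybook/main.js or in a Jest setup file. The sketches below show commonly documented Storybook 6.x forms of those options; treat the exact keys (angularOptions.enableIvy, reactOptions.fastRefresh, reactOptions.legacyRootApi) as assumptions to verify against the version you are running.

// .storybook/main.js (sketch)
module.exports = {
  addons: ['@storybook/addon-controls'],   // makes Controls available in the Canvas panel
  angularOptions: { enableIvy: false },    // assumed key: opt out of Angular Ivy
  reactOptions: {
    fastRefresh: true,                     // enable React Fast Refresh
    legacyRootApi: true,                   // opt out of the React 18 Root API
  },
};

Fast Refresh can alternatively be switched on by putting FAST_REFRESH=true in your .env file. For the missing-channel case in Jest, the usual mock looks like this:

// Jest setup file (sketch)
import { addons, mockChannel } from '@storybook/addons';

addons.setChannel(mockChannel());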
Let us know by creating an issue in the Storybook repo so that we can gather information and create a curated list with those addons to help not only you but the rest of the community.

Is it possible to browse the documentation for past versions of Storybook?

With the release of version 6.0, we updated our documentation as well. That doesn't mean that the old documentation was removed. We kept it to help you with your Storybook migration process. Use the content from the table below in conjunction with our migration guide. We're only covering versions 5.3 and 5.0, as they were important milestones for Storybook. If you want to go back in time a little more, you'll have to check the specific release in the monorepo. Note that the storiesOf format has been removed from these docs. For the time being, we're still supporting it, and we have documentation for it. But be advised that this is bound to change in the future.

What icons are available for my toolbar or my addon?

With the @storybook/components package, you get a set of icons that you can use to customize your UI. Use the table below as a reference while writing your addon or defining your Storybook global types. Go through this story to see how the icons look.

I see a "No Preview" error with a Storybook production build

If you're using the serve package to verify your production build of Storybook, you'll get that error. It relates to how serve handles rewrites. For instance, /iframe.html is rewritten into /iframe, and you'll get that error. We recommend that you use http-server instead to preview Storybook (a sketch of the command is given at the end of this section). You can also add http-server as a development dependency and create a new script to preview your production build of Storybook.

Can I use Storybook with Vue 3?

Yes, with the release of version 6.2, Storybook now includes support for Vue 3. See the install page for instructions.

Is snapshot testing with Storyshots supported for Vue 3?

Yes, with the release of version 6.2, the Storyshots addon will automatically detect Vue 3 projects. If you run into a situation where this is not the case, you can adjust the config object and manually specify the framework (e.g., vue3). See our documentation on how to customize the Storyshots configuration.

Why are my MDX stories not working in IE11?

Currently there's an issue when using MDX stories with IE11. This issue does not apply to DocsPage. If you're interested in helping us fix this issue, read our Contribution guidelines and submit a pull request.

Why aren't my code blocks highlighted with Storybook MDX

Out of the box, Storybook provides syntax highlighting for a set of languages (e.g., Javascript, Markdown, CSS, HTML, Typescript, GraphQL) that you can use with your code blocks. If you're writing your custom code blocks with MDX, you'll need to import the syntax highlighter manually. For example, if you're adding a code block for SCSS, you'll need to register the SCSS language with the highlighter (a sketch is given at the end of this section); see react-syntax-highlighter's documentation for a list of available languages. Applying this small change will enable you to add syntax highlighting for SCSS or any other available language.

Why aren't my MDX 2 stories working in Storybook?

MDX 2 introduced some changes to how the code is rendered. For example, if you enabled it in your Storybook, existing code blocks may need to be updated to make them compatible with MDX 2. See the following issue for more information.

Why can't I import my own stories into MDX 2?

This is a known issue with MDX 2. We're working to fix it.
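For the two answers above, a pair of hedged sketches. To preview a production build without serve's rewrite problem, assuming the default storybook-static output directory:

npx http-server ./storybook-static

To register SCSS with the highlighter used in MDX docs (verify the import paths against your Storybook and react-syntax-highlighter versions):

// register SCSS for Storybook's SyntaxHighlighter (sketch)
import { SyntaxHighlighter } from '@storybook/components';
import scss from 'react-syntax-highlighter/dist/esm/languages/prism/scss';

SyntaxHighlighter.registerLanguage('scss', scss);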
For now, a workaround is available; see the MDX 2 tracking issue for details.

Why are my mocked GraphQL queries failing with Storybook's MSW addon?

If you're working with Vue 3, you'll need to install @vue/apollo-composable. With Svelte, you'll need to install @rollup/plugin-replace and update your rollup.config file accordingly. With Angular, the most common issue is the placement of the mockServiceWorker.js file. Use this example as a point of reference.

Can I use other GraphQL providers with Storybook's MSW addon?

Yes, check the addon's examples to learn how to integrate different providers.

Can I mock GraphQL mutations with Storybook's MSW addon?

No, currently the MSW addon only has support for GraphQL queries. If you're interested in including this feature, open an issue in the MSW addon repository and follow up with the maintainer.

How can my code detect if it is running in Storybook?

You can do this by checking for the IS_STORYBOOK global variable, which will equal true when running in Storybook. The environment variable process.env.STORYBOOK is also set to true (see the sketch at the end of this section).

Why are my stories not showing up correctly when using certain characters?

Storybook allows you to use most characters while naming your stories. Still, specific characters (e.g., #) can lead to issues when Storybook generates the internal identifier for the story, leading to collisions and to the wrong story being displayed. We recommend using such characters sparingly.

Why are the TypeScript examples and documentation using as for type safety?

We're aware that the default Typescript story construct might seem outdated and could potentially introduce a less than ideal way of handling type safety and strictness, and that it could be rewritten with explicit type annotations. Although valid, that introduces additional boilerplate code to the story definition. Instead, we're working towards implementing a safer mechanism based on what's currently being discussed in the following issue. Once the feature is released, we'll migrate our existing examples and documentation accordingly.

Why is Storybook's source loader returning undefined with curried functions?

This is a known issue with Storybook. If you're interested in getting it fixed, open an issue with a working reproduction so that it can be triaged and fixed in future releases.

Why are my args no longer displaying the default values?

Before version 6.3, unset args were set to the argTypes.defaultValue if specified, or inferred from the component's properties (e.g., React's prop types, Angular inputs, Vue props). Starting with version 6.3, Storybook no longer infers default values but instead defines the arg's value as undefined when unset, allowing the framework to supply its default value. If you are using argTypes.defaultValue to fix the above, you no longer need to, and you can safely remove it from your stories. Additionally, if you were using argTypes.defaultValue or relying on inference to set a default value for an arg, you should define the arg's value at the component level instead. For Storybook's Docs, you can manually configure the displayed value by configuring the table.defaultValue setting (see the sketch at the end of this section).

Why isn't Storybook's test runner working?

There's an issue with Storybook's test runner and the latest version of Jest (i.e., version 28), which prevents it from running effectively. As a workaround, you can downgrade Jest to the previous stable version (i.e., version 27), and you'll be able to run it. See the following issue for more information.

How does Storybook handle environment variables?
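The two settings referenced above can be sketched as follows; Button and the arg names are placeholders, not part of any real project.

// detect Storybook at runtime (sketch)
if (process.env.STORYBOOK === 'true' || (typeof window !== 'undefined' && window.IS_STORYBOOK)) {
  // Storybook-only behaviour goes here
}

// set a component-level default and document it for Docs (sketch)
export default {
  component: Button,
  args: { label: 'Click me' }, // value used when the arg is unset
  argTypes: {
    label: { table: { defaultValue: { summary: 'Click me' } } }, // what Docs displays
  },
};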
Storybook has built-in support for environment variables. By default, environment variables are only available in Node.js code and are not available in the browser, as some variables should be kept secret (e.g., API keys) and not exposed to anyone visiting the published Storybook. To expose a variable, you must prefix its name with STORYBOOK_: STORYBOOK_API_URL will be available in browser code, but API_KEY will not. You can also customize which variables are exposed by setting the env field in the .storybook/main.js file. Variables are set when the JavaScript is compiled, i.e., when the development server is started or when you build your Storybook. Environment variable files should not be committed to Git, as they often contain secrets. Instead, add .env.* to your .gitignore file and set up the environment variables manually on your hosting provider (e.g., GitHub).
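The env field mentioned above is a function that receives the existing configuration and returns the variables to expose; EXAMPLE_VAR below is a placeholder name, not a real setting.

// .storybook/main.js — expose an extra variable to the browser (sketch)
module.exports = {
  env: (config) => ({
    ...config,
    EXAMPLE_VAR: 'value',
  }),
};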
#include <fstream>

void clear( iostate flags = ios::goodbit );

The function clear() does two things: it discards whatever state flags are currently set on the stream, and it sets the stream's state to the given flags. The flags argument defaults to ios::goodbit, which means that by default all flags will be cleared and ios::goodbit will be set.

For example, the following code uses the clear() function to reset the flags of an output file stream, after an attempt is made to read from that output stream:

fstream outputFile( "output.txt", fstream::out );

// try to read from the output stream; this shouldn't work
int val;
outputFile >> val;
if( outputFile.fail() ) {
  cout << "Error reading from the output stream" << endl;

  // reset the flags associated with the stream
  outputFile.clear();
}

for( int i = 0; i < 10; i++ ) {
  outputFile << i << " ";
}
outputFile << endl;
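Because clear() replaces the entire flag state, it can also be used to set a specific flag deliberately. A small self-contained example:

#include <iostream>
#include <fstream>
using namespace std;

int main() {
    ifstream in( "missing.txt" );            // may or may not open successfully
    in.clear( ios::failbit );                // state is now exactly failbit
    cout << boolalpha << in.fail() << endl;  // prints true
    in.clear();                              // back to goodbit
    cout << in.good() << endl;               // prints true
    return 0;
}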
Language Instincts
by Jon Udell
September 17, 2003

Back in April I made the case for writing weblog entries in XHTML, using CSS for a dual purpose: to control presentation and as hooks for structured search. I then started to accumulate well-formed content, writing CSS class attributes with an eye toward data mining, and flowing XHTML content through my RSS feed. The basic elements of the plan were sketched out in my June column.

The backstory is as follows. I'd noticed that other bloggers had begun to develop an informal convention -- they were using the term "minireview" to identify items that were (or that contained) brief reviews of products. A minireview might be an entire weblog item or just a paragraph within an item. In the monkey-see, monkey-do tradition of the Web, I decided to imitate this behavior. I also wanted to expand its scope. The long-term goal would be to enable me (or anyone) to identify these kinds of elements in a way that would facilitate intelligent search and recombination. But that would require writers to categorize their material, and we all know that's a non-starter.

The absence of a universal taxonomy is the least of our problems. Even if such a thing existed (or could exist), we'd be loath to apply it because we are lazy creatures of habit. We invest effort expecting immediate return, not some distant future reward. What would motivate somebody to tag a chunk of content? It struck me that people care intensely about appearances, self-presentation, and social conformity. Look at the carefully handcrafted arrangements of links on blogrolls -- some ordered by ascending width, some undulating like candlesticks. We do these things despite our inherent laziness because we have seen others do them, because we want to express solidarity with the tribe, and because we hope to be trend-setters, not just trend-followers. Maybe we can leverage the machinery of meme propagation to achieve some semantic enrichment of the Web. Start with visual effects that people can easily create and that other people will want to copy. Tie those effects to tags that can also provide structural hooks. Then exploit the hooks.

RSS, XHTML, and XML databases

In the original plan, RSS was the conduit through which the enhanced content would flow. If a meme did propagate, search services that compiled the XHTML content of blog items into their databases could aggregate along this new axis, thus amplifying the effect. I still envision that scenario, but I'm as much a seeker of instant gratification as the next person, and I wanted immediate use of my own enhanced content. So I extracted the XHTML content I'd been accumulating in my Radio UserLand database, stuck it in a file, and put together a JavaScript/XSLT kit for searching it (1, 2, 3). And then a funny thing happened: the XML file took on a life of its own.

For no particularly good reason, I'd decided to tag quotations like so:

<p class="quotation" source="...">

Over on the Bitflux blog, Roger Fischer noted correctly that this was kind of silly. It unnecessarily invents a 'source' attribute that doesn't exist in XHTML, and that should therefore appear in another namespace. But in any case it's overkill because XHTML affords a natural solution:

<blockquote cite="...">

I agreed with Roger, so I made the change in the XML file (it was just a simple XSLT transform), and made a corresponding change to the canned XPath query that finds quotations in my blog. My next instinct was to republish the affected items.
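Udell doesn't show the transform itself. Assuming his markup looked exactly as quoted above, a minimal XSLT 1.0 sketch of the change would be an identity transform plus one rewriting template:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- copy everything through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- rewrite the home-grown quotation markup into standard XHTML -->
  <xsl:template match="p[@class='quotation']">
    <blockquote cite="{@source}">
      <xsl:apply-templates select="node()"/>
    </blockquote>
  </xsl:template>
</xsl:stylesheet>

The corresponding canned XPath query would change from //p[@class='quotation'] to //blockquote[@cite].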
But on second thought, why? In the HTML rendering of my blog, the two styles look the same. And the items had already fallen off the RSS event horizon. Republishing wouldn't cause them to appear in the feed. Even if it did, the purely structural changes would be invisible and thus puzzling to readers. This creates a slightly odd situation. The canonical version of my weblog is no longer the published one. Rather, it's an XML document-database, the structure (but not content) of which is evolving and the API of which is XPath search. At some point I'll probably want to resynchronize the two, but for now I'm just interested to see where the experiment leads.

From pidgin to creole

After I posted the blog entries describing this approach, a number of people asked me to specify the tagging conventions I'm using or intend to use. There is no plan or specification. I'd be satisfied for now if people could routinely and easily create styled elements, associate those elements with CSS attributes, embed the CSS in well-formed content, usefully navigate and search the stuff, and easily adjust the tagging across their own content repositories. Meme propagation could and arguably should drive collective decisions about which kinds of elements to name and what to name them.

In The Language Instinct, Steven Pinker describes the transition from pidgin to creole. A pidgin language, which arises when speakers with no native language in common are thrown together and must communicate, lacks a complete grammar. Amazingly, the children of pidgin speakers spontaneously create creole languages that are grammatically complete. It is perhaps a stretch to relate these processes to the evolution of modes of written communication on the Web. But even if you don't buy the whole analogy, it's worth thinking about how human communities can and do converge on naming conventions and then on a grammar. The process is intensely interactive. People imitate other people's ways of communicating, introducing variations that sometimes catch on and sometimes don't. I don't think the Semantic Web will come from a specification that tells us how to name and categorize everything. But it could arise, I suspect, from our linguistic instincts and from the social contexts that nurture them. If that's true, then we need to be able to:

Speak easily and naturally. The structural symbols we embed in our writing, when we write for the Web, have to be easy to understand and use. Style attributes strike me as the likely approach because, while limited in scope, they're available and can be manipulated in familiar ways.

Hear what we are saying. At first I was deaf to the structural language I was trying to speak. I'd invent a use for a CSS class attribute and apply it, in what I thought was a consistent way, but it was really just a promise to the future. Some day I'd get around to harvesting what grew from the seeds I was planting. But when I finally did, I found that my tagging conventions had drifted over time. When I closed the feedback loop on my own weblog's content, by making it available to structured search, I could finally hear -- and thus correct -- that drift.

Imitate and be imitated. My search mechanism has some interesting properties. For example, the canned XPath queries on the form not only make XPath usable for those who don't grok it intuitively, they also advertise the structural hooks that are available. I think of this as an invitation to imitators. Of course I'm an imitator too.
When I see a good idea -- for example, Roger Fischer's suggestion -- I want to copy it. Having the searchable content in one place, available to XSLT or even just find-and-replace, makes quick work of that.

The dictionary of the Semantic Web may one day be written. But not until we've done a lot of yammering, a lot of listening, and a lot of imitating. We need to find ways to help these behaviors flourish.

Reader comment: Genre Evolution (Len Bullard, 2003-09-18 08:45:48)
You are peering into the processes by which genres emerge from practice. The semioticians have a lot to say about that. You might want to check out Daniel Chandler's web pages on the topic. From informal to formal to banal to retro. len
Jokosher is an extensible bit of software. We knew that we'd never be able to think of all the stuff that people would want to do with an audio editor, so we've set up Jokosher so that people can write their own extensions to it. So if you want to do something particular, something very specific, something we don't like, or something we just haven't got around to yet, you can roll up your sleeves and do it yourself. This is the story of how I built the Freesound extension, which lets you easily browse the comprehensive library of sampleable Creative Commons licenced stuff at the Freesound system and then easily drop the samples you like into your Jokosher project.

Jokosher extensions are, basically, tiny separate programs that can talk to Jokosher itself in certain specific ways. So, the Freesound extension isn't really an extension so much as it's a graphical browser for Freesound, which happens to know how to talk to Jokosher a bit. For the moment, let's pretend it's a separate program without any Jokosher bits at all. It works like this:

- The program starts up
- It checks to see if it knows what your Freesound username and password are
- If it doesn't, it pops up a dialog for you to enter them, and then saves them away for next time when you do type them in
- It shows a window containing a search box and a Go button
- When you type something to search for in the search box, it runs a query at Freesound using your username and password, and the thing you searched for, and gets back some XML describing matches
- It puts a list of all the matches in the window, each with a description (that it gets in the XML), and a little waveform image (that it gets in the XML), and a play button (that it creates itself, but that plays the file directly from a URL that it gets in the XML)

…and that's it. The integration with Jokosher only comes in two parts: firstly, starting it up, and secondly, allowing you to add samples that it's displaying to a Jokosher project.

The first part's pretty easy. Jokosher allows an extension to add itself to Jokosher's Extensions menu. The way you do this is pretty simple. To write a Jokosher extension, you need to first create a Python file. In order for that Python file to *work* as a Jokosher extension, it needs to define certain things. Those things are:

- Three variables: EXTENSION_NAME, EXTENSION_DESCRIPTION, and EXTENSION_VERSION
- Two functions: startup() and shutdown()

So, to make a Python file into an extension, you add something like this to the bottom of it:

def startup(API):
    pass

def shutdown():
    pass

EXTENSION_NAME = "Freesound search"
EXTENSION_DESCRIPTION = "Searches the Freesound library of " +\
    "freely licenceable and useable sound clips"
EXTENSION_VERSION = "0.01"

The variables are needed so that Jokosher knows what your extension is called, and can display a useful description to the person using it. When Jokosher starts up, and loads your extension, it calls your extension's startup() function, and it passes it one object: above, we've set the startup() function to call that passed parameter "API". The API object is how extensions can talk to Jokosher. Now, it's important to note that you don't make your extension do all its work in the startup() function. That gets called as Jokosher itself starts up. Instead, we want the Freesound extension to add a menu item to Jokosher's Extensions menu, and then when the user clicks that menu item we start doing the work. The API object has a function called, amazingly, add_menu_item() to do exactly that.
So, our startup function should actually be:

def startup(API):
    # register a menu entry that will trigger the extension's real work
    menu_item = API.add_menu_item("Search Freesound", callback_function)

What this will do is add a menu item "Search Freesound", and when the user clicks on it it'll call callback_function(), which is defined in our extension somewhere. Another minor thing is that anything you do in the startup() function has to be un-done in the shutdown() function. The reason for this is that shutdown() gets run if a user disables your extension. In our case, we need to remove that menu item, which is done using the menu item's destroy() method. So, change startup() so that menu_item is a global variable (so it's available to shutdown() as well), and change shutdown() so it removes it:

def startup(API):
    global menu_item
    menu_item = API.add_menu_item("Search Freesound", mainwindow.menu_cb)

def shutdown():
    # undo everything startup() did
    menu_item.destroy()

That takes care of hooking the extension into Jokosher; the user can now start up and use anything that the extension can do. (Since this little discussion is about how to write Jokosher extensions, we won't discuss how exactly the extension talks to Freesound. Just take it as read.)

The part that's left is: how do you get samples from the extension into your project itself? The way the user does it is to drag a sample from the extension window onto an instrument in Jokosher. Now, Jokosher understands dragged-and-dropped files; you can drag a music clip from Nautilus or from Firefox or from any other application straight onto a Jokosher instrument and it'll work. So the Freesound extension just has to know how to allow users to drag things from it. Technically, this is called being a drag source. How to do this in Python is described in the PyGtk Tutorial — basically, you use code something like this:

e.drag_source_set(gtk.gdk.BUTTON1_MASK, [('text/plain', 0, 88)], gtk.gdk.ACTION_COPY)
e.connect("drag_data_get", self.returndata, sample)

...

def returndata(self, widget, drag_context, selection_data, info, time, sample):
    # hand the sample's preview URL to whatever the user dropped it on
    selection_data.set(selection_data.target, 8, sample.previewURL)

where "e" in the first snippet is the widget displaying the sample, and "sample" is our actual Sample object which we've created from the FreeSound XML. In essence, the Freesound extension works like Firefox; when you drag a sample to a Jokosher instrument, Jokosher says "what's this thing you've dragged to me?" and the extension says "it is a URL at Freesound"; Jokosher then thinks "aha, I know how to load URLs", and loads the sample from that URL.

This is a good example of how working with the Jokosher team can be important for extensions. Jokosher didn't, when I started writing it, know how to load a sample from a URL (it only knew how to load one from your hard drive). To make it work, that needed to be added to Jokosher. (In this case, I added it myself, because I'm a Jokosher developer, but if I wasn't then the team would have happily added it for me.) The Extension API, the way that extensions talk to Jokosher, deliberately isn't complete; we've put the infrastructure in place but we haven't tried to think up everything that everyone would want to do. Instead, if you're writing an extension and you realise that you need Jokosher to be able to do something that it currently can't for your extension to work, talk to us: we'll add that extra part to Jokosher so that your extension works.

This has been a relatively brief summary of writing a Jokosher extension, but hopefully it'll give you some ideas. Now you can get extending!
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

Configuring Boost.TR1 is no different to configuring any other part of Boost; in the majority of cases you shouldn't actually need to do anything at all. However, because Boost.TR1 will inject Boost components into namespace std::tr1, it is more than usually sensitive to an incorrect configuration.

The intention is that Boost.Config will automatically define the configuration macros used by this library, so that if your standard library is set up to support TR1 (note that few are at present) then this will be detected and Boost.TR1 will use your standard library versions of these components rather than the Boost ones.

If you would prefer to use the Boost versions of the TR1 components rather than your standard library, then either:

- include the Boost headers directly:

  #include <boost/regex.hpp>
  boost::regex e("myregex"); // etc

- or else don't enable TR1 in your standard library: since TR1 is not part of the current standard, there should be some option to disable it in your compiler or standard library.

The configuration macros used by each TR1 component are documented in each library section (and all together in the Boost.Config documentation), but defining BOOST_HAS_TR1 will turn on native TR1 support for everything (if your standard library has it), which can act as a convenient shortcut.
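For completeness, a small sketch of the intended usage pattern: include a Boost.TR1 header and program against namespace std::tr1, letting the library decide whether the native or the Boost implementation is used. The header path below follows the Boost.TR1 convention of mirroring the TR1 component name; verify it against the documentation of the Boost release you are using.

#include <boost/tr1/memory.hpp>  // assumed path; provides std::tr1::shared_ptr

int main()
{
    std::tr1::shared_ptr<int> p(new int(42));
    return (*p == 42) ? 0 : 1;
}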
Counting the number of periods since time-series events with Pandas

Posted on Sat 15 July 2017 • 4 min read

This is a cute trick I discovered the other day for quickly computing the time since an event on regularly spaced time series data (like monthly reporting), without looping over the data.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Say we have a list of workplace accidents at different factory locations for a company. We could have a dataframe that looks something like this:

df.head()

Now, our company has decided they want to know how many months each location has gone without an accident, and they want this historically. Maybe they are going to use it as input for a machine learning model that makes monthly predictions, or they might just be curious. Our plan of attack is as follows:

- One-hot encode the severity
- Resample the data so that it is regularly spaced
- For each severity, make a counter that increases per period, resetting whenever there was an accident during that period

Pandas makes step 1 very easy:

df_onehot = pd.get_dummies(df, columns=['severity'])
df_onehot.head()

Next up, we resample. We want the data by location, so we will first group by location and then resample each group. Since we've one-hot encoded the data, the number of accidents in each period is just the sum of all the rows that fall into the period. Periods with no rows will be NaN, so we fill them with 0 since no accidents occurred in that period.

df_periodic = df_onehot.groupby('location').resample('1M').sum().fillna(0)
df_periodic

879 rows × 3 columns

Finally, we want the counter that resets at each period where there was an accident. Let's first do it for one severity and location, and then we'll implement our work on the entire dataset. We'll choose Amsterdam and the lowest severity accidents.

amsterdam_low = df_periodic.loc[('Amsterdam'),'severity_1']
amsterdam_low

time
1980-01-31    1.0
1980-02-29    0.0
1980-03-31    0.0
1980-04-30    0.0
...
2016-06-30    0.0
2016-07-31    0.0
2016-08-31    1.0
2016-09-30    1.0
Name: severity_1, dtype: float64

Okay, so we have a series with the number of accidents per month. Now here comes the trick. What we are going to do is set up two new series with the same index as the reports: one with a count that increases monotonically, and one that has the value of the count at every period where we want to reset.

count = list(range(len(amsterdam_low)))
count = pd.Series(count, index = amsterdam_low.index)
count

time
1980-01-31      0
1980-02-29      1
1980-03-31      2
1980-04-30      3
...
2016-06-30    437
2016-07-31    438
2016-08-31    439
2016-09-30    440
dtype: int64

resets = count.where(amsterdam_low > 0)
resets

time
1980-01-31      0.0
1980-02-29      NaN
1980-03-31      NaN
1980-04-30      NaN
...
2016-06-30      NaN
2016-07-31      NaN
2016-08-31    439.0
2016-09-30    440.0
dtype: float64

Now we forward fill the values in resets using .fillna(method='pad'). That will give us a series of constant values, which step up by some amount at each index where there was an accident in amsterdam_low. This series will act as a baseline which we can subtract from count, so that at each accident the resulting series will reset to zero and then start counting up again. The first values before the first accident in the dataset will still be NaN, which is the desired behaviour because we don't know what these values should be.

resets = resets.fillna(method='pad')
resets

time
1980-01-31    0.0
1980-02-29    0.0
1980-03-31    0.0
1980-04-30    0.0
...
2016-06-30    435.0
2016-07-31    435.0
2016-08-31    439.0
2016-09-30    440.0
dtype: float64

since_accident = count - resets
since_accident

time
1980-01-31    0.0
1980-02-29    1.0
1980-03-31    2.0
1980-04-30    3.0
...
2016-06-30    2.0
2016-07-31    3.0
2016-08-31    0.0
2016-09-30    0.0
dtype: float64

Plotting the three series makes it clearer what exactly the trick was.

count.plot(figsize=(12,5))
resets.plot()
since_accident.plot()
plt.legend(['count','baseline','periods since accident'],loc='best')
plt.ylabel('periods');
plt.xlabel('date');
plt.title('Periods since severity 1 accident in Amsterdam');

We've done it! What's nice about this trick is that we don't have to loop over all the accidents, so it scales well to larger data sets. To finish up, we do a groupby without aggregation to get the same information for all the accident types.

def periods_since_accident(group):
    g = group.copy()
    count = list(range(len(g)))
    count = pd.Series(count, index = group.index)
    for col in g.columns.tolist():
        resets = count.where(group[col] > 0).fillna(method='pad')
        g['periods_since_'+col] = count - resets
    return g

df_report = df_periodic.groupby(level=0).apply(periods_since_accident)

report_cols = ['periods_since_severity_1',
               'periods_since_severity_2',
               'periods_since_severity_3']

df_report.loc['Amsterdam'][report_cols].plot(figsize=(12,5));
plt.title('Periods since accident in Amsterdam');
plt.xlabel('date');
plt.ylabel('periods');

We can even add one final column with the number of periods since any accident, just by taking the minimum of the other three columns.

df_report['periods_since_accident'] = df_report[report_cols].min(axis=1)
df_report[['periods_since_accident']]

879 rows × 1 columns

Happy incident tracking!
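The accident table itself isn't shown above. For anyone who wants to run the code end-to-end, a synthetic frame with the structure the code assumes (a datetime index named time plus location and severity columns) works as a stand-in; the locations and values below are invented, not the original data:

np.random.seed(0)
n = 500
df = pd.DataFrame({
    # random dates spread over roughly 1980-2016
    'time': pd.to_datetime('1980-01-01')
            + pd.to_timedelta(np.random.randint(0, 36 * 365, size=n), unit='D'),
    'location': np.random.choice(['Amsterdam', 'Rotterdam', 'Utrecht'], size=n),
    'severity': np.random.choice([1, 2, 3], size=n),
}).set_index('time').sort_index()

df.head()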
I have created a simple semaphore initialization example for demo purposes:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <unistd.h>
#include <stdlib.h>

int init_semaphore(int semid, int semnum, int initval)
{
    union semun pack;
    pack.val = initval;
    return semctl(semid, semnum, SETVAL, pack);
}

But I am getting the error:

error: aggregate ‘init_semaphore(int, int, int)::semun pack’ has incomplete type and cannot be defined

I am not able to understand why the compiler is throwing the error. The headers are also included properly.

Answer: You have to explicitly declare union semun yourself. Per the POSIX standard for semctl(), the fourth argument, when it is required, is of type union semun, which the application shall explicitly declare; the Linux man page gives a concrete definition, shown below.
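The man-page definition the answer refers to is, in essence, the following; declaring it before init_semaphore() resolves the incomplete-type error (the __buf member is Linux-specific):

union semun {
    int              val;    /* Value for SETVAL */
    struct semid_ds *buf;    /* Buffer for IPC_STAT, IPC_SET */
    unsigned short  *array;  /* Array for GETALL, SETALL */
    struct seminfo  *__buf;  /* Buffer for IPC_INFO (Linux-specific) */
};

int init_semaphore(int semid, int semnum, int initval)
{
    union semun pack;
    pack.val = initval;
    return semctl(semid, semnum, SETVAL, pack);
}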
I have a poem and I want the Python code to just print those words which are rhyming with each other. So far I am able to tokenize the poem with wordpunct_tokenize() and look the words up in cmudict.entries().

Answer: Here is a way to find rhymes for a given word using NLTK: a helper rhyme(inp, level), where inp is a word and level means how good the rhyme should be (a sketch of such a helper is given below). So you could use this function, and to check if 2 words rhyme you could just check if one is in the other's set of allowed rhymes:

def doTheyRhyme(word1, word2):
    # first, we don't want to report 'glue' and 'unglue' as rhyming words
    # those kind of rhymes are LAME
    if word1.find(word2) == len(word1) - len(word2):
        return False
    if word2.find(word1) == len(word2) - len(word1):
        return False
    return word1 in rhyme(word2, 1)
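The rhyme(inp, level) helper itself is only described, not shown. One common way to write it with NLTK's CMU pronouncing dictionary (a sketch, not necessarily the original poster's code) is to compare the last `level` phonemes of each pronunciation:

import nltk  # requires: nltk.download('cmudict')
from nltk.corpus import cmudict

def rhyme(inp, level):
    entries = cmudict.entries()
    # all pronunciations of the input word
    syllables = [(word, syl) for word, syl in entries if word == inp]
    rhymes = []
    for (word, syllable) in syllables:
        # words whose last `level` phonemes match are treated as rhymes
        rhymes += [w for w, pron in entries if pron[-level:] == syllable[-level:]]
    return set(rhymes)

Note that cmudict.entries() is large, so this is slow if called repeatedly; caching the entries list (or building a dict keyed on the final phonemes) speeds doTheyRhyme() up considerably.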
- web.config - No cookie timeout in forms authentication - Windows and Macintosh developer's IRC channel - dotNet community IRC Channel. - web.config configurationsectionhandler GAC - Javascript in a .net page - set the focus to a control in an ASP.NET during the page load - Solving connections pool problem - Expanding a <div> - New records in Datagrid to be inserted into SQL table. URGENT! - How can sort the item in dropdownlist box? - Comparison? ASP.NET vs good old ASP with C++ COMs? - How can get the event source from postback action? - Server Error in '/MyApp' - Coding Standard for C#, ASP.net - Can't get stupid Authentication to work!! - DataTable into a new SQL table - Quest: Problem with Form Authentication w/ from MSDE SQL DB... - Can Calendar Control days be hyperlinks? - Download File - drop authentication - Sharing ASP session variables with ASP.Net using HTTPWebRequest - Frameset prevent Asp.net authentication? - Disabling auto-populating of form elements - n-tier tutorial - IIS 6.0 Logging - Partially Disable DataList ViewState - How close opening window? - GroupName in RadioButton Server Control - Setting events for button within asp:table - Debugging VB6 DLLs from ASP.NET - Event On Form From Control Event - Page Scraping of Authenticated pages with CredentialCache.DefaultCredentials - combo box control in ASP.NET - Large Project Examples - datetime - HttpContext.Current.Request.UserLanguages - Can I display my status for MSN or Windows Messenger on my website? - UrlEncode-ing System.Web.HttpCookie values - Databinding event of Dropdownlist does not fire - 'unable to start debugger' message - web user control - MustOverride Event - Error when add a new row using footer of datagrid that contain asterisk (') - edit stored procedures in vs.net - code for Paging - Process Starter - IIS 6 vs IIS 5 ASP.NET Performance Issues - Retrieve a distribution list from exchance 5.5 - Best way to keep a hidden value? - problem formatting URL in script - ASP.NET domain hosting with database access for a good price? - Required Field Validator - Can you install CDO seperatly? - Displaying TIFF with ASP.NET/C# - MemoryStream throws GDI+ Generic Error - Move jpeg from SQL Server into WebForm - Help: Copy Project Error - How to set permission to write to an xml file - installation - CDO, Exchange 5.5 and asp.net - conditional items in a datalist - Events generation and handling within a server controls. - Open WebForm to size from code behind - Page Timeouts - HELP? - The same web.config and global.asax files - Using the POST method with HttpWebRequest - Multiple TableCell.Conrols.Add(control) - How <BR> between them? - User conrol reading Request.Form - How to use C# class in VB project - Problem starting ASP.net - newgroups plugin to website - How to call Calendar DayRender event? - Roles - Shell function - Nested selecting data from table - IsPostBack always returns False - Response.Write and Response.Redirect - smartnavigation problem - How to set column width in a table object ? - Best Copy Protection Methods - Server Error in '/MyApp' Application. - Migration from Vs 2002 to 2203 - Rewrite URL with Session possible? - IIS Log - closing browser window from server code - Visible property of controls in data list - Culture - Path error when enabling sessionstate cookieless = true - Disabling ViewState on a Repeater Control - Coding standard - SmartNavigation + AutoPostBack - Deciding Data Access Strategy - Brainstorm with me on this - Closing out database connections. 
Timeouts - Help Please, How to Include .JS file into aspx page. - Export data to .PDF? - Simple DropDownList/ComboBox question - Excel from ASP - Codebehind unaware of select box members populated via javascript - Basic Javascript question - response.redirect & Cookies - Passing value into DataBinder.Eval(Container.DataItem, "Property") - User.Identity.Name - Changing Default Inherits. - conditional controls in a repeater - Unable to access controls within a Repeater - Binding hierarchical data - SQL Temporary Table Concurrency - request in namespace - FormsAuthenticationTicket - EditCommandColumn validation - How do you interupt a long-running database query? - Returning Identifier on a database Insert command - tree like structure - Reference material - DataGrid-driven SQL updates - Session Scope, over domains and secure connections - Spawn new browser from server process - Row spacing in DataList - Session Restart - How to avoid script database hacking? - HTML table on aspx web form - CustomValidator against two controls in EditItemTemplate on a DataGrid? - How do I change a listitem font color? - Form edits - Determine if a page is up or down. - Download file - Cookie vs HttpCookie - Accessing a URI programmatically - cannot access Session Variables - ASHX handler using codebehind ? - SQL database table ot SQL schema - Tracing ASP.NET - Request.Form("__EVENTTARGET") = "" ??? - how to do early-binding on repeater - Best way to load/store web site settings in database - Calling out-of-proc COM object from ASP.NET - problem on validating aspx pages - Forms Authentication - Compatibility issues in Web Based Development using ASP.NET, C# with Win-XP. - Set Table Width at runtime - Weird behavior on page - Is this safe? - datatable sort - form submit - datagrid-redraw - datagrid-focus - Valdiation Behaviour - IRC Channel for dotNet community - DropDownList in a Repeater - Help me with the Treeview Web Control! - Kill asp_wp process - send email internally - .Net scalabilty - How to dynamic build circuit diagram by ASP.NET? - asp.net user - Set focus on textbox - displaying number of results found ??? - adapter update problem Syntax error in INSERT INTO statement. - Convert PDF to Image (JPEG) - ASP.NET Programming - asp to asp.net - Losing Formatting in Browser - Problems using localhost - validate aspx page. - filterbar for grid? - validationSummary always show the HeaderText - asp.net with IE 5.0. - Transfer-encoding appears on my ASP.Net Web Page - Server Unavailable - how to create a flat user control (dropdownlist) ? - Programmatically create buttons from user control - Crystal reports on web - DropDownList question - checkboxlist - timers - Problem Uploading 10 meg files using built in.net file upload or aspupload - absolutely positioning user control - sql server does not exist - sending emails with hebrew characters. - one more question. WebRequest from asp page - string performance - help needed with popup window - asp.net label creation - Invalid ViewState on PocketPC 2002 Browser - Button Click Event Data Not available in time - Reload after going back - Use the DataList Control to Present and Edit Data PART 2 - Use the DataList Control to Present and Edit Data - Frames in ASP.Net - GC on a Session Variable - Hyperlinking and ASP.NEt Session - requiredFieldValidator doesn't work on netscape. - customValidator doesn't work - Authentication question - repeater + hierarhical data - schedule a time based event - dotnet controls seen much differently on IE and on netscape. 
- second post - Webforms in Outlook..? - SmartNavigation doesn't work on published page (Newbee q) - several applications in a virtual directory - Dynamic Controls - Server.Transfer() causes an error - redirect errors to a debugger - IE File Download Reanme Issue - Session Time out - some questions about asp.net - ASP Life Cycle Problem - retrieving data from mysql thro c# - Copy SQL 2000 Database - Installing ASP.NET 1.1 on IIS - How to deny linking images on my site from other sites? - Simple Page Template with Form Problem - Limit RSS Feed - developing a web application - datagrid-expressions - Create Oracle Trigger through C# Web service - HTTP headers and Response.Redirect - Referencing DLL in ASP.NET - Looping Through Controls In DataGrid - Quest: Error connecting to MSDE hosted DB... - Strange Javascript Cookie Behaviour - HTTP Posts - Validation Controls - Non-displayed values in DataGrid that stay accessible in EditCommand event? - Preventing page reload when using control (newbee q) - mixing Javascript and VBScript in ASP.NET? - XML file and asp.net application - How to test Webservice on the Web? - Running win scheduled job - White Space - QueryInterface for interface Excel._Application failed - Printing problem, Datagrid and IE - Simple question about cookie: - Is it possible to access another Page in code-behind - reg expression for date validation - Microsoft.Jet text file query - IE Browser and ASP.NET - Page Process Web User Control - ASP.NET/Windows Server 2003 Security Problem - ASP Life Cycle Problem - update decimal, integer in datagrid? - How To Create a JavaScript Confirm inside a Condition in ASP.NET - weird white space being introduced into <td> - A page can have only one server-side Form tag - Escape characters - Page titles - RegEx Validation Control Bug?? - Web User control and Marshelling b/w client browser and web server(database). - Variable Scope (ASP.Net) - SQL DTS from webserver .dll? - user control - Calling Java applet from a user control - Datat Base Option and Performance - showModalDialog and cookies - asp.net debugger question - XPathNavigator oddities - Retrieve asp.net session variable within HTA page? - Cost for using Server control - Wanted to try ASP.NET, but ... - Error message I don't understand - Excel on a ASP.NET page - HELP - Dynamically updating parameters - Response.Expires and Response.Cache - databinding question - One code behind page acting differently - GRRR! Pages submitting with Enter key, combined with .ASCX pages - help! - flowlayout vs. gridlayout - C# Web App with Lotus Notes datasource - Click-Event by dynamical Button - IE offline - System.web.mail truncating message body - strange behaviour ?? - Clear Session on page termination - datagrid columns-urgent - Windows username... - Internationalization Problem - Session_End not firing or limited? - alert using Javascript with asp.net - disable button on click - datagrid query - Problem with connection from ASP to SQL server - Creating a Explorer PlugIn - Executenonquery method - Impersonation with forms authentication ? 
- Different web.config's for development and production - SOS - SourceSafe - Using Microsofts File Upload ActiveX Control - Forms Authentication - Using MidB, InstrB, lenB in ASP.NET - Help: Slow sending emails via SmtpMail/MailMessage - Smartnavigation disables ActiveX Events (OWC) - AxWebBrowser control - Debugging error is ASP.Net - Smartnavigation problem - Global Variables - Simple User Control Question - Encrypting the source code - Same sessionID retuned to diff browsers in diff machines - need XML help with nested tables - ASP.Net win form responding to Enter key. - button to only execute javascript and not postback (using code behind) - debugging asp.net applications remotely - please help - Measure Performance Tuning Efforts - Looping through dataset outputting HTML radiobuttons - Quest: Adding a new Web Form to ASP.NET app - Always One checkbox checked - Web application creation error: ASP.NET app are locked down in IIS - flat dropdownlist - No matter solved it - MsWord With Asp.Net on theserver - Java/Vb Script syntax for manipulating word client side - Localhost Not allowing me to create aspNET apps - Using tab on asp.net application - Error page + application_error sub - Object reference not set to an instance of an object. HHHEEELLLPPP. - Corrupted ViewState (Yes, another issue concerning viewstate) - controls and performance - Recursive Directory Structure - System.IO - Dynamic control array & Command Event Handler - Add reference to Active X control - SmartNavigation Crashing IE with Content Advisor Enabled - Div's and character limit - Changing a form's action in code behind - forms authentication question - Asp.Net/Access 2002 text formatting problem - get_aspx_ver.aspx errors ? - Interaction between 2 webforms - Quest: ASP.NET access to Database without any SQL Server... - how to package web user controls? - Is this the correct Multi-threading approach - Trying To Query Remote Machine Using WMI & ASP.NET Fails With "Invalid Class" - what is .NET Remoting? - Page Formatting Issue - Please Assist - Parser Error - Help with DataTable update (BeginLoadData method) - Passing arrays from Classic ASP to .NET components - How do you do this transparently? - datagrid - PHP Serialized array - Shopping Cart - database vs session - Load Balancing a .NET WebFarm - Remove row with null value from DataSet?? - DropDownList SelectedIndex value is not working for the last item. - Product to automatically convert VB.Net projects to C# - Beta Testers wanted - Opening MS Word from a VB.NET ASP Page Access Denied - ASP.NET and Java - ASP.NET is not authorized to access - Automating Word - Regular Expression Validator seems to be wrong! - Visual inheritance... - Delegate overloading - Documentation - Timed redirect of web page - Formating a Date as it is entered? - double click and datagrid - Event handler for multiple command buttons - Invalid Cast Exception on Context.ApplicationInstance - message filter indicated that the application is busy - VS.net breakpoints - Drop down list inside a repeater - How to add attachment file to a form - Textbox cursor behavior is bad - File Upload - SSL Response.WriteBinary() - Importing an existing Application - Placeholder and positioning - Appending a DataSet - how to open a browser window without the close-maxmize-minimize bar? - Radio Button List Query - How to retrieve image size? - Am I missing something here ?? 
- server is not running asp.net version 1.1 - Forms Authentication Weirdness on Local Network - Session variables in shared class - 2 questions concerning installation - Server.Transfer in User Control only fired on second click on button - Printing from an asp.net page - datagrid query-binding problem - Adding attributes to page button of datagrid - Trouble in Repeaters - HttpContext.RewritePath - I need crypt and decrypt basic sample - How to deploy asp.net with vb6 dll to other machines - Impersonation when calling com dll - ViewState - clone a datagrid - ByVal and ByRef - Binding Datagrids To a collection - client side browser information - Page Reload - client browser information - Open file dialog in Web App ( C# ) - Problem with Multiple Validation Controls and one Form Tag. - HTTP Upload Question - How to communicate with Java programs on database server? - How do i change background color to transparent - Lotus Notes - Forms Authentication Cookies Never Expire - Issue with writing to Excel - IIS issue with asp .net 1.1 - SmtpMail.Send Error - Trouble with OleDb data extract - Whidbey - No files visible when 'Opening project from web' - Specifying all files except Login.aspx and Join.aspx are restricted to authenticated users - Create help files/Online help in .NET - System.Web.HttpCompileException - Compilation Problems - HTTP 404 Error for ASP.NET pages - Datagrid Item Delete Confirmation not working - learning asp.net - proper method of development and then deploy - How to break line in Label control - Problem with XML WebControl, and css stylesheet link - Console.Write - Error in debugging after installing Frontpage 2000 - Paragraph Name - datagrid grouping? - calling popup window from page_load - ASP.NET Permissions (I think) - Using Process class to run client executable? - Is server.transfer the same as response.redirect?-eom - Repeater- Accessing the parent row values from a child. - cookies wont create - Forms Authentication - "Shared" members as globals?
- Connecting VS.NET to existing ASP.NET web - Using panel control in Web forms - calling popup edit window from datagrid - Asynchronous Web Method - Request Timeout - Excel as a datasource revisited... - Tool Tip Class - 3rd Party Web Reporting - Session over from nonsecure to secure - httphandler and Output caching - Client validation not working with XML data islands in tables - Tips and Tricks: Page Templates - how to debug javascript in vb.net - PowerPoint 10.0 Object library access and Office instalation. - "Access to the path "...PortalCfg.xml" is denied - A problem in storing HTML in database or a problem in finding the right reporting solution? - Combining Two Data Sources for one Data List - Changing the Icon of a Window - enumerate Users in Activedirectory group - Best Method to return data from a DAL - IsNull Test ??? - Help?? The page cannot be refreshed without resending ... - Getting "not enough disk space error" when there is plenty of disk space - Client-side Word Automation from asp.net - Creating an ASP.Net Web App to Win2003 - Webform controls events - EOF .NET equivalent? - server needs to know when user closes browser window - SourceSafe.... - accessing files on intranet - Postback Problem - Has Anyone - debugging - AttachUrl disappeared in .NET ? - Using Shared Functions - Repeater- Accessing the parent row values from a child. - VS.Net And FrontPage - Connecting to SQL Server DB - Error-- Specified cast is not valid - save page scroll - Old (pooled) ASP pages makes dllhost.exe to hang after inst. .Net 1.1 - How to have One RequiredFieldValidator to Validate Two TextBox - Principal/Identity questions - Changing background-color of a ListItem - Change Page Event? - howto make app1 sessions not expire while app2 is being use... - Change control property in code - error #BC30002 - Validation for user controls - Checkbox.CheckedChanged has a mind of its own - HowTo Bind a Imagebutton as default - Open new window and get values from previous window - Datagrid Edit events all screwy... - Hide ASP page - Javascript in webuser control - Can't make two request at once, one blocks all..... - Page Closure - DropDownList (bind) - How to use the Control - how do I get the value of httpruntime.maxRequestLength through code? - grid query - Check if uploaded jpg uses CMYK or RGB pallete?? - cell double click - How? Open image, add text, then "stream" into <img src="...">? - permissions when authoring in VS - deploy msi without web.config or with more IIS settings? - Drawing - Control of the Page Cache - Reflection in ASP.net - Error While Trying to Run ASP.Net Project - Set the selected item of a listbox or radiolist - Tab within ListBox item - Triger event in winform from webform - ASP.NET, threading and singleton - How do I pass a NavigateURL result to another web form - Editable Dropdown List - REQ: Help making website - .NET/JAVA Interoperability - how to add access database to asp.net application on server - VS 2003 can only connect to local host when connected to intenet - can you bind to backcolor/forecolor using VStudio Webapp? - Footer CustomValidator - article on ViewState
https://bytes.com/sitemap/f-329-p-165.html
CC-MAIN-2020-45
refinedweb
2,984
50.63
An Introduction To Python Objects Using IPython (Part-2) [ID:009] (2/3) in series: An Introduction to Python Objects, using IPython video tutorial by Jerol Harrington, added 03/07 Our authors tell us that feedback from you is a big motivator. Please take a few moments to let them know what you think of their work. In this ShowMeDo we show you how to create your own objects and methods, how to get a handle on self and even venture into ... namespaces Got any questions? Get answers in the ShowMeDo Learners Google Group. Video statistics: - Video's rank shown in the most popular listing - Video plays: 2534 this video. This one was helpful too, thanks. But I was left wondering what happens if you instantiate a class without attaching a name (i.e. label) to the object? e.g. class aaa: pass >>>aaa() I'm sure the class is instatiated, but I'm not sure what happens after that. Am I able to access the instantiated object with some clever trick by listing all active objects, cross referencing the objects with bound names, and attaching labels to the nameless objects? Or, are they as-good-as-gone, just left for the GC to come by and clean up? I'm fairly confident it's the latter, but I wonder and I'm not sure how to check as, after all, I don't have an object to id() :) Hi Patrick, objx.py is just a script that contains objects such as classes. I use it with the IPython "run" command to load the script in order to demonstrate different qualities of Python's classes. I use IPython all the time with tiny code fragments to test ideas or to clarify points of confusion. If you are learning the language, I suggest running a Python-aware editor, IPython, and the script in the same directory. Then you can just toggle between IPython and the editor. This is what I do in the tutorial, and what I do for most of my personal programming. Thank you for your comments. Jerol Noticed the videos were made back in 2006. I just started with Py coming out of Java and was confused but the videos helped 99%. Rating = Excellent. The 1% was the loading of "run objx.py". Is objx.py a class with inner classes in it or just a holder? Very good and easy to understand, but you need to go a bit faster. I learned you can call a method from the class rather than an instance, thanks. I was practicing a bit about inheritance with your exercises and it can be even more things to find out and more questions coming over, but thanks to your introduction (and some mistakes in my typing...) I am understanding much more the way that classes and objects are defined. Evaristo good Best discussion of objects I've ever seen!!!!! Very good. So simply put. great way to illustrate inheritance in python cool! that was some good info for me. Now I can impress my friend with a solution to a coding dilemma he is currently facing. Sorry Jerol. I found this presentation confusing. I'd like to have had a clear distinction between classes and instances (were the classes all 3 letters and the instances all 1 letter???). Thanks! Very good tutorial, clearly explained. Very, very good content. This was great at understanding the way data flows which I needed before getting started. Your series has filled in more gaps for me then anything yet. Jerol has the nack for translating difficult concepts into easy, yet insightful, explanations. The part about "self" being passed as an argument was a bit hazy. Otherwise, this was one of the best explanation about inheritance that I have seen. Thank you. Useful. Many thanks. Great video, very easy to follow. 
My only complaint is that the mouse clicking and keyboard typing is very loud, and a little distracting as it is not really in sync to the image. Excellent explanation. Very basic in scope, but handles the necessities of understanding class objects very well I would appreciate more detail and more depth with perhaps a level more complexity. Thank you Jerol! I have been learning python for a couple of weeks now, and this is by far the most helpful. I finally understand inheritance!! Good tutorial, I didn't really understand inheritance, this video made it more clear. Thanks. Hope you do more. Hi This was sooo goood. You made it simple and easy to understand. trying to understand this stuff by reverse engineering sample codes can be frustrating/ Hi This was sooo goood. You made it simple and easy to understand. trying to understand this stuff by reverse engineering sample codes can be frustrating/ Hi DiggDog3000, Thanks for the comment. I didn't mean to imply that passing an instance directly to a class object was a normal procedure. Rather, I wanted to prepare the viewer for the next screencast, which illustrates __init__. If we use __init__ to initialize the subclass, we must also pay attention to the superclass, if applicable. The superclass's __init__ method is not called automatically. We can do this by passing an instance of the subclass directly to the superclass's __init__ . All that I was trying to do was show how we could pass an instance of a class or subclass directly to a class (unbound) method. Note that we go up the inheritance chain. It doesn't work the other way. (xxx.show(y) = OK yyy.show(x) = ERROR). Note that we get xxx.show's method "This is xxx", not yyy.show "This is yyy". This is why your test case failed. Keep experimenting with the shell. That's the best way to learn. Also, maybe the next screencast in the series will clear things up a bit. Good Luck! Jerol Thanks Jerol! I got a little lost at the end. I was wondering if you someone could explain the significance of zzz.show(z). I see we have a class, using its method that has been passed an instance of itself. Are we saying that self must be any instance of the class? - I tried passing an instance of xxx to zzz.show(), but that didn't work. And when I made an instance of zzz called w -ie [ w = zzz() ], this worked (ie. zzz.show (w). I have been wanting to learn more about objects...this is a good start. xxx, yyy, zzz made for a very clear example of inheritance of class attributes through successive generations. Thanks! xxx, yyy, zzz, shows inheritance of classes through successive generations very clearly. Thanks! Review of An Introduction To Python Objects Using IPython (Part-2) Naming all the samples things like xxx, yyy, zzz, and aaa made things confusing. Especially when you name a class xxx and then set x to be an instance of xxx. Please use names that mean something in the future. Just as a variable should be named num_rows instead of n, class names even -- especially -- in samples, should not be named xxx. Again, the off-the-cuff presentation style made it harder to follow. If a video is going to be available for years as a way to teach new people, it should have some pre-show work done before hitting the "record" button. Shawn very straightforward, very intelligible. nice work, thanks for the efforts Dear Jerol, thank you very much for your great videos about python objects using ipython. It was interesting for me and I had a lot of fun visiting your ShowMeDo's. 
English is not my native language but I can understand you very well. I would be happy if you continue with your nice tour through Python. With kind regards Swen
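One of the comments above asks what happens when a class is instantiated without binding the result to a name (the aaa() example). A small sketch for checking this yourself in plain CPython - the class name Demo and the use of weakref here are illustrative, not part of the tutorial:

import weakref

class Demo(object):
    pass

# Instantiated without a name: nothing refers to the new object, so in
# CPython its reference count drops to zero as soon as the statement
# finishes and it is reclaimed straight away.
Demo()

# The same effect observed through a weak reference:
probe = weakref.ref(Demo())
print(probe())        # None - the unnamed instance is already gone

# A bound instance stays reachable for as long as a name refers to it.
d = Demo()
probe2 = weakref.ref(d)
print(probe2())       # <__main__.Demo object at 0x...>
del d
print(probe2())       # None again once the last reference disappears

So, as the commenter suspected, an unnamed instance is effectively gone once the statement completes; there is no supported way to enumerate it later and attach a label to it.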
http://showmedo.com/videotutorials/video?name=IntroductionToPythonObjectsUsingIPython2_JerolH&allcomments=1
CC-MAIN-2014-41
refinedweb
1,303
75
Get focus on a popup

In order to have a QPushButton with a down arrow I have used QPushButton.setMenu to add a dummy menu to the button. Then I show a popup (QWidget based custom widget) on the aboutToShow() signal of the dummy menu. This works except the popup does not get the focus and must be clicked twice, once for focus, once again to activate button on the popup. The focus appears to remain with the QPushButton as its appearance changes when I click in my popup widget. Is there a way that I can force the popup to get the focus when it is shown? I have tried setFocus() but that doesn't work. If I remove the menu from my button and open the popup from the clicked() signal then it works properly, but I don't want to do that unless I can somehow also have the same down arrow on the button that you get when you add a menu.

That's weird. It should get focus when the widget is shown. Can you try with just a simple QLabel or QPushButton or something instead of your custom widget. So when you would normally show your custom widget show a QPushButton instead as a test and see if it gets focus. That will narrow down the problem on whether it's your custom widget or how it is being displayed.

I tried what you suggested using a QPushButton and it turns out that the behaviour I am seeing is caused by setting the Qt::Popup flag. If I use Qt::FramelessWindowHint instead then the popup gets the focus on opening but doesn't close when another widget gets the focus. This is the same whether I use my custom widget or QPushButton. Also this is on OSX 10.8 with PyQt and Qt 5.2.1, in case that matters.

Can I see the code where you open the popup? I've never used PyQt but it shouldn't really matter for what we're talking about here. And 5.2.1 is new enough for bug fixes and such. I have used popups with Qt 5.2.1 and OSX 10.9 and haven't seen this behavior.

Here is a simple example that shows the problem:

@
import sys
from PyQt5 import QtWidgets, QtGui
from PyQt5.QtCore import Qt

app = QtWidgets.QApplication(sys.argv)

title = "Focus test"
w = QtWidgets.QMainWindow()
w.setWindowTitle(title)

def onAboutToShow():
    p = QtWidgets.QPushButton(w)
    p.setText("A button")
    p.setWindowFlags(Qt.Popup)
    p.move(QtGui.QCursor.pos())
    p.show()

b = QtWidgets.QPushButton()
m = QtWidgets.QMenu()
b.setMenu(m)
m.aboutToShow.connect(onAboutToShow)

w.setCentralWidget(b)
w.show()

app.exec_()
@

I'll write some test code tomorrow and see if I can find out why this is happening. Don't have time to do it tonight. :)
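A possible workaround, offered only as an untested sketch against the setup described above (PyQt5, Qt.Popup on OS X): explicitly activate and focus the popup after showing it. The extra three calls are the only change from the handler in the example; whether they change the Qt.Popup behaviour on that platform would need to be verified.

def onAboutToShow():
    p = QtWidgets.QPushButton(w)
    p.setText("A button")
    p.setWindowFlags(Qt.Popup)
    p.move(QtGui.QCursor.pos())
    p.show()
    # Ask the window system to make the popup the active window,
    # then move keyboard focus onto it.
    p.raise_()
    p.activateWindow()
    p.setFocus(Qt.PopupFocusReason)

If the popup still refuses focus after this, the Qt.Popup window flag itself is the likely culprit on that platform, as the thread suggests.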
https://forum.qt.io/topic/44077/get-focus-on-a-popup
CC-MAIN-2019-18
refinedweb
477
75.3
The classical introductory exercise. Just say "Hello, World!". "Hello, World!" is the traditional first program for beginning programming in a new language or environment. The objectives are simple. If everything goes well, you will be ready to fetch your first real exercise. On the use of return: for a discussion see here, or as a quote from that discussion: Don't use return, it makes Scala cry. This is an exercise to introduce users to using Exercism. It's possible to submit an incomplete solution so you can see how others have completed the exercise.

import org.scalatest.{Matchers, FunSuite}

/** @version 1.1.0 */
class HelloWorldTest extends FunSuite with Matchers {

  test("Say Hi!") {
    HelloWorld.hello() should be ("Hello, World!")
  }
}

/**
 * Created by adam on 8/22/16.
 */
object HelloWorld {
  def hello(name: String = "World"): String = s"Hello, $name!"
}
https://exercism.io/tracks/scala/exercises/hello-world/solutions/22185250a4224e2897aff2d12e65cfa2
CC-MAIN-2020-24
refinedweb
132
52.87
Python Programming, news on the Voidspace Python Projects and all things techie. Testing Tests At Resolver we have some tests for our test framework. For example: FunctionalTestTest is a test that tests tests. In the test setup it sets-up the test that it is going to test. My sincere apologies to anyone who actually read this... Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-08 16:45:41 | | Categories: Work, Python, Hacking, Fun Further Polish Adventures: Academic Computer Science Festival Thanks to Jan Szumiec (whom I mentioned in the previous entry) is a student at the Technical University of Cracow. He is organising this year's Academic Computer Science Festival [1] from the 8th to the 10th of March. I've been asked to speak there on IronPython. I was his second choice to speak, but I'm not too insulted as I'm really looking forward to the opportunity to see Cracow which is said to be a very beautiful city. As the only Polish folk I know both program natively in Ruby, it's a little daunting. I'll be presenting to people who natively both speak and program in a different language. Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-08 14:45:57 | | Categories: Writing, IronPython, Python Interesting Things Here are three interesting things I've stumbled across recently. They've all been linked to elsewhere, so my apologies. Compile C# From IronPython Some example code from a gentleman allegedly called drifter46and2. It shows how to compile C# code from IronPython and get a reference to the assembly. I haven't played with this yet, so I don't know how slow it is. It also appears to save the generated assemblies to disk, keeping them in memory would be more interesting to me. The reason this looks interesting to me is that there are two limitations with IronPython that this technique potentially overcomes [1]. IronPython is a Python compiler - this means it compiles Python source code to assemblies [2]. However the .NET garbage collector doesn't collect class definitions, so IronPython reuses a single class for all user defined classes. This means you can't create an assembly that can be consumed from C#. Instead you have to create a stub C# class yourself, and hook up the methods. By generating and compiling C# dynamically you could automate this. The second limitation this causes is that you can't use .NET attributes, and again have to use stub C# classes. IronPython now Included in Mono 1.2.3 As well as several fixes to the Windows Forms implementation, this version of Mono now includes the IronPython Community Edition. W00t! I have two Polish friends who are Ruby developers. One of them (Jan rather than Andrzej) was of the opinion that Ruby was more object oriented than Python because you are able tack methods onto the built-in classes. Of course the 'Python Philosophy' (tm) is that allowing this is a crazy recipe for disaster. Generally the opinion in the Ruby world is that disaster doesn't happen very often and it is a technique that can be very useful. Well, disaster can happen, and the article on method_missing is a tale of how one Ruby programmer came at least temporarily unstuck. I like Simon Willison's comment: My least favourite thing about Ruby is the cultural tendency towards introducing weird new bugs in other people’s code. Do I sound smug ? Like this post? Digg it or Del.icio.us it. 
Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-08 14:28:38 | | Categories: Python, IronPython, General Programming, Hacking Wing at Resolver We've finally browbeaten Giles sufficiently and he's bought three of us at Resolver shiny new Wing IDE licenses. One of us is using Eclipse with the free PyDev extensions and two of the developers here still use Textpad. Eclipse has some really nice features, but also some annoying ones, and the ease with which you can script Wing was a big win for me. A gentleman called Brandon Corfman has published an interesting few blog entries on scripting the Komodo IDE, mainly using Bicycle Repair Man which is a refactoring tool for Python. Refactoring is something that Eclipse is particularly good at (coming from the Java world) and the couple of times I've used its Extract Method it behaved very intelligently. There are two of Brandon's Komodo scripts that I'd like to port (or see ported) to Wing: allows you to rename all occurrences of a variable or method name wherever it appears in your code. allows you to highlight a section of existing Python code in Komodo and refactor it into its own separate method. Porting both should be well possible. We also have a custom PyLint setup that checks our whole codebase, we run this before checking in (it checks for unused imports and unused variables amongst other things). Being able to run this on a single file from the IDE will be very handy. Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-08 14:04:39 | | Categories: Python, Tools, Work Programming Language Metrics Ok, so programming language metrics aren't worth the paper they're printed on, but they're still fun. I've just stumbled across a metric I'd not seen before: It charts various languages by the number of sourceforge projects, and goes from 2001 to the end of 2006 when sourceforge stopped providing this information. The results (unsurprisingly) show Java, C++ and C way out front, followed by PHP (a rising but flattening curve) and a gracefully declining Perl. Python comes next, at 4.88%, and is a full point and a half above C# and then Javascript. As this charts Open Source projects it is most closely a metric of language popularity rather than 'industrial' usage. It reflects what programmers like to do in their spare time. The fact that Python leads C# probably also reflects the fact that Python is much more prominent in the open source world. There is a surprise, Ruby is shown as a relatively flat 0.44%. This is possibly because Ruby developers tend to use the popular RubyForge rather than sourceforge (which seems to be declining in importance itself). The results are not dissimilar from the TIOBE Programming Community Index. By gathering data from search engines it is also a metric of language popularity. It too shows Python ahead of C#, but puts Visual basic much higher than the sourceforge figures. TIOBE shows Ruby having grown significantly since mid 2006, and catching up on Python, but there are signs that the growth is slowing. The growth curve for Ruby on TIOBE has turned into a 'knee', the curve is flattening off. There is a third metric. The Jobtrends by Indeed.com. I've only compared a few of the languages and it only shows comparisons going back around two years, but it tells a different story. By gathering data from job postings it is much more a metric of industrial usage than popularity. 
It shows C# as rising, now reaching around the same point as the declining Perl, and well ahead of Python. It shows a gentle rise for Python, which is well above Ruby. If you're obsessed with metrics [1] like this, then you might want to look at google search trends. This shows a similar picture to TIOBE (although it allows a maximum of five terms), except it puts C# ahead of Python. Interesting. Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-06 13:05:22 | | Categories: Python, General Programming Akismet 0.1.5 Python Akismet 0.1.5 is now available. Fixed a typo/bug in submit_ham. Thanks to Ian Ozsvald for pointing this out. Python Akismet is a Python interface to the Akismet, spam blocking web-service. It is aimed at trapping spam comments. The Python interface comes with an example CGI. Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-05 21:00:50 | | Categories: CGI, Python, Projects ConfigObj 4.4.0 and Validate 0.2.3 Updated versions of both ConfigObj and Validate are now available. ConfigObj is a Python module for the simple reading and writing of config files. It has many features, whilst remaining easy to use. With the assistance of Validate it can validate a config file against a specification, and convert members to the expected type. Eggs for Python 2.4 & 2.5 are available from the cheeseshop. Thanks to Nicola Larosa who implemented most of the fixes in this release. What is New in ConfigObj 4.4.0? - Made the import of compiler conditional so that ConfigObj can be used with IronPython. - Fix for Python 2.5 compatibility. - String interpolation will now check the current section before checking DEFAULT sections. Based on a patch by Robin Munn. - Added Template-style interpolation, with tests, based on a patch by Robin Munn. - Allowed arbitrary indentation in the indent_type parameter. - Fixed Sourceforge bug #1523975 by adding the missing self What is New in Validate 0.2.3? Fixed validate doc to talk of boolean instead of bool; changed the is_bool function to is_boolean (Sourceforge bug #1531525). Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-04 17:46:35 | | Categories: Python, Projects Firedrop2: Blog Statistics Davy Mitchell has made a nice addition to the Firedrop2 blogging tool I use. It generates a page of statistics about your blog entries, number of words per post, number of posts per category and the like. Formatting aside, it's a nifty little addition. Unsurprisingly 455 of my 606 blog entries are in the Python category. The code hasn't yet been checked in, but you can download the changes as a zip file: BlogStats.zip Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-03 23:56:27 | | Categories: Python, Projects, Blog on Blogging PyCamp UK ? Jeff Rush has just announced that the Dallas and Houston Python User Groups are looking to arrange a regional Python 'unconference': PyCamp. This sounds like a great idea, and Michael Sparks has picked up on the idea for the UK. Anyone up for a PyCamp UK ? I certainly am. Michael is suggesting a venue in Manchester and looking for volunteers to see if anyone is interested... Like this post? Digg it or Del.icio.us it. Looking for a great tech job? 
Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-03 19:44:38 | | Book Progress I've just finished the first draft of chapter two, the Python tutorial. I've wanted to write a Python tutorial for a long time, but that thirst is pretty much quenched right now. It will still need a fair amount of polishing I guess, but the basic content is pretty good. I'd love to share it, and although I'm enjoying having my work assessed by a professional editor I would prefer to be able to make it available to everyone. Surprisingly (to me) the hardest parts are the introductions and the summaries. At some point in the long distant future I'll write another Python tutorial which I'll publish on my site. As you might expect, writing is pretty much taking up all my spare time at the moment. It will be a while after the book is out before I do much more writing, first I'll be taking a good break... Speaking of breaks, Andrzej and I now have our flights and hotel booked for PyCon. I'm really looking forward to it. We're also working on the presentation and a simple example app. to go along with it. Before we leave for the US we'll put the talk notes and the app online. Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-03 19:26:48 | | Categories: Writing, Python, IronPython The Event Pattern Since I started work at Resolver Systems nearly a year ago, Resolver the application has grown quite considerably. It is now just under twenty thousand lines of production Python code, plus a few hundred lines of C# (some of which is autogenerated by the Visual Studio designer [1]). There are also around seventy thousand lines of test code, this is our test framework plus unit and functional tests. The amazing thing is that we have created a highly functional application in only twenty thousand lines of code. Resolver is structured using a fairly straightforward Model-View-Controller structure. The view is our main form class, we have several controllers managing the interaction of the model and the view. The model are our various classes that represent the data-set being edited. We use various other patterns to good effect in Resolver: the observer pattern and the command pattern amongst others. Some of our classes, like the controllers and the commands are data-set observers. When the data-set is changed (like a new one is loaded) they need to be informed. As the main form is responsible for loading new data-sets it sets the new ones onto all the observers. The whole point of the MVC structure (and this post has a point too, I promise) is to decouple the model classes from the view and controller classes. The model doesn't need to know anything about these classes. Changes to the view and controllers don't (in theory anyway) affect the model at all, and vice versa. Note Thanks to Jonathan for a correction. This should read "the View-Controller pairs are allowed to know about the model, though not vice-versa. This means changes to the model can impact the View-Controller pair.". This is why you need to define a clear public API on your model, your View-Controllers need to know how to access the model in order to present a view on the data. The difficulty is that the controllers often need to be informed when the data-set changes, but we still want to maintain the separation. That means the data-set shouldn't need to have a reference to the controllers. At Resolver we use a solution I haven't seen before. 
It may in fact be a part of the classic observer pattern, but it is certainly a natural extension of it. IronPython has a nice way of interacting with .NET events. You add and remove event handlers to events using add and remove in place syntax.

def onClick(sender, event):
    print 'You clicked me'

# Add the click handler to the
# Click event
button.Click += onClick

# Remove it again later
button.Click -= onClick

At Resolver we have created our own event hook class which is used in the same way. I actually introduced this class in my Background Worker Threads article on IronPython. I'd like to briefly show how it can be used to communicate between model and controllers, in concert with the observer pattern.

class EventHook(object):

    def __init__(self):
        self.__handlers = []

    def __iadd__(self, handler):
        self.__handlers.append(handler)
        return self

    def __isub__(self, handler):
        self.__handlers.remove(handler)
        return self

    def fire(self, *args, **keywargs):
        for handler in self.__handlers:
            handler(*args, **keywargs)

Small and cute don't you think ? Let's imagine we have a DataSet class that needs to inform (some of) its observers when it has changed, perhaps to show a modified flag on the GUI to give a trivial example. We have a MainForm class which manages the observers. When a new data-set is loaded it sets the 'dataset' property on all the observers.

class MainForm(object):

    def __init__(self):
        self._controller = Controller()
        self.observers = [self._controller]

    def loadDataset(self):
        dataset = self.magicallyGetNewDataset()
        for observer in self.observers:
            observer.dataset = dataset

We also have a Controller class (initialised in MainForm above). In its handler for the 'dataset' property it registers itself to listen to the 'modified' event on the new dataset. If necessary it unregisters itself from any previous events it may have been listening to.

class Controller(object):

    def __init__(self):
        self.__dataset = None

    def setDataSet(self, dataset):
        if self.__dataset is not None:
            # unregister the handler
            # on the previous dataset
            self.__dataset.modified -= self.onModified
        self.__dataset = dataset
        # register the handler
        # on the new dataset
        self.__dataset.modified += self.onModified

    dataset = property(lambda self: self.__dataset, setDataSet)

    def onModified(self):
        # do something in response
        # to changes in the dataset
        pass

When the MainForm sets the dataset on the controller, setDataSet is called. And so the final piece of the jigsaw, the DataSet class.

class DataSet(object):

    def __init__(self):
        self.modified = EventHook()

    def doSomething(self):
        # code which does something
        if somethingHasChanged:
            self.modified.fire()

In the initialiser it creates an event hook; the 'modified' attribute. When the dataset is set on the observers, any that are interested in the modified flag can register with the modified event. When the data-set is modified, it calls self.modified.fire() and all interested parties are notified. The observers can then use the public API of the dataset (which they hold a reference to) to work out how they need to respond. The dataset itself doesn't need to hold a reference to any of the observers. The advantage of this kind of approach (and I'm sure there are many alternatives) is that you can then re-use your core classes in another situation, hopefully without having to rewrite a line. It does require careful managing of dependencies, and a sensible public API. In the example above you could take DataSet and use it in a tool that doesn't have a front-end at all (like a data converter) or embed it in a web application. Like this post? Digg it or Del.icio.us it. Looking for a great tech job?
Visit the Hidden Network Jobs Board. Posted by Fuzzyman on 2007-02-03 02:18:49 | | Categories: Python, IronPython, Work, General Programming Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
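To see the event hook from "The Event Pattern" entry above actually running, here is a minimal, self-contained sketch in plain Python (no Resolver or IronPython specifics; the DataSet.change method and the handler are illustrative names only):

class EventHook(object):
    def __init__(self):
        self.__handlers = []
    def __iadd__(self, handler):
        self.__handlers.append(handler)
        return self
    def __isub__(self, handler):
        self.__handlers.remove(handler)
        return self
    def fire(self, *args, **keywargs):
        for handler in self.__handlers:
            handler(*args, **keywargs)

class DataSet(object):
    def __init__(self):
        self.modified = EventHook()
    def change(self):
        # pretend some data was edited
        self.modified.fire()

def onModified():
    print('dataset changed')

dataset = DataSet()
dataset.modified += onModified
dataset.change()             # prints 'dataset changed'
dataset.modified -= onModified
dataset.change()             # nothing printed - the handler was removed

The in-place += and -= work because __iadd__ and __isub__ return self, so the modified attribute keeps pointing at the same EventHook after each registration.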
http://www.voidspace.org.uk/python/weblog/arch_d7_2007_02_03.shtml
crawl-002
refinedweb
3,152
66.54
PyTorch Stack: Turn A List Of PyTorch Tensors Into One Tensor PyTorch Stack - Use the PyTorch Stack operation (torch.stack) to turn a list of PyTorch Tensors into one tensor < > Code: Transcript: This video will show you how to use the PyTorch stack operation to turn a list of PyTorch tensors into one tensor. First, we import PyTorch. import torch Then we print the PyTorch version we are using. print(torch.__version__) We are using PyTorch 0.4.0. Let’s now create three tensors manually that we’ll later combine into a Python list. We create our first PyTorch tensor using torch.tensor. tensor_one = torch.tensor([[1,2,3],[4,5,6]]) Here, we can see the data structure. We assign it to the Python variable tensor_one. Let’s print the tensor_one Python variable to see what we have. print(tensor_one) We see that we have our PyTorch tensor, and we see that our data is in there. Next, we create our second PyTorch tensor, again using the torch.tensor operation. tensor_two = torch.tensor([[7,8,9],[10,11,12]]) Then we create our third tensor and assign it to the Python variable tensor_tre. tensor_tre = torch.tensor([[13,14,15],[16,17,18]]) Again, we use torch.tensor and pass in more data. So now that we have our three example tensors initialized, let’s put them in a Python list. So we’re going to use the square bracket construction. tensor_list = [tensor_one, tensor_two, tensor_tre] We put tensor_one, tensor_two, tensor_tre, and we assign this list to the Python variable tensor_list. We can then print this tensor list Python variable to see what we have. print(tensor_list) We see that we have a tensor here, then a comma, then a tensor here, then a comma, and then a tensor there. So we have a list of three tensors. Let’s now turn this list of tensors into one tensor by using the PyTorch stack operation. stacked_tensor = torch.stack(tensor_list) So we see torch.stack, and then we pass in our Python list that contains three tensors. Then the result of this will be assigned to the Python variable stacked_tensor. Note that the default setting in PyTorch stack is to insert a new dimension as the first dimension. Our initial three tensors were all of shape 2x3. We can see this by looking at our tensor_one example that we constructed up here and saying dot shape. tensor_one.shape When we do that, we see that the torch size is 2x3. So two rows, three columns. So the default of torch.stack is that it’s going to insert a new dimension in front of the 2 here, so we’re going to end up with a 3x2x3 tensor. The reason it’s 3 is because we have three tensors in this list we are converting to one tensor. We can check that by using our stacked_tensor Python variable and checking the shape of it. stacked_tensor.shape We see that we get 3x2x3 because there are now three tensors of size 2x3 stacked up on top of each other. We can then print the stacked_tensor Python variable to see what we have. print(stacked_tensor) So print(stacked_tensor) and we see that it is one tensor rather than a list of tensors as before. So we have one tensor, one tensor, one tensor, so there’s a list of three tensors. This time, we only have one tensor. However, it is 3, so one, two, three by 2, one, two, one, two, one, two by 3, one, two, three, one, two, three, one, two, three. So you can see our three tensors have now been combined into one tensor. Perfect! We were able to use the PyTorch stack operation to turn a list of PyTorch tensors into one tensor.
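For reference, the steps in the transcript collapse into a short script. The dim argument shown at the end is not covered in the video; it controls where torch.stack inserts the new axis (dim=0, the default, is what the transcript uses):

import torch

tensor_one = torch.tensor([[1, 2, 3], [4, 5, 6]])
tensor_two = torch.tensor([[7, 8, 9], [10, 11, 12]])
tensor_tre = torch.tensor([[13, 14, 15], [16, 17, 18]])

tensor_list = [tensor_one, tensor_two, tensor_tre]

# Default behaviour from the video: the new dimension goes in front.
stacked_tensor = torch.stack(tensor_list)
print(stacked_tensor.shape)                   # torch.Size([3, 2, 3])

# Stacking along dim=1 instead inserts the new axis after the rows,
# so the three 2x3 tensors become one 2x3x3 tensor.
print(torch.stack(tensor_list, dim=1).shape)  # torch.Size([2, 3, 3])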
https://aiworkbox.com/lessons/turn-a-list-of-pytorch-tensors-into-one-tensor
CC-MAIN-2019-51
refinedweb
636
74.19
SPEAKR! Introduction: SPEAKR!... Time: < 1 Day Experience Level: Beginner Step 1: What You'll Need.... This project is based on the open source platform Arduino. The advantage here is that most (if not all) of the components can be swapped out with cheap alternatives. 1) Arduino Uno (Adafruit Link) and Arduino Software Most Arduinos or Arduino alternatives will work here. 2) Protoshield (or breadboard) (Adafruit Link) The only thing used here is one circuit to the speaker and its corresponding resistor. We liked using the protoshield because it attaches right to our Arduino UNO. 3) 9V Battery 2.1mm plug (Adafruit Link) This makes life easier. You can also use a wall adaptor, but then you have to bring your wall outside. 4) 1 100 ohm resistor (Amazon Link) You only need one of these, and the actual resistance governs how loud your speaker is (so go for 1k or less). (We purchased all of the above materials from our friends at Adafruit in their Arduino Starter Pack for $65. If you've never used an Arduino before, it has a ton of replay value, so we'd recommend taking the plunge for a starter kit similar to this one.) 5) 8ohm speaker (Amazon Link) This is where creativity comes in. Notice that though the speaker itself is less than a dollar, the shipping puts the total cost at about 14 bucks. We had a little fun here- we disassembled a talking greeting card and found one of these bad boys inside. The cards are available at most dollar stores. Radio Shack sells these speakers cheaply, as well. 6) Accelerometer - MMA7361 (Amazon Link) Here's another point where we encourage getting creative. We used a MMA7361 - available off Amazon and Ebay for relatively cheap, but chances are if you dig deep you can find a similar model online for under 5 dollars. Wiring and documentation may be a bit different, but what's the fun in a project if it's not challenging? 7) 200 ft of 22 gauge wire (or 8 pieces of 22 gauge wire at 25 ft) (Radio Shack Link) 24 gague wire works, as well. 20 is starting to get a bit on the thick side. The thinner the wire, the easier it is to fly the kite. There is a tradeoff with the length of wire, as the signal degrades over long distance. 8) Misc odds and ends a) Computer b) Soldering Kit c) Colored electrical tape (for marking wires- colored sharpies will work as well) d) Bubble wrap 3"x2" (for protection in the event of a not-so-smooth landing) e) Duct tape (to make the world go round) f) 9V battery (power the Arduino) g) Wire stripper (for electrical entertainment) h) Scissors (because cutting with our teeth is annoying) i) PC Board or breadboard (hold the accelerometer in place on the kite) j) Optional: female headers if you don't want to solder your accelerometer to your breadboard k) A windy day (or a car will work in a pinch) ...And last but not least, a kite of your choice! We bought ours for $5 at Five Below. The bigger the kite, the better it will support weight, but some larger kites are more difficult to fly (and more painful to watch crash if you're not experienced). Step 2: Assemble the Arduinooooo! Step 3: Cut and Label Wires 1. Cut the 200 ft of wire into eight 25 lines of wire -- our lines were 20ft each. 2. Strip the ends of the wire with a wire stripper. 3. Label the ends with tape - we found that colored electrical tape worked best. 4. Twist the wires together to get a rope-like structure. Step 4: Assemble Accelerometer and Wires 1. Solder female headers to PC board. 2. Solder wires to the PC board. 3. Solder the wires to the headers. 4. 
Cut the PC board (it's pretty big otherwise). 5. Snap in the accelerometer. Be sure to write down which wires are going to which ports! Note: You can solder the accelerometer directly to the PC board, but we chose to use female headers instead so we could re-use the accelerometer for our other projects. Step 5: Cut the Kite and Feed Wires Through 1. Assemble your kite. 2. Poke a hole near the cross of the kite with scissors. 3. Feed your rope of wires through the hole. 4. Secure the PC board with Accelerometer to the kite. We secured it to the long, vertical piece, right below the cross. How we secured it: - Electrical tape to secure the accelerometer to the PC board. - Wrapped bubble wrap around the accelerometer and PC board. - Taped the whole thing down with duct tape. Step 6: Insert Wires Into Arduino and Hook Up Speaker See the wire schematic above for assistance. This directly corresponds to the protoshield headpins. Step 7: Upload the Code Though you don't need a computer to run the project itself, programming the Arduino requires some software on a computer. Don't worry, it's all cross platform and easy to use. 1) Arduino software 2) Add Arduino MMA7361 Library ... or other, if you have a different accelerometer 3) Upload the following code to the Arduino board (it is based off the examples in the MMA 7361 Library): #include <AcceleroMMA7361.h> AcceleroMMA7361 accelero; int x; int y; int z; int pitch; void setup() { Serial.begin(9600); accelero.begin(13, 12, 11, 10, A0, A1, A2); accelero.setARefVoltage(3.3); //sets the AREF voltage to 3.3V accelero.setSensitivity(LOW); //sets the sensitivity to +/-6G accelero.calibrate(); } void loop() { x = accelero.getXRaw(); y = accelero.getYRaw(); z = accelero.getZRaw(); Serial.print("\nx: "); Serial.print(x); Serial.print("\ty: "); Serial.print(y); Serial.print("\tz: "); Serial.print(z); //finalzie pitch pitch = (x*y/z) * 50; //play the tone tone(3, pitch, 10); delay(10); //(make it readable) } Step 8: Plug in Battery Pack... ...with a 9V inside it. Step 9: Test & Enjoy! Turn on the battery pack and see if it works! Fly it by using the rope-wire. Be careful and don't let the Arduino go! This is a great project to do with your friends. We had lots of fun. It works best with a decent gust of wind outside. Plan a fun day out, have fun flying your kite, and capture the electronic sounds of the wind! More content regarding the Powerhouse Pirates and SPEAKR! can be found on our website. We'd love to see what you make and what sounds you get! Send us some of your content and we'll post it on our website or youtube account. Audio from flying the kite outside a car is attached! Enjoy! Special Thanks to: Marcela Zablah Matt Fuchs Marc de Vinck TE Class of '13 Mateys, we have video up on our website! Under the "Treasure" tab! Nice project! Hint: you can find the MMA7361 and other sensors for cheap on this website:... They ship from Canada. Benji Franklin would be proud of you. : ) He's an honoree Pirate. We think about him all the time. :) Aaarrrrrrrr! Great project mateys!!! Y'arghhh. Don't be too sad R2D2. Don't want you to get scurvy! @powerhousepirates; Hi! I tweeted this. Cool idea and great picture. I enjoyed the sound file and hope you'll post it, and more, on your site. Cheers! : ) Site Thanks! Our Captain will be remixing some music tonight! I`d love to a video of it in action. :) Our Sail Master is working on it as we parley! Congratulations on your first Instructable! You should mention it on the Rewards for New Authors page! 
Thanks Scoochmaroo! We did it! Nice 'ible! Next step, connecting it to a MIDI device. We'll make sure that the Captain gets on top of this tout de suite.
http://www.instructables.com/id/SPEAKR/
CC-MAIN-2017-47
refinedweb
1,312
84.17
C++ switch statement
A switch statement allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each case.
Syntax: The syntax for a switch statement in C++ is illustrated by the following example:

#include <iostream>
using namespace std;

int main () {
   // local variable declaration:
   char grade = 'D';

   switch(grade) {
      case 'A' :
         cout << "Excellent!" << endl;
         break;
      case 'B' :
      case 'C' :
         cout << "Well done" << endl;
         break;
      case 'D' :
         cout << "You passed" << endl;
         break;
      case 'F' :
         cout << "Better try again" << endl;
         break;
      default :
         cout << "Invalid grade" << endl;
   }
   cout << "Your grade is " << grade << endl;

   return 0;
}

This would produce the following result:

You passed
Your grade is D
http://www.tutorialspoint.com/cplusplus/cpp_switch_statement.htm
CC-MAIN-2015-22
refinedweb
115
67.93
A. To make TDD work well, I created a test file, t/examples/fizzbuzz.t. It’s a pure-PIR test script that uses the Parrot version of Test::More and produces Test::Harness compatible TAP. The program starts out simply: 1 #!parrot 2 3 .include 'hllmacros.pir' 4 5 .const string TESTS = 18 The .include directive works like #include in C; it inserts the contents of another source file into the current compilation unit. hllmacros.pir contains some useful macros. PIR is powerful, but it’s still an assembly language at heart. We’ve developed a few shortcuts for making PIR easier to write. Line 5 declares a constant–the number of tests to run. PIR has typed variables (actually four distinct types of registers), so all named variable declarations need a type declaration. 7 .sub 'main' :main 8 load_bytecode 'Test/More.pir' 9 load_bytecode 'examples/pir/fizzbuzz.pir' 10 11 .local pmc import_sub 12 .IMPORT( 'Test::More', 'plan', import_sub ) 13 .IMPORT( 'Test::More', 'is', import_sub ) 14 .IMPORT( 'Test::More', 'diag', import_sub ) 15 16 plan( TESTS ) 17 18 test_function( 'procedural' ) 19 test_function( 'keyed_array' ) 20 .end Lines 7 and 20 bracket the main entry point to this test file–a subroutine named main. The name is immaterial to everything but programmers; only the :main attribute tells Parrot that, when invoking this file directly, this is the entry point. Lines 8 and 9 load two PIR libraries. One is the pure-PIR testing system and the other is the file of FizzBuzz subroutines to test. Yes, load_bytecode is an inopportune name. Line 11 contains another variable declaration. It only exists to make the .IMPORT() macro work correctly. This time, the variable is a local variable–that is, it persists only through this compilation unit (the subroutine). It holds a PMC–a Parrot Magic Cookie, Parrot’s single non-primitive type. Lines 12 through 14 import three testing functions from Test::More. Though they come from a separate namespace, they’re available within this test file as if I had declared them locally. (Now you see why I used .IMPORT.) Line 16 starts the testing by telling the test library that I plan to run TESTS number of tests. Lines 18 and 19 call a the remaining function in this file with the name of the various FizzBuzz functions in examples/pir/fizzbuzz.pir. 22 .sub 'test_function' Note the lack of :main or any other attribute on line 22. 23 .param string func_name Line 23 demonstrates how to access subroutine arguments in PIR. The .param declaration is like .local in its scope, but beyond merely associating a name to a particular type of register within this compilation unit, it also accesses the appropriate value passed into the subroutine. 25 .local pmc test_sub 26 test_sub = find_global [ 'FizzBuzz' ], func_name With the function name passed in as an argument, this function needs to be able to access the function in my example PIR file. The find_global opcode takes a namespace key and a string and returns the PMC stored in that namespace under that name. If I have the name correct, test_sub will contain a Subroutine PMC that I can invoke. Parrot has first-class functions; this is more than just a function pointer. 27 diag( func_name ) Line 27 calls a Test::More function to show the name of the function being tested. It won’t interfere with the TAP output in any way. It’s just nice when running the tests directly for diagnostic purposes. 
29 .local pmc results 30 results = test_sub( 10 ) 31 32 .local int count 33 count = results Lines 29 and 32 declare two variables which I used throughout the remainder of this function. results contains an Array PMC returned from the subroutine being tested. I use count to hold the number of elements in the results PMC. (If it’s not clear, the count = results line ends up calling the get_integer() vtable method on the PMC. That returns the number of elements for an Array-like PMC. How does Parrot know to call that method? count is an integer, and the assignment operation with an integer variable as an lvalue on a PMC rvalue turns into the get_integer() call.) All of my FizzBuzz subroutines currently use the same interface. They take an integer for the number of elements to create and return an Array PMC containing the strings for each element. This makes them all easier to test. 34 is( count, 10, 'test function should return the correct number of results' ) The first test is that calling the subroutine being tested and asking for ten elements should produce an Array PMC containing ten elements. 36 results = test_sub( 100 ) 37 count = results 38 is( count, 100, '... based on start and end' ) That didn’t quite meet the FizzBuzz challenge though, so I tried it again with 100 elements. (I also worked in a very small step when developing the previous step; I hardcoded the tested subroutine to return an Array of 10 null elements.) 40 .local string element 41 element = results[0] 42 is( element, '', '... with nothing for the first element' ) 43 44 element = results[2] 45 is( element, 'Fizz', '... with Fizz for the third element' ) 46 47 element = results[4] 48 is( element, 'Buzz', '... and Buzz for the fifth element' ) 49 50 element = results[14] 51 is( element, 'FizzBuzz', '... and FizzBuzz for the fifteenth element' ) 52 53 element = results[17] 54 is( element, 'Fizz', '... and Fizz for the eighteenth element' ) 55 56 element = results[19] 57 is( element, 'Buzz', '... and Buzz for the twentieth element' ) 58 59 element = results[29] 60 is( element, 'FizzBuzz', '... and FizzBuzz for the thirtieth element' ) This is the heart of the test now. With the returned results array, accessing individual elements should give the right answers for the FizzBuzz challenge. You’ve probably noticed, however, that all of the numbers are off by one. This is because Parrot’s array indices start at 0, not 1. I could have padded the array by one element if this were important, but noting it explicitly seemed sufficient. 61 .end Finally, this test function needs to end. That’s the test code. It doesn’t yet meet the FizzBuzz challenge, and that’s not only because I haven’t shown the code for the tested subroutines yet. This doesn’t print anything, unless you consider test output. Here’s the first part of examples/pir/fizzbuzz.pir: 1 .namespace [ 'FizzBuzz' ] The .namespace directive tells Parrot that all subsequent declarations should take place into a separate namespace. That’s unnecessary for simple example code, but it’s a good habit to avoid name clashes. (I wanted to write as real a program as possible. 3 .sub 'main' :main 4 .param pmc argv Like the test file, this code has its own main function as well. What happens when Parrot loads it from the test file? Absolutely nothing. Only the first :main is the main. However, this means that you can run this program on its own, because in that case this main function will execute. Line 4 gives access to the command-line arguments provided in that case. 
6 .local string sub_name 7 sub_name = argv[1] 8 9 if sub_name goto load_sub 10 sub_name = 'procedural' The first element of argv is the program name. That’s immaterial here. The real arguments start at index 1. I expect this program to take one argument, the name of the testable subroutine to run. If there’s no subroutine provided, default to procedural. Line 9 might look a little awkward. Yes, goto is the main form of intrafunction control flow in Parrot. Remember that I called it an assembly language. 12 load_sub: 13 .local pmc sub_pmc 14 sub_pmc = find_global sub_name This code ought to look familiar; it uses the string name of a subroutine to (attempt to) fetch the associated Sub PMC. I say attempt because someone may provide an invalid name. I didn’t add any error handling to check for this condition, but the best approach is to make sure that sub_pmc contains a PMC that implements the Sub role somehow. Note that this use of find_global is different; I’m calling it here from the same namespace where I’ve declared the testable subroutines, so I don’t need the namespace key that I used in the test file. 16 .local pmc results 17 results = sub_pmc( 100 ) 18 19 .local pmc iter 20 iter = new .Iterator, results 21 22 .local string elem 23 .local int count 24 count = 1 Lines 16 and 17 should look familiar. Lines 19 and 20 create an iterator. This is another type of PMC, used to iterate through an aggregate. Lines 22 through 24 declare a couple of variables I want to use in the upcoming loop. 26 iter_start: 27 unless iter goto iter_end 28 elem = shift iter 29 30 print count 31 print ": " 32 print elem 33 print "\n" 34 35 inc count 36 goto iter_start Lines 26 through 28 are boilerplate for every use of a Parrot iterator that I’ve ever seen. 26 is the start of loop label, 27 is the end of loop condition, and 28 gets the current element out of the iterator into a PMC variable. Lines 30 through 33 print the number of the current element and the current element. If it’s blank, it’s blank. Otherwise, it contains Fizz, Buzz, or FizzBuzz. Line 35 increments the count variable, used for display purposes only (it started at one). Line 36 restarts the loop. 38 iter_end: 39 end 40 .end The loop and the function end here. The rest of the code actually does the important FizzBuzz work. I’ll show that tomorrow. In the meantime, I wonder how many people can build Parrot (or just read the documentation) and write their own version of these functions. procedural is easy, but keyed_array took a few minutes.
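The PIR bodies of the tested subroutines are deliberately held back above ("I'll show that tomorrow"), but the interface the test file exercises - take a count, return an array whose element at position i (0-based) is '', 'Fizz', 'Buzz' or 'FizzBuzz' for the number i + 1 - is easy to restate. Here is a rough Python equivalent, not the author's PIR code, purely to make the expected behaviour concrete:

def fizzbuzz(count):
    # Return a list of `count` strings matching the tested interface.
    results = []
    for n in range(1, count + 1):
        word = ''
        if n % 3 == 0:
            word += 'Fizz'
        if n % 5 == 0:
            word += 'Buzz'
        results.append(word)
    return results

# The same spot checks as t/examples/fizzbuzz.t (0-based indexing):
results = fizzbuzz(100)
assert len(results) == 100
assert results[0] == ''
assert results[2] == 'Fizz'
assert results[4] == 'Buzz'
assert results[14] == 'FizzBuzz'
assert results[17] == 'Fizz'
assert results[19] == 'Buzz'
assert results[29] == 'FizzBuzz'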
http://www.oreillynet.com/onlamp/blog/2007/03/testing_fizzbuzz_in_parrot.html
CC-MAIN-2014-10
refinedweb
1,650
66.64
Quoting Eric W. Biederman (ebiederm@xmission.com):
> "Serge E. Hallyn" <serue@us.ibm.com> writes:
> >>...
> > I agree that setting the fs_user_namespace at mount time is fine.
> However when we use a mount that a process in another user namespace
> we need to not assume the uids are the same.
>
> Do you see the difference?

Aaah - so you don't want to store this on the fs. So this is actually like what I had mentioned many many emails ago?

> > much wider community on. I.e. the cifs and nifs folks. I haven't even
> > googled to see what they say about it.
>
> Yes.
>
> >> Yes. Your patch does lay some interesting foundation work.
> >> But we must not merge it upstream until we have a complete patchset
> >> that handles all of the user namespace issues.
> >
> > Don't think Cedric expected this to be merged :) Just to start
> > discussion, which it certainly did...
>
> If we could have [RFC] in front of these proof of concept patches
> it would clear up a lot of confusion.

Agreed.

> > If we're going to talk about keys (which I'd like to) I think we need to
> > decide whether we are just using them as an easy wider-than-uid
> > identifier, or if we actually need cryptographic keys to prevent
> > "identity theft" (heheh). I don't know that we need the latter for
> > anything, but of course if we're going to try for a more general
> > solution, then we do.
>
> Actually I was thinking something as mundane as a mapping table. This
> uid in this namespace equals that uid in that other namespace.

I see. That's also what I was imagining earlier, but it seems crass somehow. I'd almost prefer to just tag a mount with a user namespace implicitly, and only allow the mounter to say 'do' or 'don't' allow this to be read by users in another namespace. Then in the 'don't' case, user joe [1000] can't read files belonging to user jack [1000] in another namespace. It's stricter, but clean.

But whether we do mapping tables or simple isolation, I do still like the idea of pursuing the use of the keystore for global uids.

thanks,
-serge
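To make the 'mapping table' idea above a little more concrete, here is a purely conceptual sketch in Python; it is not kernel code, none of these names exist in the kernel, and the numbers are invented. Each user namespace allowed to use a mount gets a table translating the uid stored on the filesystem into a local uid, and the stricter 'isolation' alternative collapses to refusing any uid that has no mapping.

# Conceptual illustration only: per-namespace uid translation for one mount.
uid_map = {
    'ns_A': {1000: 1000, 1001: 4001},   # ns_A sees fs uid 1001 as local 4001
    'ns_B': {1000: 2000},               # ns_B sees fs uid 1000 as local 2000
}

def translate_uid(namespace, fs_uid):
    table = uid_map.get(namespace, {})
    if fs_uid not in table:
        # The stricter option: no mapping means no access, so joe [1000] in
        # one namespace never silently matches jack [1000] in another.
        raise PermissionError('uid %d is not mapped into %s' % (fs_uid, namespace))
    return table[fs_uid]

print(translate_uid('ns_B', 1000))   # -> 2000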
https://lkml.org/lkml/2006/7/14/141
CC-MAIN-2015-48
refinedweb
394
73.88
Save Data to a JSON File with Gson
Last modified: August 27, 2018

1. Overview
Gson is a Java library that allows us to convert Java objects into a JSON representation. We can also use it the other way around, to convert a JSON string to an equivalent Java object.
In this quick tutorial, we'll find out how to save various Java data types as JSON in a file.

2. Maven Dependencies
First of all, we need to add the Gson dependency in pom.xml. This is available in Maven Central:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.5</version>
</dependency>

3. Saving Data to a JSON File
Let's explore some examples with different data types in Java.

3.1. Primitives
Saving primitives to a JSON file is pretty straightforward using Gson:

gson.toJson(123.45, new FileWriter(filePath));

Here, filePath denotes the location of the file. (The FileWriter has to be flushed or closed for the output to actually reach the file.) The file output will simply contain the primitive value:

123.45

3.2. Custom Objects
Likewise, we can store objects as JSON. First, we'll create a simple User class:

public class User {
    private int id;
    private String name;
    private transient String nationality;

    public User(int id, String name, String nationality) {
        this.id = id;
        this.name = name;
        this.nationality = nationality;
    }

    public User(int id, String name) {
        this(id, name, null);
    }
}

Now, we'll store a User object as JSON:

User user = new User(1, "Tom Smith", "American");
gson.toJson(user, new FileWriter(filePath));

The file output will be:

{"id":1,"name":"Tom Smith"}

Since the nationality field is marked transient, it is not serialized. We'll see how to include null fields in serialization later.

3.3. Collections
We can store a collection of objects in a similar manner:

User[] users = new User[] { new User(1, "Mike"), new User(2, "Tom") };
gson.toJson(users, new FileWriter(filePath));

In this case, the file output will be an array of User objects:

[{"id":1,"name":"Mike"},{"id":2,"name":"Tom"}]

4. Using GsonBuilder()
So far we have used a default Gson instance. To tune how values are serialized, we can configure the instance through GsonBuilder(): for example, calling serializeNulls() so that null fields, which Gson omits by default, are included in the output. The available configuration options are listed in the GsonBuilder documentation.

5. Conclusion
In this quick article, we got an understanding of how to serialize various Java data types into a JSON file. To explore various articles on JSON, have a look at our other tutorials on this topic.
As always, the code snippets are available in this GitHub repository.
https://www.baeldung.com/gson-save-file
CC-MAIN-2021-10
refinedweb
382
58.08
With deno continuing to grow in popularity, one of the most common questions/requests I see regarding deno is that there is an abundance of packages/frameworks/libs that have already been created for the nodejs runtime to solve common problems, is there a way to utilize these libs in deno. In this post you’ll learn how to use packages from npm with Deno! The deno import pattern requires you to import ES Modules (ESM) with the included file extension, for example: import { serve } from '[email protected]/http/server.ts'; Notice how the .ts TypeScript file extension is included in the import path, this server.ts file is an ESM. Most NPM packages are not built and export in this way, yet, meaning they cannot be directly imported in deno. In this blog, I am going to show you one way to handle this problem. We are going to import the momentjs lib as an ESM and utilize the lib to format a date and print it to the console. Let’s talk Pika CDN. To achieve this, we are going to use this incredibly awesome CDN called pika.dev. Pika is on a mission to make the web faster. That is a direct quote from their website: We’re on a mission to make the web 90% faster Pika hosts web-focused ESM packages in their CDN which is fantastic for us to use with deno as this means we can import npm packages that we already know and use in our deno runtime from Pika as an ESM. This is incredible. And, it even comes with TypeScript typings and support as pika hosts type definitions (think like install a @types/ package from definitely typed) for every package where type definitions are provided to them. It handles this through the X-TypeScript-Types header on the import so deno knows where to get to get the type definitions for the given package. Using an NPM Package in deno Lets use the Pika CDN to build a super simple deno runtime app that will format and do some simple date manipulation. To start, lets import momentjs from Pika: import moment from '[email protected]^2.26.0'; Then, initialize a date object: const today = new Date(); console.log('today is ', today.toDateString()); // Tue Jun 09 2020 Now lets use the imported momentjs lib to format the date/time into a string with this format: MM/DD/YYYY hh:mm a // initialise a moment instance from our date instance const todayMoment = moment(today); // format the moment instance with our given format string const todayFormatted = todayMoment.format('MM/DD/YYYY hh:mm a'); console.log('today, but formatted, is ', todayFormatted); // 06/09/2020 08:47 pm Look at that! We can now use the momentjs package in our deno app. Super easy. This was a great collaboration between the deno and pika CDN team to make this. Let’s manipulate our today instance even more and use some more momentjs functionality: // add a day to the date to get to tomorrow. const tomorrow = todayMoment.add(1, 'd'); console.log('tomorrow is ', tomorrow.format('YYYY-MM-DD')); // 2020-06-10 // get the first day of the current month from today const firstDayOfMonth = todayMoment.startOf('month'); console.log('the first day of the current month is ', firstDayOfMonth.format('YYYY-MM-DD')); //2020-06-01 // get the last day of the current month from today const lastDayOfMonth = todayMoment.endOf('month'); console.log('the last day of the current month is ', lastDayOfMonth.format('YYYY-MM-DD')); //2020-06-30 And that’s it! Now you’ve learned how to use an NPM package with deno, the world is your oyster. 🏆 Want to learn more JavaScript? 
Thanks for reading, I hope you enjoyed the Deno article!
https://ultimatecourses.com/blog/deno-import-from-npm
CC-MAIN-2021-21
refinedweb
679
56.05
Analyses at the Cu–K, Fe–K and Mn–K edge were performed to study the green, marbled (green and yellow), blue and blackish (deep greyish olive green) glass slabs decorating three sectilia panels from the archaeological site of Faragola. Results indicate that all slabs were made by mixing siliceous sand with natron, sometimes probably mixed with small percentages of plant ash. Cu2+ and Pb antimonates should be responsible for the opaque green colours. The dark green and yellow portions of the marbled slabs are respectively comparable to the slabs comprising only one of these colours. Cu2+ together with Ca anti- monates probably produced light blue slabs, whereas cobalt was used to produce dark blue slabs. We consider it possible that the abundance ratio of Fe2+/Fe3+ and the complex Fe3+S2- would have an effect on the blackish slabs. The contribution of Mn cannot be ascertained even if it could have played a role in darkening glass colour. The comparison between the chemical composition of Faragola samples and several glass reference groups provided no conclusive evidence of provenance; whereas, the presence of a secondary local workshop can be hypothesized. INTRODUCTION The archaeological excavations at Faragola Ascoli Satriano (Foggia, Italy; Fig. 1 (A)) brought to light a huge, long-lasting rural settlement. Established in the Daunian period (fourth to third century bc), the settlement became a farmhouse during the Roman period (first century bc to third century ad), a villa in the Late Antique period (fourth to sixth century ad), and finally, a village in the early Medieval period (seventh to eighth century ad). Archaeometric research focused on three sectilia panels dated to the Late Antique villa. The sectilia panels were found along the central axis of the cenatio (Fig. 1 (B)), testifying to a complex design expressly requested by the upper class (Turchiano 2008 and references therein). The panels are made of red, orange, yellow, green, marbled, blue and blackish glass slabs, cut following a precise decorative motif (Fig. 1 (C) to 1 (E)). Other elements, such as central discs of porphyry and wolves’ teeth in coral breccia, further enrich the decorative design. This study examined the green, marbled, blue and deep Figure 1 (A) Faragola localization in southern Italy; (B) the cenatio; (C) Panel 1 (125 ¥ 70 cm); (D) Panel 2 (120 ¥ 64 cm); (E) Panel 3 (127 ¥ 86 cm); (F) light green, dark green, marbled, light blue, dark blue and blackish samples under examination. greyish olive green (from now on called blackish) slabs only (Fig. 1 (F)); the red, orange and yellow ones were the object of a previous work (Santagostino Barbone et al. 2008) to which the reader can refer for further information on the history of the settlement and for a detailed description of the sectilia panels. Given the rarity and prestige of the sectilia panels under examination, both the technological and the provenance issues are of great interest. As regards technology, the state of the art is surely advanced and allows most steps of the production cycle to be reconstructed. Bulk and point chemistry reveals fundamental data in order to characterize network formers, modifiers and stabilizers used to produce these slabs. Whether raw materials and/or recycled glass have been used is further possible to hypothesize, as well as to indicate the deliberateness or fortuitousness of some procedures. 
Based on available literature, green, blue and blackish glasses may have been ‘naturally’ coloured by the Fe oxides contaminating the sand. However, different colours would have required different firing conditions (in some cases also a glass batch with a specific chemical composition) suitable for reducing or oxidizing the colouring agent (see especially Pollard and Heron 1996; Mirti et al. 2001). The presence of both Fe and Mn is known to produce numerous effects on colour, mainly depending on component ratio and furnace atmosphere (see especially Mirti et al. 2001, 2002; Salviulo et al. 2004; Quartieri et al. 2005; Silvestri et al. 2005); while the presence of the ferric iron sulphide complex Fe3+–S2-, develops the amber chromophore during cooling, when aided by the pres- ence of alkali ions in the glass batch (Beerkens and Kahl 2002) and treated under reducing conditions during firing (Douglas and Zaman 1969; Schreurs and Brill 1984; Pollard and Heron 1996). Detected in red and orange slabs (see Santagostino Barbone et al. 2008 and references therein), copper is another common colouring agent that could be responsible for the green and blue colours as well. The blue slabs could be coloured by small amounts of divalent cobalt (Co2+). The absorption coefficient of the Co2+ ion is much greater than that of Cu2+ ions (see also Mirti et al. 2002): 200 ppm is sufficient to produce a dark blue colour. The provenance, supply (see, e.g., Kaczmarczyk 1986; Gratuze et al. 1996; Henderson 2003; Shortland et al. 2006) and treatment (see, e.g., Noll 1981; Warachim et al. 1985; Rehren 2001; Tite and Shortland 2003) of cobalt has been the subject of considerable speculation, chiefly in relation to the cobalt-blue glass of the Near and Middle Eastern Late Bronze Age. The presence or absence of Sb compounds (Ca and Pb antimonates) needs further investiga- tions. Ca antimonates were used to opacify and lighten the glass, usually forming during cooling through the precipitation of antimonate with the CaO contained in the glass (see especially Shortland 2002). Pb antimonates could have been produced through heating under oxidizing conditions of a lead glass batch to which stibnite had been added (see Henderson 1985), in order to produce yellow glasses. Due to the general lack of knowledge on this particular and special class of archaeological materials, the provenance issue is of particular interest and, at the same time, it represents the main difficulty to overcome. Data achieved in our previous work on red, orange and yellow slabs (Santagostino Barbone et al. 2008) led to the conclusions that numerous Mediterranean produc- tion areas could have equally been addressed for the production. Hence, the improvement of a compositional database on these sectilia is necessary, since only generic ‘trends’ and reference ‘models’ can be identified. OBJECTIVES Based on current knowledge on sectilia panels and glass production, this research is aimed at characterizing the glass slabs with two main objectives: the reconstruction of the technological cycle adopted for the production; and the provenance assessment. From a technological standpoint, whether siliceous sands or quartz pebbles were used as network formers and whether natron or plant ash was used as a flux is a matter of investigation. The identification of colouring and opacifying agents is further examined to understand how the absolute quantities, proportions and oxidation states of colouring agents affected the final colour of glass slabs. 
These results will be of further help in provenance determination, the latter being addressed by comparing the bulk chemical compositions of the Faragola slabs to the known reference groups on glass vessels, window panes and tesserae. In summary, the discovery of Faragolas’ panels has raised an important question: do we have to think of local (primary/ secondary?) production made by specialized artisans using local/foreign raw materials or is it more likely that a high-rank Roman officer imported luxury items made somewhere else? The possible answers would depict different scenarios in terms of social life and economic trades. MATERIALS The three panels were carefully sampled with the aid of a restorer to produce a representative sample set while minimizing the impact on the panels themselves. Representativeness was achieved by sampling the entire range of colours displayed by each panel. The total sample set consisted of 30 slabs divided as follows: four red, three orange, four yellow (published in Santagostino Barbone et al. 2008), three light green, five dark green, two marbled, four light blue, three dark blue and two blackish slabs (object of the present study, listed in Table 1). Although a few samples, labelled with the name of the stratigraphic unit in which they were found, cannot be attributed to any one particular panel, they were included in the sample set to increase the statistics on the colours used. Macroscopically (Fig. 1), light green slabs FN4 and FC5 appear rather similar and homo- geneous, whereas FE3 is characterized by a yellowish-green hue and shows randomly dis- persed yellow spots. Dark green slab samples appear very similar and homogeneous; only sample FS4 is characterized by a yellowish hue. Marbled slabs show alternating green and yellow bands. The width of each band differs from sample to sample, but colours, hues and macroscopic textures are identical in all samples. Among light blue slabs, FC8 and FS5 show the same blue colour and hue, FN6 shows a higher degree of surface alteration, whereas FE6 is highly porous and has numerous yellow spots. Dark blue slabs show the same intense blue colour. Lastly, blackish slabs show a fresh nucleus of deep greyish olive green colour and an external layer of blackish colour. EXPERIMENTAL Samples were analysed by means of both bulk and point chemical analyses. However, given the preciousness of the materials, their conservation state and the weight required for bulk chemistry, it was not possible to apply all techniques to each sample (Table 1). Bulk chemical data on the total sample were acquired by inductively coupled plasma optical emission spectrometry (ICP– OES) and inductively coupled plasma mass spectrometry (ICP–MS) at Activation Laboratories (Ontario, Canada). Major, minor and trace elements were determined through different sample preparation methods and techniques, to determine the scope of elements and the detection limits (indicated in Table 2). Samples were mechanically cleaned in order to remove the alteration layer, ground and dissolved through both lithium metaborate fusion and acid attack. All major elements and a few minor elements (Ba, Be, Sr, V, Y and Zr) were analysed using a Perkin-Elmer Optima 2000 DV inductively coupled plasma optical emission spectrometer (ICP–OES). The remaining minor, trace and rare earth elements (REEs) were analysed using a Perkin-Elmer Elan 6100 inductively coupled plasma mass spectrometer (ICP–MS). Loss on ignition (LOI) was deter- mined (1050°C for 2 h). 
Scanning electron microscopy (SEM-EDS) was used for textural observations. The scanning electron microscope was a Philips XL30, equipped with an energy dispersive spectrometer (EDS) Philips EDAX DX4. A variety of natural and synthetic materials were used as primary and quality control standards. Operating conditions were as follows: accelerating voltage 20 kV, beam current ~30–40 mA, working distance 10–15 mm. Quantitative analyses with the theoretical inner pattern were obtained using the ZAF method of correction. Electron microprobe (EMP) was used to obtain the composition of glassy matrices and particles. The electron microprobe (EMP) was a CAMECA SX50, equipped with four wavelength-dispersive spectrometers (WDSs), was used under the following operating condi- tions: major and minor elements: 15 kV, beam current at 15 nA, beam diameter focused at not less than 10 mm, with a counting time for peak and background of 10 s each; trace elements: 20 kV and 20 nA sample current with a counting time for peak and background of 20 s each. In order to minimize the loss of alkali elements, Na and K were analysed in the first run and the beam diameter was focused at not less than 10 mm, with a counting time of 5 s for peak and background). X-ray counts were converted to oxide weight percentages using the PAP (CAMECA) correction program. Synthetic pure oxides were used as standards for Al, Fe, Mn, Ti Det.limit SiO2 TiO2 Al2O3 Fe2O3 MnO MgO CaO Na2O K2O P2O5 SO3 LOI Cu Pb Sb Co Sn Zn Cr Ni Ag Sc Be V Ba Sr % % % % % % % % % % % % ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm 0.01 0.001 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 10 5 0.5 1 1 30 20 20 1 0.1 1 5 3 2 Light green FN 4 67.20 0.086 2.70 0.65 0.406 0.66 6.88 17.02 0.73 0.11 0.62 0.50 13,517 17,471 5618.9 10 994 152 12 9 8.1 1.7 1 18 152 407 FC 5 67.20 0.087 2.35 0.65 0.184 0.58 5.93 17.84 0.55 0.10 0.57 1.12 13,187 15,781 4274.0 5 694 202 11 13 6.2 1.7 1 16 125 376 FE 3 67.84 0.101 2.68 0.69 0.252 0.68 6.34 17.68 0.72 0.11 0.57 0.01 11,571 19,152 5585.5 9 898 110 11 13 7.0 1.8 1 17 147 418 Dark green FN 5 68.33 0.083 2.51 0.54 0.287 0.54 6.19 17.90 0.57 0.09 0.50 0.62 11,197 7606 4381.3 6 547 78 11 11 5.0 1.6 1 17 148 399 FC 6 68.56 0.081 2.22 0.55 0.291 0.56 6.18 17.85 0.53 0.08 0.48 1.32 11,521 8327 4765.2 5 550 69 11 7 5.4 1.5 1 17 148 410 FE 4 68.88 0.085 2.59 0.60 0.327 0.58 6.54 17.87 0.56 0.08 0.51 0.01 8630 4927 3664.6 4 481 63 13 15 3.5 1.8 1 18 168 444 Light blue FC 8 67.23 0.082 2.39 0.61 0.216 0.60 6.09 18.25 0.65 0.12 0.76 1.20 22,830 2430 5382.4 6 935 398 11 17 8.2 1.4 0 14 139 359 FS 5 68.18 0.084 2.23 0.60 0.219 0.62 6.22 18.19 0.65 0.12 0.80 0.01 25,591 3770 6388.1 6 >1000 270 11 11 8.8 1.8 1 16 132 378 FE 6 69.38 0.088 2.37 0.58 0.161 0.60 6.28 18.91 0.65 0.09 0.67 0.01 16,777 1130 5675.6 4 764 108 12 12 6.5 1.7 1 16 137 402 Dark blue FS 6 68.70 0.094 2.30 0.86 0.409 0.63 6.65 16.49 0.49 0.07 0.54 1.86 884 1170 7084.7 212 101 79 12 17 1.0 1.5 1 18 146 427 FE 8 68.54 0.081 2.29 0.74 0.699 0.72 7.12 17.60 0.62 0.14 0.56 0.01 700 1440 10649.7 334 49 87 14 29 0.8 1.5 1 37 150 861 Blackish FC 9 67.69 0.125 2.79 1.06 1.085 1.33 8.25 15.66 1.36 0.49 0.46 0.01 295 402 732.9 16 36 73 15 22 0.7 1.8 1 34 291 530 Det.limit Y Zr Ga As Rb Nb Mo In Cs La Ce Pr Nd Sm Eu Gd Tb Dy Ho Er Tm Yb Lu Hf Ta W Bi Th U E. Gliozzo et al. 
ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm 2 4 1 5 2 1 2 0.2 1 0.1 0.1 0.05 0.1 0.1 0.05 0.1 0.1 0.1 0.1 0.1 0.05 0.1 0.04 0.2 0.1 1 0.4 0.1 0.1 Light green FN 4 1 33 3 28 11 2 2 2.6 0.6 9.3 12.1 1.51 5.7 1.1 0.32 1.2 0.2 1.1 0.2 0.6 0.08 0.5 0.07 1.1 0.3 1 0.8 0.4 1.1 FC 5 1 40 3 33 11 2 2 1.7 1.0 9.9 10.1 1.31 5.1 1.1 0.31 1.1 0.2 1.0 0.2 0.5 0.07 0.5 0.07 1.0 0.3 1 1.0 0.7 1.0 FE 3 1 38 3 29 12 2 3 2.2 0.6 9.4 11.4 1.51 5.8 1.2 0.35 1.3 0.2 1.1 0.2 0.6 0.08 0.5 0.07 1.1 0.3 1 0.8 0.7 1.1 Dark green FN 5 1 38 3 26 11 2 3 1.3 0.7 8.6 10.5 1.37 5.4 1.1 0.36 1.2 0.2 1.0 0.2 0.5 0.07 0.5 0.07 1.1 0.3 1 0.4 0.1 1.0 FC 6 0 35 3 27 11 2 3 1.3 0.4 8.6 10.4 1.36 5.4 1.1 0.33 1.2 0.2 1.0 0.2 0.6 0.07 0.5 0.07 1.2 0.3 1 0.5 0.4 1.1 FE 4 0 35 3 28 12 1 5 1.2 0.7 8.9 11.4 1.48 5.8 1.2 0.36 1.3 0.2 1.2 0.2 0.6 0.07 0.5 0.08 0.9 0.3 2 0.4 0.3 0.9 Light blue FC 8 1 39 3 16 8 1 4 2.4 0.6 5.4 10.8 1.18 4.7 1.0 0.30 1.1 0.2 1.0 0.2 0.5 0.07 0.5 0.07 1.0 0.3 1 0.1 0.6 1.1 FS 5 0 35 3 51 13 2 4 3.1 0.5 9.4 10.3 1.34 5.3 1.1 0.31 1.2 0.2 1.0 0.2 0.5 0.07 0.5 0.07 1.1 0.3 1 1.0 0.4 1.1 FE 6 1 36 3 38 13 1 3 1.9 0.5 9.4 11.2 1.43 5.6 1.2 0.34 1.2 0.2 1.1 0.2 0.6 0.07 0.5 0.07 1.1 0.3 0 0.7 0.5 1.1 Dark blue FS 6 1 44 4 26 7 2 3 0.4 0.6 8.4 9.9 1.30 5.2 1.1 0.33 1.1 0.2 1.1 0.2 0.6 0.08 0.5 0.08 1.1 0.4 2 0.1 0.7 1.0 FE 8 1 38 4 86 11 1 5 0.2 0.2 6.8 13.4 1.45 5.8 1.3 0.44 1.2 0.2 1.0 0.2 0.7 0.11 0.6 0.09 1.1 0.4 0 0.2 0.4 1.0 Blackish FC 9 1 51 4 6 8 2 3 <0.2 0.4 6.5 14.0 1.50 6.1 1.3 0.42 1.4 0.2 1.2 0.2 0.7 0.08 0.6 0.08 1.4 0.4 7 0.1 0.5 0.9 FE 9 1 49 4 7 10 2 4 <0.2 0.3 11.2 13.9 1.78 7.0 1.5 0.48 1.3 0.2 1.3 0.2 0.8 0.09 0.6 0.10 1.5 0.4 9 0 0.6 1.0 The sectilia panels of Faragola (Ascoli Satriano, southern Italy) 395 and Sn, wollastonite for Si and Ca, albite for Na, orthoclase for K, periclase for Mg, vanadinite for Cl, orthoclase for K, apatite for P, sphalerite for S and Zn, metallic copper and cobalt for Cu and Co, respectively, stibnite for Sb. Detection limits were 0.1% for Si, Na, Ca, Al, K, Mg, Fe, Mn, Ti, P, S and Cl, 500 ppm for Cu, 250 ppm for Co, 490 ppm for Sn, 600 ppm for Sb, 670 ppm for Zn and 1100 ppm for Pb. PAP (CAMECA) software was used for correction. Precision was within 1% for major elements, about 3–4% for minor elements and about 8% for trace elements. Cu, Fe and Mn oxidation states were investigated by means of X-ray absorption spectroscopy (XAS). Measurements were carried out in two distinct experimental sessions: the first at the Cu–K and the second at the Fe–K and Mn–K edge. In both cases, the GILDA-CRG beamline (d’Acapito et al. 1998b) was used with the ESRF storage ring running at 6 GeV. The monochro- mator used a pair of Si (111) crystals for the measurements at the Cu–K edge and Si (311) crystals for the measurements at the Fe–K and Mn–K edge. In both cases it was run in dynamically focusing mode (Pascarelli et al. 1996). A pair of Pd-coated mirrors working in grazing inci- dence (Ecutoff = 18 keV) was used to reject harmonics. Reference spectra of metallic Cu and Fe, cuprite (Cu+12O), tenorite (Cu+2O), hematite (Fe3+2O3), hercynite (Fe2+Al2O4), spessartine (Mn2+3Al2(SiO4)3), rhodochrosite (Mn2+(CO3)), bixbyite ((Mn3+,Fe3+)2O3) and pyrolusite (Mn4+O2) were collected in transmission mode, whereas the spectrum of a soda-lime glass doped with Cu by ion exchange (Glass M1) was collected in fluorescence mode. This glass was obtained by a binary ion exchange as described in Gonella et al. (1998). 
Data collection was carried out at room temperature in fluorescence mode using a 13-element high-purity Ge detector. The energy scales were calibrated by attributing the values from Bearden and Burr (1967) to the first inflection point of the absorption spectra of the metallic foils: Eedge = 8979 eV for Cu–K edge, Eedge = 7112 eV for Fe–K edge and Eedge = 6539 eV for Mn–K edge. The procedure was repeated during data collection to check the stability of the energy calibration. The measurement of the absorption coefficient from the samples was carried out in fluorescence mode using a high-purity Ge energy-resolving detector with an average energy resolution DE @ 200 eV. The –Ka emission line of the various elements considered in the study was selected for the collection of spectra. The maximum count rate per element was limited to 50 kcps in order to avoid a non-linear response of the detector. The incident beam was monitored through a N2-filled ion chamber. For each sample three spectra were collected and averaged in order to minimize noise. Detailed X-ray absorption near edge structure (XANES) data at the Fe–K and Mn–K edges were collected with an energy step of 0.2 eV and were normalized to the edge step. The calibration of the energy scale was checked before and after each scan by collecting the absorp- tion from a metallic foil (Fe or Mn). The analysis consisted of fitting the peaks that appear in the region just before the edge by a series of Pseudo-Voigt lines plus an arctangent curve to mimic the absorption from the states in the continuum region. The amplitude and position of these lines reveal the valence state and the local symmetry of the metal (Galoisy et al. 2001). XAS spectra were extracted following the standard procedure (Lee et al. 1981), i.e., by subtracting a linear background from the pre-edge region and a spline approximation from the post-edge region using the ATHENA (Ravel and Newville 2005) code. The XANES spectra were obtained by normalizing the pre-edge subtracted spectra to exhibit an edge jump of J = 1. The quantitative analysis was based on ab-initio calculations of the backscattering phase and ampli- tude functions using the FEFF8.10 code (Ankudinov et al. 1998). Atomic clusters of 6 Å were created starting from the known crystallographic structures of metallic Cu, Cu2O, Fe2O3, Mn2O3. Potentials were calculated through the Muffin Tin approximation, and using the complex Hedin- Lunqvist approximation for the exchange part (Ankudinov et al. 1998). The data were Fourier transformed in the interval k = [2.5, . . . , 9.5] Å-1 using a k3 weight and a Hanning window function. The fits to the theoretical models were carried out in R space with the ARTHEMIS code (Ravel and Newville 2005). RESULTS Figure 2 SEM–BSE images of (A) light green sample FN4 where the glassy matrix is characterized by different shades of brighter or darker grey; (B) the homogeneous matrix of dark green sample FS3; (C) light blue sample FC8 where Cu and Ca-antimoniate particles are diffused in the glassy matrix; (D) dark blue tessera FS6 where Ca-antimonates are dispersed in the striped glassy matrix; (E) marbled tessera FC7 where brighter and darker bands correspond to yellow and green areas, respectively; (F) blackish tessera FC9, bubbled and highly altered along the external surface. also shows a Sn oxide of appreciable dimensions (50 mm); whereas, rounded Mn oxides (about 35 mm) have been observed in dark blue slabs. The glass matrix is homogeneous in both colours. 
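As a practical aside on the pre-edge fitting described in the experimental section above (Pseudo-Voigt peaks sitting on an arctangent step), the model is straightforward to reproduce. The sketch below is in Python with NumPy/SciPy, uses invented starting values around the Fe-K pre-edge region purely for illustration, and is not the ATHENA/ARTEMIS workflow actually used by the authors.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(e, amp, center, sigma, eta):
    # Linear mix of a Gaussian and a Lorentzian of the same width.
    gauss = np.exp(-0.5 * ((e - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((e - center) / sigma) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)

def pre_edge_model(e, step, e0, width, amp, center, sigma, eta):
    # Arctangent step for the states in the continuum plus one Pseudo-Voigt
    # pre-edge peak; further peaks would be added as extra terms.
    edge = step * (0.5 + np.arctan((e - e0) / width) / np.pi)
    return edge + pseudo_voigt(e, amp, center, sigma, eta)

# 'energy' (eV) and 'mu' (normalized absorption) would come from a measured,
# normalized spectrum, e.g. energy, mu = np.loadtxt('spectrum.dat', unpack=True)
# popt, _ = curve_fit(pre_edge_model, energy, mu,
#                     p0=[1.0, 7123.0, 2.0, 0.10, 7114.0, 0.8, 0.5])
# The fitted amp and center are then the pre-edge amplitude and position used
# to assign valence state and site symmetry.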
In marbled slabs, the dark green and yellow bands are clearly distinguished (Fig. 2 (E)). The former can be directly compared to dark green slabs; the latter are characterized by numerous Pb antimonates spread throughout the glassy matrix, as noted in yellow slabs (Santagostino Barbone et al. 2008). The yellow bands show rare, large nuggets of Fe oxides, whereas the green ones contain rare nuggets of Sn oxide (possibly cassiterite). Figure 3 SEM–BSE images of (A) Sn-oxide in dark green sample FS3; (B) Ca-antimonates in dark blue sample FS6; (C) elongated crystals of wollastonite (darker), together with Cu particles (brighter) in light blue sample FN6; (D) relict alkaline feldspar along the interface between the yellow and the green areas in marbled tesserae in sample FC7; (E) acicular Ca–Na silicate crystals in blackish sample FC9; (F) polygonal Ca–Na silicate in light green sample FN4. In blackish slabs, the fresh nucleus is characterized by a rather homogeneous glass matrix, surrounded by an altered external layer (Fig. 2 (F)), often reaching a considerable thickness. A Cu sulphide nugget of appreciable dimensions and a few Fe-Ti oxides (ilmenite?) were dispersed in the glass matrix. Rare relict phases mainly consist of plagioclases in dark/light green and blue slabs. Light blue slabs further show sporadic crystals of quartz. Alkaline-feldspar relics are rather common in the yellow bands of marbled slabs but are rare in the green ones (Fig. 3 (D)). As newly formed phases, Ca–Na silicate crystals with acicular habit are present in light green, light blue, marbled (both yellow and green bands) and blackish slabs (Fig. 3 (E)), while crystals with tabular habit are observed in dark green and dark blue slabs. Wollastonite crystals are polygonal in dark green and dark blue slabs, as well as in the green bands of marbled slabs; both tabular and polygonal (Fig. 3 (F)) habit can be observed in light green slabs. A few feldspathoid crystals are also present in marbled slabs and in dark green slabs. Table 3 EMPA results. Glass matrix mean composition (n = x stands for point analyses per sample) and standard deviations (SDs) of samples grouped by colour. Columns: (1) samples FN4 and FC5 (light green); (2) samples FN5, FC6, FS3 (dark green); (3) samples FN6, FC8, FS5 (light blue); (4) samples FS6 and FE8 (dark blue); (5a–b) samples FC7, FE5 (marbled green and yellow); and (6) sample FC9 (blackish). Crystals and aggregates columns: (7–8) Sn-rich Pb antimonates; (9) Ca antimonate; (10–11) Ca–Na silicates; (12) feldspathoid 1 2 3 4 5a 5b 6 7 8 9 10 11 12 Light green Dark green Light blue Dark blue Marb. green Marb. 
yellow Blackish SiO2 67.06 0.70 68.39 0.68 68.06 1.15 68.42 0.92 68.12 0.38 65.14 0.45 66.43 0.21 9.65 6.36 – 61.57 67.68 33.81 TiO2 0.08 0.03 0.09 0.02 0.07 0.02 0.08 0.04 0.07 0.03 0.07 0.02 0.12 0.02 – – – – – – MgO 0.61 0.05 0.59 0.05 0.56 0.04 0.68 0.11 0.61 0.05 0.49 0.05 0.86 0.03 – 0.64 – 0.72 0.49 – CaO 6.11 0.38 6.32 0.28 6.09 0.16 6.50 0.25 6.78 0.28 5.87 0.12 8.14 0.09 1.87 2.08 26.22 24.91 11.98 5.98 Na2O 16.37 0.41 16.66 0.31 16.49 0.25 16.87 0.30 16.58 0.27 15.81 0.38 16.32 0.11 3.37 2.58 – 10.47 13.15 17.09 K2O 0.55 0.06 0.55 0.04 0.57 0.04 0.52 0.06 0.58 0.03 0.46 0.05 1.27 0.01 – – – – 0.33 – P2O5 0.07 0.04 0.11 0.03 0.09 0.04 0.10 0.05 0.11 0.03 0.06 0.02 0.46 0.03 – – – – – – SO3 0.29 0.06 0.26 0.06 0.33 0.05 0.33 0.04 0.30 0.06 0.30 0.07 0.26 0.05 – – – – – 15.71 Cl2O 1.47 0.11 1.49 0.04 1.37 0.06 1.40 0.16 1.44 0.11 1.37 0.05 1.46 0.07 – – – – 1.21 – CoO 0.01 0.02 0.01 0.01 0.00 0.01 0.04 0.02 0.02 0.02 0.00 0.01 0.00 0.00 – – – – – – PbO 2.32 0.53 1.04 0.09 0.35 0.09 0.25 0.09 0.54 0.10 8.60 0.37 0.04 0.04 50.48 51.78 – 0.67 2.17 – Sb2O3 0.71 0.28 0.52 0.08 0.83 0.30 1.12 0.18 0.60 0.03 0.87 0.65 0.09 0.03 20.42 19.42 73.78 – – – SnO2 0.13 0.05 0.06 0.02 0.15 0.04 0.02 0.01 0.10 0.05 0.17 0.30 0.01 0.01 11.34 14.25 – – – – CuO 1.52 0.09 1.20 0.28 2.53 0.48 0.12 0.03 1.59 0.11 0.14 0.11 0.02 0.01 – – – – 0.94 – 100.0 100.2 100.2 99.6 100.5 101.7 99.7 99.26 99.27 100 100.01 99.99 99.99 The sectilia panels of Faragola (Ascoli Satriano, southern Italy) 401 combeite (Na2Ca2Si3O9) or ‘devitrite’ (Na2Ca2Si5O16). Similar to the yellow slabs (see Santagos- tino Barbone et al. 2008), feldspathoids show a similar composition to that of sodalite group phases, although much richer in S and lacking Cl (Table 3, column 12). Figure 4 (A) Normalized XANES spectra of the samples plus the model compound M1. The dashed line indicates the position of the edge at 8982 eV. (B) Background subtracted EXAFS data of the samples. The pre-edge was approximated by a straight line whereas the atomic background was reproduced with a spline with parameter Rbkg = 1 Å. (C) Fourier transforms of the EXAFS data. The transform was carried out in the range k = 2.5–9.5 Å-1 with a Hanning window and a k3 weighting. (D) XANES full spectra at the Fe–K edge. The line indicates the edge position of Fe2O3 located at 7123 eV. (E) Detail of the pre-edge peak (points) with the best fit curves (lines). The vertical line here marks the position of the main single peaks at 7114 eV. (F) EXAFS data. (G) Fourier transforms of the EXAFS data (lines) with the best fitting curves (dots). (H) XANES full spectra at the Mn–K edge. The line indicates the edge position of rhodochrosite located at 6545 eV. (I) Detail of the pre-edge peak (points) with the best fit curves (lines). The vertical line here marks the position of the main single peaks at 6540 eV. (J) EXAFS data.(K) Fourier transforms of the EXAFS data (lines) with the best fitting curves (dots). The sectilia panels of Faragola (Ascoli Satriano, southern Italy) 403 configuration is slightly bent, thereby affecting the intensity of resonance. Sample FE3 (light green) shows a peak at 2.8 Å, which coincides with the location of the Cu–Cu peak in Cu2O; however, the fit with this contribution was extremely unstable. A Cu2O phase may thus be present in only small amounts (<10%). The predominant phase for Cu is Cu+, and this ion is known not to exhibit absorption bands in the visible range. 
Colouration must thus be attributed to other ions or to small (<10%) amounts of Cu2+ that cannot be separated from the dominant phase. Fe–K edge on samples FC9, FE7, FE3, FS5 and FN5 The XANES spectra of the samples are provided in Figure 4 (D) whereas Figure 4 (E) shows a detail of the pre-edge region. The edge position (first inflection point of the absorption coefficient) corresponds to that of Fe2O3 in all cases with the exception of FC9 (blackish) where the edge appears to rise at a lower energy. Observing the pre-edge peaks (see Table 5) all samples exhibit a well-defined single peak with an amplitude of about 10% of the total edge jump and located at 7114 eV: these values are typical of Fe3+ ions in a tetrahedral site (Galoisy et al. 2001). In sample FC9 (blackish), the peak at 7114 eV is less intense with respect to the other samples (only 6%) and a shoulder can be observed at 7112 eV. This value is typical for Fe2+ ions (Galoisy et al. 2001). EXAFS data are shown in Figure 4 (F). In the EXAFS spectrum a single oscillation is visible whereas in the Fourier transform (Fig. 4 (G)) a shoulder is also detected after the main peak. This corresponds to coordination with O atoms plus a smaller signal that, after various tests, we have attributed to Fe second neighbours. The results of the quantitative EXAFS analysis are shown in Table 6, where it is possible to note that the first coordination shell consists of four O atoms confirming the indication from XANES of a tetrahedral coordination. Sample FC9 (blackish) shows also an elongation of the bond length compatible with the fact that Fe2+ possesses a larger ionic radius than Fe3+. Considering also the XANES data we can assume that there is a consid- erable presence of Fe2+ in a tetrahedral site and the ratio Fe2+/Fe3+ is around 0.2–0.4. In all cases the second shell is made up of Fe ions at 2.93–2.96 Å. Mn–K edge on samples FC9 and FE7 The XANES spectra are shown in Figure 4 (H) and 4 (I). The edge position of the samples coincides with that in the rhodochrosite mineral (octahedral Table 7 Fits of the pre-edge peaks, in the Mn–K edge spectra. The values of amplitude and positions are given for all the samples. The sigma value was in all cases 0.9 1 0.1 eV Mn2+) and is located at lower energies than that of Mn3+ in bixbyite or Mn4+ in pyrolusite. The peaks in the pre-edge region have a position of 6540 eV and an amplitude of 5–9% (Table 7). Their energy position indicates the presence of Mn2+ as evidenced in Quartieri et al. (2005) whereas the higher amplitude of the peak with respect to the rhodochrosite suggests the presence of a tetrahedral site for Mn. The EXAFS spectra with the related Fourier transforms are shown in Figure 4 (J) and 4 (K). In the EXAFS spectrum, marked differences between FC9 (blackish) and FE7 (dark blue) are even more evident than in the FT. Here the former sample exhibits a strong peak at 2.5 Å that can be reproduced with a Mn–Mn coordination and that is absent in the latter sample. The results of the quantitative data analysis are provided in Table 8. The coordi- nation number for the O first neighbours is around four so confirming the hypothesis from XANES data of tetrahedral Mn. The further Mn shell is only present in sample FC9 (blackish). DISCUSSION Provenance The chemical composition of the Faragola slabs was compared to those reported for several reference groups currently available for Roman and Late Antique glass (Brill 1988; Freestone et al. 2000; Mirti et al. 2000; Foy et al. 
2003; Picon and Vichy 2003; Uboldi and Verità 2003; Henderson et al. 2004; Wolf et al. 2005). Based on all major element contents, it is impossible to assign the Faragola slabs to the products of any one particular workshop. Significant but incon- clusive similarities have been found with ‘Group 3’ only, which includes numerous Roman and Early Medieval glass specimens found in the West and produced with Belus sand (Foy et al. 2003; Picon and Vichy 2003). Only limited comparisons can be made with glass tesserae reference groups too. Composi- tional differences are too vast to compare the Faragola slabs with the Shikmona (Israel, fifth century), Hosios Loukas (Greece, 10th century) and San Marco (Venice, Italy 11th–13th century) contexts analysed through SEM-EDS by Freestone et al. (1990). The Kenchreai panels analysed through atomic absorption and emission spectrography by Brill (1976) show similarities in the relative proportion of some major element contents, but absolute values differ greatly. For instance, the Ca:Na ratio is about 1:3.5 in the green samples of both the Kenchreai and Faragola panels, whereas absolute SiO2 values vary up to 16 wt%. The glass tesserae of Antioch (first to fourth century ad), of the ‘Casa della Fontana Piccola’ at Pompeii and of Roman mosaic vessel ware (Metropolitan Museum of Art) analysed through SEM-EDS and EMP by Mass et al. (1998) appear to have very similar Na2O, CaO, K2O and MgO contents, whereas SiO2 and Al2O3 contents are consistently higher (4–9 wt% higher for SiO2, and 0.3–0.9 wt% higher for Al2O3). Among Medieval Torcello glass tesserae (Andreescu-Treadgold and Henderson 2006), comparison can be drawn with ‘natron glasses’, ‘mixed natron–plant ash glasses’ and partially with ‘plant ash glass 1 and 2’ groups without finding a strict correlation. Finally, minor and trace element contents may suggest the formulation of a different provenance hypothesis, distinguishing black- ish slabs and maybe also dark blue slabs from all the others. The colours of the slabs varied greatly according to atmospheric conditions within the kiln and are therefore discussed separately. Green slabs Based on available analytical data, the copper contained in the green Faragola slabs is mainly Cu1+ and is thus unable to produce any colour. However, considering the detection limit of XAS measurements (<10 wt%), we cannot exclude the presence of small amounts of Cu2+ sufficient to produce a transparent blue colour. Pb antimonates probably promoted the change from a transparent blue to an opaque, light green colour. Moreover, the contribution of iron ions in the development of green hues should also be taken into consider- ation. The light green slabs differ from the dark green ones for their absolute colouring agent content: the dark slabs contain fewer Cu and Pb antimonate particles than the light ones. The provenance and preparation of Pb antimonates are difficult to reconstruct, but their compositions suggest the introduction of two different compounds (Pb-based and Sb-based, respectively) into the batch: the Pb:Sb ratio of the particles varies considerably and never corresponds to the stoichiometric formula of any mineral. As the Pb–Sb binary diagram in Figure 5 (A) shows that Pb and Sb contents are not linearly correlated, it is unlikely that a single mineral phase was used. Moreover, Pb antimonates contain significant amounts of Sn. 
Given the late chronology of the Faragola panels, the deliberate addition of a Sn compound should be considered; nevertheless, its low contents suggest the likely addition of scraps of leaded bronze (see also Mirti et al. 2000). Note that the slabs with the highest Sn contents show a corresponding increase in Sb, Ag and Fe content, perhaps indicating a similar ore deposit. Sn, Ag, Ag–Sb, Ag–Pb and Sn–Ag mineralizations commonly occur in Sn–Ag ore provinces, such as those of the Rudny Moun- tains, Cornwall, Yakutia, South Pamir, Middle Asia, Altai and Mongolia (see Borisenko et al. 2001). Marbled slabs The composition of the yellow bands and green bands is comparable to that of the green slabs and yellow slabs (Santagostino Barbone et al. 2008), suggesting that the two batches were produced separately. The texture suggests that the two slab types were mixed together in the last stage of production, and the difference between Sb contents in the yellow bands of the marbled slabs and those in the yellow slabs indicates that the two were made differently. Blue slabs As for the light blue slabs, the absolute quantity of CuO testifies to the deliberate addition of this component to the glass batch. The light blue slabs are mainly characterized by the presence of Cu1+. However, as in the case of the green slabs, a small quantity of Cu2+ could produce a transparent blue colour. Ca antimonates further opacified the glass and lightened its colour. In dark blue slabs, the Co2+ ion played a major role in colour development. Ca antimonates were used to opacify the glass. As for the provenance of cobalt and antimony used to produce the Faragola panels, once again there is no straightforward answer. Based on the Late Antique chronology of these finds, a wide range of possible supply areas can be hypothesized. Especially considering cobalt possible sources, the sample set has to be considered insufficient in order to draw significant correlations between cobalt and those oxides included in cobalt-bearing minerals (e.g., Ni or Zn) which could trace back to the source material and area (see, e.g., Henderson 2003). Moreover, the low minor and trace element contents are of no help in formulating plausible hypotheses, even if the supposed provenance of these panels is Egyptian (Brill 1976). As for technology, the adopted process (e.g., a two-stage process, mixing a cobalt-blue frit with sand and natron) cannot be ruled Figure 5 The chemical compositions of Faragolas slabs (excluding dark orange and marbled samples), together with the compositions of the window glasses found at Sion (Wolf et al. 2005), have been plotted in the binary diagrams (A) Pb–Sb, and (B) Cu–Co (excluding light orange samples showing Cu mean contents of 116 064 ppm). out. The Cu–Co binary diagram in Figure 5 (B) is particularly significant for the dark blue slabs. Their composition is comparable to that of blue, dark blue and blue/red glass from Sion (Wolf et al. 2005), for which it has been hypothesized that slabs were recycled as colourants for glassmaking. Cu contents increase from the blackish and dark blue, to light/dark green, and light blue slabs. Blackish slabs Based on MnO contents >1 wt%, a manganese-rich mineral such as pyrolusite was deliberately added to the batch (Jackson 1996, 2005). 
Slightly elevated Ba contents may be due to different network formers and modifiers or suggest the use of hollandite (Ba(Mn4+,Mn2+)8O16), romanechite ((Ba,H2O)2(Mn4+,Mn3+)5O10) or other hydrated Ba–Mn phases (e.g., (Ba,H2O)2Mn5O10), sometimes named ‘psilomelane’ using a general name). XANES results indicate a unique case in the sample set studied here, a well-established coexistence of Fe2+ and Fe3+, while Mn is in the 2+ state. Moreover, the number of Mn–Mn bonds is particularly high, suggesting the presence of Mn-rich regions corresponding to a non fully achieved dispersion of Mn in the glassy matrix. On the basis of literature data (see above), it is reasonable to suppose that the amount of Mn4+ introduced into the glass batch was insufficient for decolorizing (the MnO/Fe2O3 ratio is about 1) but enough to partially oxidize Fe2+ to Fe3+, itself being reduced to Mn2+; the same process could have involved a partial (?) oxidation of S2– to SO2, causing the removal of the amber colour given by the complex Fe3+–S2-. A combination of Fe2+/Fe3+ and ferric iron sulphide complex could be responsible for the dark olive green tint as shown by the fresh-cut surface of these slabs. The presence of Fe2+ and Mn2+ should imply a reducing atmosphere during firing, necessary for the development of the dark olive green colour. A further possibility should imply that Mn4+ is present below the detection limit of XAS analyses (10%), darkening the glass colour. Nevertheless, it is worth taking these reconstructions with caution; in fact, the technology of these slabs is not straightforward to decipher and the oxidation state of sulphur has not been determined. The external layer shows a well-known weathering pattern of archaeological glasses due to the leaching of alkaline ions, which involve the ‘release’ of Fe2+ and Mn2+ ions, subsequently hydrated and oxidized (Pollard and Heron 1996; Freestone 2001; Watkinson et al. 2005). Finally, it is worth discussing deliberate/accidental addition of chromophores into the glass batch. Based on the wt% of each component, the introduction of Sb compounds must be considered deliberate in all coloured slabs except blackish ones. As for Cu, Pb, Co, Zn and Sn, the reference literature states that contents of 0–100 ppm may be ascribed to primary materials, of 100–1000 ppm to recycling, and of over 1000 ppm to their deliberate addition (Jackson 1996; Wedepohl and Baumann 2000; Freestone et al. 2002). Taking into account these ranges, Cu was added to the light/dark green and light blue slabs in order to colour them, whereas its presence in the dark blue and blackish slabs testifies to recycling; except in the blackish slabs, Pb was always added to improve colour and facilitate manufacture (Cable and Smedley 1987); Co may derive from the recycled blue glass/frit introduced as a colouring agent; Sn derives from primary materials in the dark blue and blackish slabs, and from bronze recycling in the light/dark green and light blue ones; Zn derives from primary materials in the dark green, dark blue and blackish slabs, and from recycling in the light green and light blue ones. 
However, the following consid- erations suggest that these schematic groupings should be made with caution: scrap Cu-based metals were normally recycled for use as colorants; Co blue glass results from the deliberate addition of specific colorants; and 1313 ppm of Zn falls in the range of deliberate addiction, whereas it should be considered an accidental contaminant introduced by the recycling of quaternary metal alloys. CONCLUSIONS The results achieved add significantly to the characterization of materials employed in the sectilia found at Faragola, leading to the development of a global interpretation. It is now possible to elaborate both an articulated discussion on production/provenance and a reliable reconstruction of production technology. Provenance In the first part of this study (Santagostino Barbone et al. 2008) we concluded that the Faragola sectilia could have equally been produced in Egypt, as suggested by the archaeological studies, or in other Mediterranean locations, as indicated by the analyses. Hence, data obtained suggested leaving open all possible reconstructions. At present, the integration of all collected results allows us to provide a more articulated framework and to indicate some hypotheses as more reliable than others. Schematically organized, all possible working hypotheses are presented below, listing arguments both in favour and against each of them. The first hypothesis envisions the local production of slabs and tesserae, from the provisioning of raw materials to the creation of panels and decorative wall elements. There are currently no arguments in favour of this hypothesis, but many against it. First, the inappropriateness of locally available Ca-rich sands (personal data). Second, primary production would have entailed the import of fluxes and—most likely—of more suitable vitrifying agents, and the presence of specialized workers of probable foreign extraction at the service of customers involved in making decorative, iconographic and stylistic choices. We cannot exclude the hypothesis that the vitreous sectilia pattern was suggested by ‘models’, as documented for the mosaic floors; in any case, the migration of workers and the diffusion of models are problematic, controversial issues (see Cantino Wataghin 1990). Moreover, the only evidence of Roman age primary workshops has been found in the Syria–Palestine area and Egypt alone (Foy 1998; Nenna 2000; Picon and Vichy 2003), and not in Italy or in the West in general. The second hypothesis regards the import of ‘prefabricated’ panels, i.e., finished products. Arguments in favour of this hypothesis The transport of such artefacts over long distances is archaeologically documented by the finding of some 120 panels at Kenchreai (Corinth) dated to about ad 370 and for which an Egyptian provenance has been hypothesized (Alexandria?; see Ibrahim et al. 1976). Although it is therefore possible that the Faragola sectilia were produced in Egypt, archaeometric analyses showed that their composition is more similar to that of artefacts made from Belus River sands than to that of reference Egyptian groups (‘Egyptian I’, ‘Egyptian II’ and Kenchreai panels). The advantage of importing ‘prefabricated’ panels was that it required neither the local construction of primary and/or secondary workshops nor, therefore, large investments for the construction of furnaces, crucibles, tools, utensils etc., nor the presence of craftsmen highly specialized in the design and creation of sectilia. 
Arguments against this hypothesis In Faragola, the vitreous sectilia (slabs and tesserae) deco- rated not only the floors but also the stibadium and the walls: the base of a quadrangular structure still shows a portion of wall decoration with small vitreous slabs. The panels found in the collapsed layers near the residential staircase were probably meant to decorate the walls; in one case the extremities, though fragmented, document an original design not found in Corinth. This is further proof of the complexity of this decoration, which was obviously not only used in the investigated room. In light of this evidence, it seems unlikely that the decorative framework of the cenatio and other rooms was entirely prefabricated elsewhere, notwithstanding the numerous problems that could arise from the design phase to the completion of works. The third hypothesis is that semi-finished products were imported, refined somewhere and then assembled in situ. Arguments in favour of this hypothesis The start of semi-finished products is documented in the eastern Mediterranean basin from the Bronze Age through to the medieval period, not only by the finding of numerous shipwrecks (see, e.g., Sanderson and Hunter 1981; Foy and Nenna 2001 and references therein; Gratuze and Moretti 2003) but also by the recovery of slabs, ingots and blocks of unworked glass in Rome and numerous other localities (see, e.g., Bacchelli et al. 1995; Mirti et al. 2000, 2001). We can assume that workshops were established in Italy, for instance in Campania, Apulia or Rome, where the demand for these handmade objects was great. In Late Antiquity, these workshops reached unequalled artistry in the construction of sectilia floors and parietes crustatae (see Guidobaldi 1999). There is significant evidence of vitreous sectilia throughout the Imperial Age, including the tens of thousands of glass pieces in Gorga’s collection dating from the first to the middle Imperial age, and the hundreds of blocks ready for fusion and of slag from the productive plants probably located in Rome and its suburbs (see Bacchelli et al. 1995; Amrein 1999). The establishment of secondary workshops did not require significant economic investments. Local refinement would justify the use of breccia (if the local provenance were confirmed) and of other local materials, the unity of the decorative project of the cenatio, and the contemporaneous creation and execution of the project. Likewise, the hundreds of mosaic tesserae used for wall decorations may have been produced locally. During site restoration, a large slab of ‘cipollino’ marble, where geometric designs were traced on the surface (i.e., guidelines for the cutter), has been found. The discovery of this slab, paving the cenatio, and of stone elements roughly hewn in the shape of peltae, listels and capitals, exactly like those of the panels, seems to support the hypothesis of a local workshop treating imported raw materials and semi-finished products. It is worth noting that hypothesizing a contemporary manufacturing of the panel to the renovation of the cenatio should imply a later chronology of the panels themselves from the late fourth century ad to the fifth century ad. There are no arguments against this hypothesis. The current absence of archaeological evidence for workshops does not play in favour of any one particular hypothesis. 
Moreover, it is worth remembering that the compositional heteroge- neity of slabs from Faragola could indicate that the local/non-local secondary workshop received glass from more than one primary source. In conclusion, current knowledge does not allow us to attribute the panels to specific Eastern, local or Western workshops, although only the identification of a primary atelier in Faragola would support the hypothesis of a local production. Technology A schematic summary of the results obtained for the whole examined context—including data provided in Santagostino Barbone et al. (2008)—is provided in Table 9. All slabs were made by mixing siliceous sand. Natron was the principal fluxing agent, with the exception of red, light orange, dark orange and blackish slabs, which seem to testify the prevalent use of plant ash and/or uneven mixing of both fluxes. Data provided no certain indications on the introduction of stabilizers, however the CaO wt% suggests a voluntary addition in red slabs only. Copper and Sb Light green Dark green Light blue Dark blue Blackish Red Light orange Dark orange Yellow Vitrifying Siliceous sand Siliceous sand Siliceous sand Siliceous sand Siliceous sand Siliceous sand Siliceous sand Siliceous sand Siliceous sand SiO2 (wt%) 67.41 68.59 68.26 68.62 66.51 61.81 54.34 45.17 67.00 Al2O3 (wt%) 2.58 2.44 2.33 2.30 2.79 2.09 2.25 1.49 2.54 Fluxes Natron Natron Natron Natron Mainly natron Plant ash Mainly natron Mainly natron Natron Na2O (wt%) 17.51 17.87 18.45 17.05 15.33 13.43 11.95 7.88 15.04 K2O (wt%) 0.67 0.55 0.65 0.56 1.29 2.85 1.70 2.13 0.56 Stabilizers CaO (wt%) 6.38 6.30 6.20 6.89 8.10 11.62 8.65 7.05 6.70 MgO (wt%) 0.64 0.56 0.61 0.68 1.30 2.78 1.52 1.84 0.58 Colorants and Cu2+ Cu2+ Cu2+ Co2+ Fe2+/Fe3+ + Metallic Cu Cu2O Cu2O Pb antimon. opacifiers Pb antimonates Pb antimonates Ca antimonates Ca antimonates Fe3+S2- + Mn? (Pb antimon.) (Pb antimon.) (Pb antimon.) Fe2O3 (wt%) 0.66 0.56 0.60 0.80 1.08 1.24 2.80 1.68 0.77 MnO (wt%) 0.28 0.30 0.20 0.55 1.11 0.33 0.23 0.28 0.20 Cu (ppm) 12 758 10 449 21 733 792 267 17 395 124 370 87 879 143 Pb (ppm) 17 468 6953 2443 1305 445 2279 20 175 19 211 41 296 Sb (ppm) 5159 4270 5815 8867 767 1260 10 158 2255 5184 Co (ppm) 8 5 5 273 16 10 30 – 3 Sn (ppm) 862 526 850 75 29 >1000 >1000 – 389 Zn (ppm) 155 70 259 83 62 143 1313 – 201 The sectilia panels of Faragola (Ascoli Satriano, southern Italy) 411 compounds are the principal colouring and opacifying agents: red slabs are coloured by metallic copper, orange slabs by cuprite, light and dark green and light blue slabs likely by Cu2+. Antimonate compounds were used to opacify the light and dark green, light and dark blue slabs. Yellow Pb antimonates are the sole colouring–opacifying agents in yellow slabs; furthermore, it seems likely that they were mixed with Cu2+ (blue) to produce the green colour of all green slabs. The effect of iron ions in green slabs must also be taken into account. The white Ca antimonates, instead, lightened the light blue and dark blue colour of mixtures containing Cu2+ and Co2+, respectively. In the case of the blackish slabs, we consider it to be possible that the abundance ratio of Fe2+/Fe3+ and the complex Fe3+–S2- would have an effect on the resulting colour. The contribution of Mn cannot be ascertained even if it could have played a role in darkening glass colour. REFERENCES Amrein, H., 1999, Gli scarti di lavorazione, in La Collezione Gorga (ed. M. Barbera), 218–21, Electa, Milano. 
Andreescu-Treadgold, I., and Henderson, J., 2006, Glass from the mosaics on the West Wall of Torcello’s Basilica, Arte Medievale, 2, 87–142. Ankudinov, A., Ravel, B., Rehr, J., and Conradson, S., 1998, Real-space multiple-scattering calculation and interpretation of X-ray-absorption near-edge structure, Physical Review, B58, 7565–76. Arletti, R., Vezzalini, G., Biaggio, S., and Maselli Scotti, F., 2008, Archaeometrical studies of Roman Imperial Age glass from Canton Ticino, Archaeometry, 50, 606–26. Arletti, R., Dalconi, M. C., Quartieri, S., Triscari, M., and Vezzalini, G., 2006, Roman coloured and opaque glass: a chemical and spectroscopic study, Applied Physics A, 83, 239–45. Bacchelli, B., Barbera, M., Pasqualucci, R., and Saguì, L., 1995, Nuove scoperte sulla provenienza dei pannelli in opus sectile vitreo della Collezione Gorga, in Atti del II Colloquio dell’Associazione Italiana per lo Studio e la Conser- vazione del Mosaico (AISCOM), Roma, 5–7 December (eds. I. Brigantini and F. Guidobaldi), 447–66, Istituto di Studi Liguri, Bordighera. Bearden, J. A., and Burr, A. F., 1967, Reevaluation of X-ray atomic energy levels, Reviews of Modern Physics, 39, 125–42. Beerkens, R. G. C., and Kahl, K., 2002, Chemistry of sulphur in soda-lime-silica glass melts, Physics and Chemistry of Glasses, 43, 189–98. Borisenko, A. S., Borovikov, A. A., and Pavlova, G. G., 2001, Ore-forming hydrothermal systems of Sn-Ag ore Provinces, in Book of abstracts of the XVI ECROFI European Current Research on Fluid Inclusions, Porto, 2–6 May 2001. Brill, R. H., 1976, Scientific studies of the panel materials, in The panels of opus sectile in glass. Kenchreai eastern port of Corinth, II (eds. L. Ibrahim, R. Scranton and R. Brill), 227–55, Results of investigations by the University of Chicago and Indiana University for the American School of Classical Studies at Athens, Leiden. Brill, R. H., 1988, Scientific investigations, in Excavations at Jalame: site of a glass factory in late Roman Palestine (ed. G. D. Weinberg), 257–94, University of Missouri, Columbia. Cable, M., and Smedley, J. W., 1987, The replication of an opaque red glass from Nimrud. Early vitreous materials (eds. M. Bimson and I. Freestone), British Museum Occasional Paper, 56, 151–64. Cantino Wataghin, G., 1990, Alto Adriatico e Mediterraneo nella produzione musiva della ‘Venetia et Histria’, in Aquileia e l’arco adriatico, Antichità Altoadriatiche, 36, 269–98., Applied Physics Letters, 71, 2611–13. D’Acapito, F., Mobilio, S., Regnard, J. R., Cattaruzza, E., Gonella, F., and Mazzoldi, P., 1998a, The local atomic order and the valence state of Cu in Cu-implanted soda-lime glasses, Journal of Non-Crystalline Solids, 232–4, 364–9. D’Acapito, F., Colonna, S., Pascarelli, S., Antonioli, G., Balerna, A., Bazzini, A., Boscherini, F., Campolungo, F., Chini, G., Dalba, G., Davoli, G. I., Fornasini, P., Graziola, R., Licheri, G., Meneghini, C., Rocca, F., Sangiorgio, L., Sciarra, V., Tullio, V., and Mobilio, S., 1998b, GILDA (Italian beamline) on BM8, ESRF Newsletters, 30, 42–4. Douglas, R. W., and Zaman, M. S., 1969, The chromophore in iron–sulphur amber glasses, Physics and Chemistry of Glasses, 10, 125–32. Foy, D., 1998, L’accès aux matières premières du verre de l’antiquité au moyen âge en Méditerranée occidentale, in Artisanat et matériaux. La place des matériaux dans l’histoire des techiniques (eds. M. C. Amouretti and G. Comet), 101–25, Université de Provence, Aix-en-Provence. Foy, D., and Nenna, M.-D. (eds.), 2001, Tout feu tout sable. 
Mille ans de verre antique dans le Midi de la France, Edition Edisud, Musées de Marseille.,), 41–85, Editions Monique Mergoil, Montagnac. Freestone, I. C., 2001, Post-depositional changes in archaeological ceramics and glasses, in Handbook of archaeological science (eds. D. R. Brothwell and A. M. Pollard), 615–25, John Wiley & Sons, New York. Freestone, I. C., Bimson, M., and Buckton, D., 1990, Compositional categories of Byzantine glass tesserae, in Annales du 11e Congrès de l’Association Internationale pour l’Histoire du Verre, Bâle, 29 August–3 September 1988, 271–81, AIHV, Amsterdam, The Netherlands. Freestone, I. C., Gorin-Rosen, Y., and Hughes, M. J., 2000, Primary glass from Israel and the production of glass in late Antiquity and the early Islamic period, in La route du verre. Ateliers primaires et secondaires du second millénaire av. J.-C. au Moyen Âge (ed. M.-D. Nenna), 65–83, Travaux de la Maison de l’Orient Méditerranéen, 33, Lyon. Freestone, I. C., Ponting, M., and Hughes, M. J., 2002, Origins of Byzantine glass from Maroni Petrera, Cyprus, Archaeometry, 44, 257–72. Galoisy, L., Calas, G., and Arrio, M. A., 2001, High-resolution XANES spectra of iron in minerals and glasses: structural information from the pre-edge region, Chemical Geology, 174, 307–19. Gonella, F., Caccavale, F., Bogomolova, L. D., d’Acapito, F., and Quaranta, A., 1998, Experimental study of copper– alkali ion exchange in glass, Journal of Applied Physics, 83, 1200–6. Gonella, F., Quaranta, A., Padovani, S., Sada, C., d’Acapito, F., Maurizio, C., Battaglin, G., and Cattaruzza, E., 2005, Copper diffusion in ion-exchanged soda-lime glass, Applied Physics A, 81, 1065–71. Gratuze, B., and Moretti, C., 2003, Lingotti e rottami di vetro destinati alla rifusione rinvenuti nelle navi naufragate in Mediterraneo (III sec. a.C.–III sec. d.C.): analisi chimica dei reperti e recenti ipotesi sull’organizzazione produttiva in vetrerie primarie e secondarie, in Il vetro in Italia meridionale ed insulare, Atti del Secondo Convegno Multidis- ciplinare, Napoli, 5–7 December 2001 (eds. C. Piccioli and F. Sogliani), 401–13, Interservuce Editore, Napoli. Gratuze, B., Soulier, I., Blet, M., and Vallauri, L., 1996, De l’origine du cobalt: du verre a la ceramique, Revue d’Archéométrie, 20, 77–94. Guidobaldi, F., 1999, Le domus tardoantiche di Roma come ‘sensori’ delle trasformazioni culturali e sociali, in The transformations of Urbs Roma in Late Antiquity (ed. W. V. Harris), Journal of Roman Archaeology Supplementum, 33, 52–68. Henderson, J., 1985, The raw materials of early glass production, Oxford Journal of Archaeology, 4, 267–91. Henderson, J., 2003, Localised production or trade? Advances in the study of cobalt blue and Islamic glasses in the Levant and Europe, in Patterns and process: a Festschrift in honor of Dr. Edward V. Sayre (ed. L. van Zelst), 227–45, Smithsonian Institution Press, Washington, DC. Henderson, J., McLoughlin, S. D., and McPhal, D. S., 2004, Radical changes in Islamic glass technology: evidence for conservatism and experimentation with new glass recipes from early and middle Islamic Raqqa, Syria, Archaeometry, 46, 439–68. Ibrahim, L., Scranton, R., and Brill, R. (eds.), 1976, The panels of opus sectile in glass. Kenchreai eastern port of Corinth, II, Results of investigations by the University of Chicago and Indiana University for the American School of Classical Studies at Athens, Leiden. Jackson, C. 
M., 1996, From Roman to early medieval glasses: many happy returns or a new birth, in Annales du 13e Congrès de l’Association Internationale pour l’Histoire du Verre, Pays Bas, 28 August–1 September 1995, 289– 301, AIHV, Amsterdam, The Netherlands. Jackson, C. M., 2005, Making colourless glass in the Roman period, Archaeometry, 47, 763–80. Kaczmarczyk, A., 1986, The source of cobalt in ancient Egyptian pigments, 1986, in Proceedings of the 24th Interna- tional Archaeometry Symposium (eds. J. S. Olin and M. J. Blackman), 369–76, Smithsonian Institution, Washington, DC. Kuroda, Y., Kumashiro, R., Itadani, A., Nagao, M., and Kobayashi, H., 2001, A more efficient copper-ion-exchanged ZSM-5 zeolite for N-2 adsorption at room temperature: ion-exchange in an aqueous solution of Cu(CH3COO)2, Physical Chemistry Chemical Physics, 3, 1383–90. Lee, P. A., Citrin, P. H., Eisenberger, P., and Kincaid, B. M., 1981, Extended X-ray absorption fine structure—its strengths and limitations as a structural tool, Reviews of Modern Physics, 53, 769–806. Mass, J. L., Stone, R. E., and Wypyski, M. T., 1998, The mineralogical and metallurgical origins of roman opaque colored glasses, in The prehistory and history of glassmaking technology (eds. P. McCray and W. Kingery), 121–44, Ceramics and Civilisation VIII, The American Ceramics Society, Westerville, OH. Maurizio, C., d’Acapito, F., Benfatto, M., Mobilio, S., Cattaruzza, E., and Gonella, F., 2000, Local coordination geometry around Cu+ and Cu2+ ions in silicate glasses: an X-ray absorption near edge structure investigation, European Physical Journal, B14, 211–16. Mirti, P., Davit, P., and Gulmini, M., 2002, Colourants and opacifiers in seventh and eighth century glass investigated by spectroscopic techniques, Analytical and Bioanalytical Chemistry, 372, 221–9. Mirti, P., Lepora, A., and Saguì, L., 2000, Scientific analysis of seventh-century glass fragments from the Crypta Balbi in Rome, Archaeometry, 42, 359–74. Mirti, P., Davit, P., Gulmini, M., and Saguì, L., 2001, Glass fragments from Crypta Balbi in Rome: the composition of eighth-century fragments, Archaeometry, 43, 491–502. Nakai, I., Numako, C., Hosono, H., and Yamasaki, K., 1999, Origin of the red color of satsuma copper-ruby glass as determined by EXAFS and optical absorption spectroscopy, Journal of the American Ceramic Society, 82, 689–784. Nenna, M.-D. (ed.), 2000, La route du verre. Ateliers primaires et secondaires du second millénaire av. J.-C. au Moyen Âge, Travaux de la Maison de l’Orient Méditerranéen, 33, Lyon. Noll, W., 1981, Mineralogy and technology of the painted ceramics of ancient Egypt, in Scientific studies in ancient ceramics (ed. M. Hughes), British Museum Occasional Paper, 19, 143–54. Padovani, S., Puzzovio, D., Sada, C., Mazzoldi, P., Borgia, I., Cartechini, L., Sgamellotti, A., Brunetti, B.G., Shokoui F., Oliaiy, P., Rahighi, J., Lamehi-Rachti, M., D’Acapito, F., Maurizio, C., and Pantos, E., 2006, XAFS study of copper and silver nanoparticles in glazes of medieval middle-east lustreware (10th–13th century), Applied Physics A, 83, 521–8. Pascarelli, S., Boscherini, F., D’Acapito, F., Hardy, J., Meneghini, C., and Mobilio, S., 1996, X-ray optics of a dynamical sagittal focusing monochromator on the GILDA beamline at the ESRF, Journal of Synchrotron Radiation, 3, 147–55. Picon, M., and Vichy, M., 2003, D’orient en Occident: l’origine du verre à l’époque romaine et durant le haut Moyen Age,), 17–31, Éditions Monique Mergoil, Montagnac. Pollard, A. 
M., and Heron, C., 1996, Archaeological chemistry, Royal Society of Chemistry, London. Quartieri, S., Riccardi, M. P., Messiga, B., and Boscherini, F., 2005, The ancient glass production of the Medieval Val Gargassa glasshouse: Fe and Mn XANES study, Journal of Non-Crystalline Solids, 351, 3013–22 Ravel, B., and Newville, M., 2005, ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectros- copy using IFEFFIT, Journal of Synchrotron Radiation, 12, 537–41. Rehren, Th., 2001, Aspects of the production of cobalt-blue glass in Egypt, Archaeometry, 43, 483–9. Roe, M., Plant, S., Henderson, J., Andreescu-Treadgold, I., and Brown, P. D., 2006, Characterisation of archaeological glass mosaics by electron microscopy and X-ray microanalysis, Journal of Physics (Conference Series), 26, 351–4. Salviulo, G., Silvestri, A., Molin, G., and Bertoncello, R., 2004, An archaeometric study of the bulk and surface weathering characteristics of Early Medieval (5th–7th century) glass from the Po valley, northern Italy, Journal of Archaeological Science, 31, 295–306. Sanderson, D. C. W., and Hunter, J. R., 1981, Major element glass type specification for roman, post-roman and mediaeval glasses, Revue d’Archéométrie, Supplement, 3, 255–64. Santagostino Barbone, A., Gliozzo, E., D’Acapito, F., Turbanti Memmi, I., Turchiano, M., and Volpe, G., 2008, The sectilia panels of Faragola (Ascoli Satriano, southern Italy): a multi-analytical study of the green, marbled, blue and blackish glass slabs, Archaeometry, 50, 451–73. Schreurs, J. W. H., and Brill, R. H., 1984, Iron and sulphur related colours in ancient glasses, Archaeometry, 26, 199–209. Shortland, A. J., 2002, The use and origin of antimonate colorants in early Egyptian glass, Archaeometry, 44, 517–30. Shortland, A. J., Tite, M. S., and Ewart, I., 2006, Ancient exploitation and use of cobalt alums from the western oases of Egypt, Archaeometry, 48, 153–68. Silvestri, A., Molin, G., and Salviulo, G., 2005, Roman and medieval glass from the Italian area: bulk characterization and relationships with production technologies, Archaeometry, 47, 797–816. Tite, M. S., and Shortland, A. J., 2003, Production technology for copper and cobalt-blue vitreous materials from the new Kingdom site of Amarna-a repraissal, Archaeometry, 45, 285–312. Turchiano, M., 2008, I pannelli in opus sectile di Faragola (Ascoli Satriano) tra archeologia e archeometria, in Atti del XIII Colloquio AISCOM, Canosa di Puglia, 21–24 February 2007 (eds. C. Angelelli and F. Rinaldi), 59–70, Scripta Manent Edizioni, Tivoli. Uboldi, M., and Verità, M., 2003, Scientific analyses of glasses from late antique and early medieval archaeological sites in Northern Italy, Journal of Glass Studies, 45, 115–37. Warachim, H., Rzechula, J., and Pielak, A., 1985, Magnesium–cobalt (II)–aluminium spinels for pigments, Ceramics International, 11, 103–6. Watkinson, D., Weber, L., and Anheuser, K., 2005, Staining of archaeological glass from manganese-rich environments, Archaeometry, 47, 69–82. Wedepohl, K. H., and Baumann, A., 2000, The use of marine molluskan shells for Roman glass and local raw glass production in the Eifel area (Western Germany), Naturwissenschaften, 87, 129–32. Wolf, S., Kessler, C. M., Stern, W. B., and Gerber, Y., 2005, The composition and manufacture of early medieval coloured window glass from Sion (Valais, Switzerland)—a roman glass-making tradition or innovative craftsmanship?, Archaeometry, 47, 361–80.
https://pt.scribd.com/document/50264945/Gliozzo-E-Et-Al-Sectilia-Panels-of-Faragolia-2010
CC-MAIN-2019-39
refinedweb
11,805
61.36
Availability Attributes in Swift With every release of iOS, Apple introduces new frameworks and technologies and gets rid of others. These changes are always exciting to users — but they can be a real headache for developers. Availability attributes in Swift help give developers relief from these headaches. Although most users adopt new versions of iOS quite quickly, it is still important to make sure your apps work on previous versions. Apple recommends supporting one system version back, meaning that when iOS 10 is released this fall, you should still support iOS 9.3. But what if you want to add a feature that is only available in the newest version, or you need to use a deprecated API on older versions? That is where Swift availability attributes come in. These attributes make it easy for you to ensure your code works on every system version you support. The best way to understand availability is to get your hands dirty with some code. Let’s dive in! Note: You will need Xcode 8 or above to work through this tutorial. Getting Started Download the starter project and open Persona.xcodeproj. Persona is an app that shows random famous people from history, from ancient emperors to musical artists. Persona is built using the Contacts framework, which was first introduced in iOS 9. This means that Persona only works on iOS 9 or greater. Your task is to add support for iOS 8.4. Talk about #throwbackthursday! :] First, examine the project. In the Project navigator, there are two groups, vCards and Images. For each person in the app, there is a .vcf and .jpg image to match. Open PersonPopulator.swift. PersonPopulator has a class method generateContactInfo(), which chooses a random person and returns the contact and image data. Next, open the Persona group and move to ViewController.swift. Each time the user taps the “Random” button, getNewData() is called and repopulates the data with the new person. Supporting iOS 8.4 with Availability Attributes This app currently supports only iOS 9.3 and above. However, supporting iOS 8.4, the most recent version of iOS 8, would be ideal. Setting the oldest compatible version of iOS is simple. Go to the General pane in Persona’s iOS target settings. Find the Deployment Info section and set Deployment Target to 8.4: After making this change, build the project; it looks like things are broken already. Xcode shows two errors in PersonPopulator: In order to fix this error, you need to restrict generateContactInfo() to certain iOS versions — specifically, iOS 9 and greater. Adding Attributes Open PersonPopulator.swift and add the following attribute right above generateContactInfo(): @available(iOS 9.0, *) This attribute specifies that generateContactInfo() is only available in iOS 9 and greater. Checking the Current Version Now that you’ve made this change, build the project and notice the new error in ViewController.swift. The new error states that generateContactInfo() is only available on iOS 9.0 or newer, which makes sense because you just specified this condition. To fix this error, you need to tell the Swift compiler that this method will only be called in iOS 9 and above. You do this using availability conditions. 
Open ViewController.swift and replace the contents of getNewData() with the following: if #available(iOS 9.0, *) { print("iOS 9.0 and greater") let (contact, imageData) = PersonPopulator.generateContactInfo() profileImageView.image = UIImage(data: imageData) titleLabel.text = contact.jobTitle nameLabel.text = "\(contact.givenName) \(contact.familyName)" } else { print("iOS 8.4") } #available(iOS 9.0, *) is the availability condition evaluated at compile time to ensure the code that follows can run on this iOS version. The else block is where you must write fallback code to run on older versions. In this case, the else block will execute when the device is running iOS 8.4. Build and run this code on the iPhone simulator running iOS 9.0 or greater. Each time you click “Random”, you’ll see iOS 9.0 and greater printed to the console: Adding Fallback Code The Contacts framework introduced in iOS 9 replaced the older Address Book framework. This means that for iOS 8.4, you need to fall back to the Address Book to handle the contact information. Open PersonPopulator.swift and add the following line to the top of the file: import AddressBook Next, add the following method to PersonPopulator: class func generateRecordInfo() -> (record: ABRecord, imageData: Data) { let randomName = names[Int(arc4random_uniform(UInt32(names.count)))] guard let path = Bundle.main.path(forResource: randomName, ofType: "vcf") else { fatalError() } guard let data = try? Data(contentsOf: URL(fileURLWithPath: path)) as CFData else { fatalError() } let person = ABPersonCreate().takeRetainedValue() let people = ABPersonCreatePeopleInSourceWithVCardRepresentation(person, data).takeRetainedValue() let record = NSArray(array: people)[0] as ABRecord guard let imagePath = Bundle.main.path(forResource: randomName, ofType: "jpg"), let imageData = try? Data(contentsOf: URL(fileURLWithPath: imagePath)) else { fatalError() } return (record, imageData) } This code does the same thing as generateContactInfo(), but using the Address Book instead. As a result, it returns an ABRecord instead of a CNContact. Because the Address Book was deprecated in iOS 9, you need to mark this method as deprecated as well. Add the following attribute directly above generateRecordInfo(): @available(iOS, deprecated:9.0, message:"Use generateContactInfo()") This attribute lets the compiler know that this code is deprecated in iOS 9.0, and provides a message to warn you or another developer if you try to use the method in iOS 9 or greater. Now it’s time to use this method. Open ViewController.swift and add the following import statement to the top of the file: import AddressBook Also, add the following to the else block in getNewData(): print("iOS 8.4") let (record, imageData) = PersonPopulator.generateRecordInfo() let firstName = ABRecordCopyValue(record, kABPersonFirstNameProperty).takeRetainedValue() as! String let lastName = ABRecordCopyValue(record, kABPersonLastNameProperty).takeRetainedValue() as! String profileImageView.image = UIImage(data: imageData) titleLabel.text = ABRecordCopyValue(record, kABPersonJobTitleProperty).takeRetainedValue() as? String nameLabel.text = "\(firstName) \(lastName)" This code gets the random record info and sets the labels and the image the same way you have it in generateContactInfo(). The only difference is instead of accessing a CNContact, you access an ABRecord. Build and run the app on the simulator for iOS 9 or above, and everything will work as it did before. 
You will also notice that your app prints iOS 9.0 and greater to the console: However, the goal of everything you have done so far is to make Persona work on iOS 8.4. To make sure that this all worked, you need to try it out in the iOS 8.4 Simulator. Go to Xcode/Preferences/Components, and download the iOS 8.4 Simulator. When the simulator is finished downloading, select the iPhone 5 iOS 8.4 Simulator and click Run. Persona runs the exact same way that it used to, but now it’s using the Address Book API. You can verify this in the console which says iOS 8.4, which is from your code in the else block of the availability conditional. Availability for Cross-Platform Development As if availability attributes weren’t cool enough already, what if I told you that they could make it much easier to reuse your code on multiple platforms? Availability attributes let you specify the platforms you want to support along with the versions of those platforms you want to use. To demonstrate this, you’re going to port Persona to macOS. First things first: you have to set up this new macOS target. Select File/New/Target, and under macOS choose Application/Cocoa Application. Set the Product Name to Persona-macOS, make sure Language is set to Swift and Use Storyboards is selected. Click Finish. Just like you added support for iOS 8.4, you need to support older versions of macOS as well. Select the macOS target and change the Deployment Target to 10.10. Next, delete AppDelegate.swift, ViewController.swift, and Main.storyboard in the macOS target. In order to avoid some boilerplate work, download these replacement files and drag them into the project. Note: When adding these files to the Xcode project, make sure the files are added to the Persona-macOS target, not the iOS target. If Xcode asks if you want to set up an Objective C Bridging Header, click Don’t create. So far, this target has the image view and two labels set up similar to the Persona iOS app, with a button to get a new random person. One problem right now is that the images and vCards only belong to the iOS target — which means that your macOS target does not have access to them. This can be fixed easily. In the Project navigator, select all the files in the Images and vCards folder. Open the File Inspector in the Utilities menu, and under Target Membership, check the box next to Persona-macOS: You need to repeat this step again for PersonPopulator.swift, since your macOS app needs this file as well. Now that you’ve completed the setup, you can start digging into the code. Multi-Platform Attributes Open PersonPopulator.swift. You may notice that the attributes all specify iOS, but there’s nothing about macOS — yet. iOS 9.0 was released alongside with OS X version 10.11, which means that the new Contacts framework was also introduced on OS X in 10.11. In PersonPopulator above generateContactInfo(), replace @available(iOS 9.0, *) with the following: @available(iOS 9.0, OSX 10.11, *) This specifies that generateContactInfo() is first available on OS X 10.11, to match the introduction of the Contacts framework. Note: Because OS X was renamed macOS in macOS 10.12 Sierra, Swift recently added macOS as an alias for OSX. As a result, both OSX and macOS can be used interchangeably. Next, you need to change the availability of generateRecordInfo() so it also works on macOS. In the previous change, you combined iOS and OS X in a single attribute. 
However, that can only be done in attributes using that shorthand syntax; for any other @available attribute, you need to add multiple attributes for different platforms. Directly after the deprecation attribute, add the following: @available(OSX, deprecated:10.11, message:"Use generateContactInfo()") This is the same thing as the line above it, but specifies for OS X instead of iOS. Switch to the Persona-macOS target scheme, select My Mac as the build device, and build the project. There is one error in generateRecordInfo(), at the following code block: let person = ABPersonCreate().takeRetainedValue() let people = ABPersonCreatePeopleInSourceWithVCardRepresentation(person, data).takeRetainedValue() let record = NSArray(array: people)[0] The Contacts framework is a little different between iOS and macOS, which is why this error popped up. To fix this, you want to execute different code on iOS and macOS. This can be done using a preprocessor command. Replace the previous code with the following: #if os(iOS) let person = ABPersonCreate().takeRetainedValue() let people = ABPersonCreatePeopleInSourceWithVCardRepresentation(person, data).takeRetainedValue() let record = NSArray(array: people)[0] as ABRecord #elseif os(OSX) let person = ABPersonCreateWithVCardRepresentation(data).takeRetainedValue() as AnyObject guard let record = person as? ABRecord else { fatalError() } #else fatalError() #endif This makes the code work the same way on both platforms. Linking Up the UI Now that you finished updating PersonPopulator, setting up ViewController will be a breeze. Open Persona-macOS’s ViewController.swift and add the following line to awakeFromNib(): getNewData(nil) Next, add the following to getNewData(_:): let firstName: String, lastName: String, title: String, profileImage: NSImage if #available(OSX 10.11, *) { let (contact, imageData) = PersonPopulator.generateContactInfo() firstName = contact.givenName lastName = contact.familyName title = contact.jobTitle profileImage = NSImage(data: imageData)! } else { let (record, imageData) = PersonPopulator.generateRecordInfo() firstName = record.value(forProperty: kABFirstNameProperty) as! String lastName = record.value(forProperty: kABLastNameProperty) as! String title = record.value(forProperty: kABTitleProperty) as! String profileImage = NSImage(data: imageData)! } profileImageView.image = profileImage titleField.stringValue = title nameField.stringValue = "\(firstName) \(lastName)" Other than some small differences between iOS and macOS APIs, this code looks very familiar. Now it’s time to test the macOS app. Change the target to Persona-macOS and select My Mac as the build device. Run the app to make sure it works properly. Newton seems impressed! By making just those small changes to PersonPopulator, you were able to easily port your iOS app to another platform. More Info About @available Availability attributes can be a little confusing to format and to use. This section should help clear up any questions you may have about them. These attributes may be placed directly above any declaration in your code, other than a stored variable. 
This means that all of the following can be preceded by an attribute: - Classes - Structs - Enums - Enum cases - Methods - Functions To indicate the first version of an operating system that a declaration is available, use the following code: @available(iOS, introduced: 9.0) The shorthand, and preferred syntax, for marking the first version available is shown below: @available(iOS 9.0, *) This shorthand syntax allows you to include multiple “introduced” attributes in a single attribute: @available(iOS, introduced: 9.0) @available(OSX, introduced: 10.11) // is replaced by @available(iOS 9.0, OSX 10.11, *) Other attributes specify that a certain declaration no longer works: @available(watchOS, unavailable) @available(watchOS, deprecated: 3.0) @available(watchOS, obsoleted: 3.0) These arguments act in similar ways. unavailable signifies that the declaration is not available on any version of the specified platform, while deprecated and obsoleted mean that the declaration is only relevant on older platforms. These arguments also let you provide a message to show when the wrong declaration is used, as you used before with the following line: @available(OSX, deprecated:10.11, message: "Use generateContactInfo()") You can also combine a renamed argument with an unavailable argument that helps Xcode provide autocomplete support when used incorrectly. @available(iOS, unavailable, renamed: "NewName") Finally, the following is a list of the platforms you can specify availability for: - iOS - OSX - tvOS - watchOS - iOSApplicationExtension - OSXApplicationExtension - tvOSApplicationExtension - watchOSApplicationExtension The platforms that end with ApplicationExtension are extensions like custom keyboards, Notification Center widgets, and document providers. Note: The asterisk in the shorthand syntax tells the compiler that the declaration is available on the minimum deployment target on any other platform. For example, @available(iOS 9.0, *) states that the declaration is available on iOS 9.0 or greater, as well as on the deployment target of any other platform you support in the project. On the other hand, @available(*, unavailable) states that the declaration is unavailable on every platform supported in your project. Where to Go From Here? Here is the final project to compare with your own. Availability Attributes make the task of supporting various platforms and versions in your applications extremely easy. Designed directly into the Swift language, they work with the compiler and with you to streamline the process of adding cross-platform and version compatibility to your project. If you have any questions or comments about how I’ve used Availability Attributes in this tutorial, let me know in the comments below!
https://www.raywenderlich.com/139077/availability-attributes-swift
CC-MAIN-2017-47
refinedweb
2,496
50.63
Trying out some OOP in python, I tried to create a Monty Hall Problem simulation that is giving odd results. I implement three different strategies that a player can choose from, either to stay with the first door selected, switch to the second closed door, or randomly choose between them.

import random

class Door():
    behind = None
    is_open = False
    is_chosen = False

    def __init__(self,name=None):
        self.name = name

    def open(self):
        self.is_open = True

    def choose(self):
        self.is_chosen = True

class Goat():
    is_a = 'goat'

class Car():
    is_a = 'car'

class Player():
    door = None

    def choose(self,door):
        self.door = door
        self.door.choose()

    def open(self):
        self.door.open()
        if self.door.behind.is_a == 'car':
            return True
        return False

def play(strategy):
    player = Player()
    items = [Goat(),Goat(),Car()]
    doors = [Door(name='a'),Door(name='b'),Door(name='c')]
    for door in doors:
        item = items.pop()
        door.behind = item
    random.shuffle(doors)
    player.choose(random.choice(doors))
    if strategy == 'random':
        if random.choice([True,False]):
            for door in doors:
                if not door.is_open and not door.is_chosen:
                    final = door
                    break
        else:
            final = player.door
    elif strategy == 'switch':
        for door in doors:
            if not door.is_open and not door.is_chosen:
                final = door
                break
    elif strategy == 'stay':
        final = player.door
    player.choose(final)
    if player.open():
        return True
    else:
        return False

## Play some games
for strategy in ['random','switch','stay']:
    results = []
    for game in range(0,10000):
        if play(strategy):
            results.append(True)
        else:
            results.append(False)
    ## Gather the results
    wins = 0
    loses = 0
    for game in results:
        if game:
            wins += 1
        else:
            loses += 1
    print 'results:\tstrategy={}\twins={}\tloses={}'.format(strategy,str(wins),str(loses))

results: strategy=random wins=3369 loses=6631
results: strategy=switch wins=3369 loses=6631
results: strategy=stay wins=3320 loses=6680
https://codedump.io/share/ExsHzjek5gkc/1/python-oop-monty-hall-not-giving-the-expected-results
CC-MAIN-2017-13
refinedweb
515
52.97
Galry's Story, or the quest of multi-million plots. After my announcement, I was pleased to see that there were a lot of people interested in this project. There were more than 500 unique visits since then, which is not that impressive but still much more than what I'd have thought! That's probably because I wasn't the only one to note that it was simply not possible to plot huge datasets in Python. Matplotlib, probably the most popular plotting library in Python, crashes before displaying a multi-million dataset (at least that's what I could experience on my machine), or when it works, the navigation is severly limited by an extremely low framerate. All other plotting libraries I could find had the same issue. The Matlab plotting library appears to be a bit more efficient than matplotlib when it comes to multi-millions datasets, and it may be one of the reasons why many people still prefer to use Matlab rather than Python. I think many people are doing just fine with matplotlib because they simply don't work with very large datasets. But that may be going to change, with "big data" becoming a more and more popular buzz word. In bioinformatics, the mass of data becoming available is simply crazy. There's the whole field of bioimaging of course, but even apparently harmless time-dependent signals can become quite large. Imagine, for example, a neurophysiological recording with an extracellular multi-electrode array with 250 channels, each channel sampling a signal at 16 bits and 20 kHz (this is close to a real example). That's 10 MB of data per second (5 million points), more than 30 GB per hour (18 billion points) ! A modern hard drive can store that, but processing such a big file is simply not straightforward: it even doesn't fit in system memory (at least on most today's computers), and even less in graphics memory. Yet, is it too much to ask to just plot these data? The typical way of processing this is to take chunks of data, either in space or in time. But when it comes to visualization, it's hardly possible to plot even a single second across all channels, since that's already 5 million points! One could argue that a modern screen does not contain much more than 2 million pixels, and about 2000 only horizontally. But the whole point of interactive navigation (zooming and panning) is to be able to plot the whole signal at first, and zoom-in in real time on regions of interest. I could not find any Python library that would allow me to do that. Outside Python, I am not aware of such a software either. That's precisely why I decided to try a new approach, which is to use the graphics card for the whole rendering process in the most efficient possible way. I realized that the only way I could achieve the highest performance possible on a given piece of hardware was to go as low-level as I could with Python. Using a great and light Python wrapper around OpenGL (not unexpectingly called PyOpenGL) seemed like a natural choice. Initial proof-of-concept experiments with PyOpenGL suggested that it appeared to be like a viable method. That's how Galry was born earlier this year. Here come shaders The library has evolved a lot since then. I had to go through multiple improvements and refactoring sessions as I was using Galry for my research project. In addition, I also had to learn OpenGL in parallel. That was not an excellent idea, since I realized several times that I was doing it wrong. 
In particular, I was using at first a totally obsolete way of rendering points, which was to use the fixed function pipeline. When I discovered that the modern way of using OpenGL was to use customizable shaders , I had to go through a consequent rewriting of the whole rendering engine. I could have spared me this rewriting if I was aware of that point beforehand. But it was in the end a very good decision, since programmable shaders are just infinitely more powerful than the fixed function pipeline, and make a whole new bunch of things possible with the package. Not only was I able to considerably improve the rendering part in my research project, but I realized that the same code could be used to do much more than just plotting. Here are a few examples of what I was able to do with the new "shader-aware" interface: GPU-based image filtering, GPU-based particle system, GPU-based fractal viewer, 3D rendering, dynamic graph rendering (CPU-based for now), etc. These are all actual working examples in the Galry repository. I suppose this package could also be used to write small video games! The following video shows a demo of the graph example. This example incorporates many of the rendering techniques currently implemented in Galry: point sprites (a single texture attached to many points), lines, buffer references (the nodes and edges are rendered using the exact same memory buffer on the GPU, which contains the node positions), indexed rendering (the edges are rendered using indices targetting the corresponding nodes, always stored in the same buffer), real-time buffer updating (the positions are updated on the CPU and transferred on the GPU at every frame). GPU-based rendering may also be possible but it's not straightforward, since the shaders need to access the other shaders' information, and also modify dynamically the position. I might investigate this some time. Another solution is to use OpenCL, but it requires to have an OpenCL installation (it can work even if an OpenCL-capable GPU is not available, in which case the OpenCL kernels are executed on the CPU). About OpenGL Another thing I discovered a bit late was that OpenGL is a fragmented library: there are multiple versions, a lot of different extensions, some being specific to a hardware vendor, and a lot of deprecated features. There's also a specific version of OpenGL for mobile platforms (such as the IPhone and the IPad), called OpenGL ES, which is based on OpenGL but which is still different. In particular, a lot of deprecated features in OpenGL are simply unavailable in OpenGL ES. Also, the shading language (GLSL) is not the same between OpenGL and OpenGL ES. There's a loose correspondence between the two but the version numbers do not match at all. And, by the way, the GLSL language version does not match the corresponding OpenGL version... except for later versions! Really confusing. The OpenGL ES story is important for Galry, because apparently OpenGL ES is sometimes used in VirtualBox for hardware-acceleration, and it might also be useful in the future for a potential mobile version of Galry. In addition, OpenGL ES also forms the basis of WebGL, enabling access to OpenGL in the browser. I'll talk about that below, but the point is that in order to have compatibility between multiple versions of OpenGL, I had to redesign again an important part of the rendering process (by using a small template system for dynamic shader code generation depending on the GLSL version). 
Also, whereas the shading language is quite nice and easy to use, I find the host OpenGL API unintuitive and sometimes obscure. The Galry programming interface is right there to hide those details to the developer. In brief, I find certain aspects of OpenGL a bit messy, but the advantages and the power of the library are definitely worth it. About writing multi-platform software Python is great for multi-platform software. Choosing Python for a new project means that one has the best chance of having a single code base for all operating systems out there. In theory, that's the same story for OpenGL, since it's a widely used open standard. In practice, it's much more difficult due to the fragmentation of the OpenGL versions and drivers across different systems and graphics card manufacturers. Writing a multi-platform system means that all supported systems need to be tested, and that's not particularly easy to do in practice: there are a large number of combinations of systems (Windows, different Linux distributions, Mac OSX, either 32 bits and 64 bits), of graphics card drivers, versions of Python/PyOpenGL/Qt, etc. High-level interface In the current experimental version of Galry, the low-level API is the only interface I've been working on, since it's really what I need for my project. However, I do plan to write a basic matplotlib-like high-level interface in the near future. At some point, I even considered integrating Galry's code into a matplotlib GL backend, which is apparently something that several people have been trying to do for quite some time. However, as far as I understand, this is very much non-trivial due to the internal architecture of matplotlib. The backend handles the rendering process and is asked to redraw everything at each frame during interactive navigation. However, high performance is achieved in Galry by loading data at initialization time only, and updating the transformation at every frame so that the GPU can apply it on the data. The backend does not appear to have access to that transformation, so I can't see how an efficient GL renderer could be written in the current architecture. But I'm pretty sure that somebody will manage to make that happen eventually. In the meantime, I will probably write a high-level interface from scratch, without using matplotlib at all. The goal is to replace import matplotlib.pyplot as plt by something like import galry.plot as plt at the top of a script to use Galry instead of matplotlib. At first, I'll probably only implement the most common functions such as plot, imshow, etc. That would already be very useful. Galry in the browser Fernando Perez, the author of IPython, suggested to integrate Galry in the IPython notebook. The notebook is a relatively new feature that allows to write (I)Python code in cells within an HTML page, and output the result below. That's quite similar to what Mathematica or Maple offer. The whole interactive session can then be saved as a JSON file. It brings reproducibility and coherent structure in interactive computing. Headers, text, and even static images with matplotlib can be integrated in a notebook. Blog posts, courses, even technical books are being written with this. I personally heard about the notebook some time ago, but I'd never tried it because I was a bit reluctant to use Python in a browser instead of a console. After Fernando's suggestion, I tried to use the notebook and I quickly understood why so many people are very enthusiastic about it. 
It's because it changes the very way we do exploratory research with numerical experiments. In a classical workflow, one would use a Python script to write some computational process, and use the interactive console to execute it, explore the model in the parameter space, etc. It works, but it can be terrible for reproducibility: there's no way one can recover the exact set of parameters and code that corresponds to figure test34_old_newnewbis.png. Many people are dealing with this problem, me included. I'm quite ashamed by the file structure of most of my past research projects' code, and I'll try to use the notebook in the future to try being more organized than before. The idea of integrating Galry in the notebook comes from the work that has been done during a Python conference earlier this month, with the integration of a 3D molecular visualization toolkit in the notebook using WebGL. WebGL is a standard specification derived from OpenGL that aims at bringing OpenGL rendering to the browser, through the HTML5 <canvas> element. It is still an ongoing project that may still take months or years to complete. Currently, it is only supported by the latest versions of modern browsers such as Chrome or Firefox (no IE of course). But it's an exciting technology that has a huge potential. So I thought it would be quite a good idea and I gave it a try: I managed to implement a proof-of-concept in only one day, by looking at the code that had been done during the conference. Then, I was able to see what would need to be done in the code's architecture to make that integration smooth and flexible. The solution I chose was to separate completely the scene definition (what to plot exactly, with all parameters, data, colors, shading code, etc.) with the actual GL rendering code. The scene can be serialized in JSON and transmitted to a Javascript/WebGL renderer. I had to rewrite a portion of the Python renderer in Javascript, which turned out to be less painful than what I thought. Finally, the WebGL renderer can in fact be used as a completely standalone Javascript library, without any reference to Python. This may allow interesting scenarios, such as the conversion of a plotting script in Python using a matplotlib-like high-level interface, into standalone HTML/Javascript code that enables interactive visualization of that plot in the browser. About performance The major objective of Galry is, by far, performance. I found that PyOpenGL can be very fast at the important condition of using it correctly. In particular, data transfer from system memory to graphics memory should be made only when necessary. Also, the number of calls to OpenGL commands should be minimal in the rendering phase. The first point means that data should be uploaded on the GPU at initialization time, and should stay on the GPU as long as possible. When zooming in, the GPU should handle the whole transformation on the same memory buffer. This ensures that the GPU is used optimally. In Matplotlib, as far as I know, everything is rendered again at each frame, which explains why the performance is not very good. And the CPU does the rendering in this case, not the GPU. The second point is also crucial. When plotting a large number of individual points, or a single curve, it is possible to call a single OpenGL rendering command, so that the Python overhead is negligible compared to the actual GPU rendering phase. 
But when it comes to a plot containing a large number of individual curves, using a Python loop is highly inefficient, especially when every call renders a small curve. The best solution I could find is to use glMultiDrawArrays or glMultiDrawElements, which render several primitives with a single memory buffer and a single command. Even if this function is implemented internally as a loop by the driver, it will still be much faster than a Python loop, since there isn't the cost of interpretation. With this technique, I am able to plot 100 curves with 1 million points each at ~15 FPS with a recent graphics card. That's 1.5 billion points per second! Such performance is directly related to the incredible power of modern GPUs, which is literally mind blowing. Yet, I think there's still some room for improvement by using dynamic undersampling techniques. But that is for the future...
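To make the data layout behind glMultiDrawArrays concrete, here is a small NumPy sketch of the packing step (illustrative code written for this post, not Galry's actual implementation); the GL call itself is only indicated in a comment, since setting up an OpenGL context is beyond the scope of this discussion.

import numpy as np

def pack_curves(curves):
    # Concatenate N curves into a single vertex array, and build the
    # 'first' and 'count' arrays that glMultiDrawArrays expects.
    counts = np.array([len(c) for c in curves], dtype=np.int32)
    firsts = np.cumsum(np.hstack(([0], counts[:-1]))).astype(np.int32)
    vertices = np.vstack(curves).astype(np.float32)  # shape: (total_points, 2)
    return vertices, firsts, counts

# Toy example: 100 curves of 1,000 points each.
x = np.linspace(-1., 1., 1000)
curves = [np.c_[x, 0.01 * i + 0.005 * np.random.randn(1000)] for i in range(100)]
vertices, firsts, counts = pack_curves(curves)

# After uploading `vertices` once to a vertex buffer, a single call per frame,
#   glMultiDrawArrays(GL_LINE_STRIP, firsts, counts, len(counts)),
# renders every curve without a Python-level loop.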
http://cyrille.rossant.net/galrys-story-or-the-quest-of-multi-million-plots/
CC-MAIN-2016-07
refinedweb
2,571
59.03
package com.oracle.webservices.internal.api.message;

//TODO Do we want to remove this implementation dependency?
import com.sun.xml.internal.ws.encoding.ContentTypeImpl;

/**
 * A Content-Type transport header that will be returned by {@link MessageContext#write(java.io.OutputStream)}.
 * It will provide the Content-Type header and also take care of SOAP 1.1 SOAPAction header.
 *
 * @author Vivek Pandey
 */
public interface ContentType {

    /**
     * Gives non-null Content-Type header value.
     */
    public String getContentType();

    /**
     * Gives SOAPAction transport header value. It will be non-null only for SOAP 1.1 messages. In other cases
     * it MUST be null. The SOAPAction transport header should be written out only when its non-null.
     *
     * @return It can be null, in that case SOAPAction header should be written.
     */
    public String getSOAPActionHeader();

    /**
     * Controls the Accept transport header, if the transport supports it.
     * Returning null means the transport need not add any new header.
     *
     * <p>
     * We realize that this is not an elegant abstraction, but
     * this would do for now. If another person comes and asks for
     * a similar functionality, we'll define a real abstraction.
     */
    public String getAcceptHeader();

    static public class Builder {
        private String contentType;
        private String soapAction;
        private String accept;
        private String charset;

        public Builder contentType(String s) {contentType = s; return this; }
        public Builder soapAction (String s) {soapAction = s; return this; }
        public Builder accept (String s) {accept = s; return this; }
        public Builder charset (String s) {charset = s; return this; }
        public ContentType build() {
            //TODO Do we want to remove this implementation dependency?
            return new ContentTypeImpl(contentType, soapAction, accept, charset);
        }
    }
}
http://checkstyle.sourceforge.net/reports/javadoc/openjdk8/xref/openjdk/jaxws/src/share/jaxws_classes/com/oracle/webservices/internal/api/message/ContentType.html
CC-MAIN-2018-13
refinedweb
301
54.93
This article will take you through just a few of the methods that we have to describe our dataset. Let's get started by firing up a season-long dataset of referees and their cards given in each game last season.

import numpy as np
import pandas as pd

df = pd.read_csv("../data/Refs.csv")

#Use '.head()' to see the top rows and check out the structure
df.head()

#Let's change those shorthand column titles to something more intuitive
df.columns = ['Date','HomeTeam','AwayTeam','Referee','HomeFouls','AwayFouls',
              'TotalFouls','HomeYellows','AwayYellows','TotalYellows',
              'HomeReds','AwayReds','TotalReds']
df.head(2)

#And do we have a complete set of 380 matches? len() will tell us.
len(df)

380

Descriptive statistics

The easiest way to produce an en-masse summary of our dataset is with the '.describe()' method. This will give us a whole new table of statistics for each numerical column:

- Count – how many values are there?
- Mean – what is the mean average? (Sum of values/count of values)
- STD – what is the standard deviation? This number describes how widely the group differs around the average. If we have a normal distribution, 68% of our values will be within one STD either side of the average.
- Min – the smallest value in our array
- 25%/50%/75% – the values below which 25%, 50% and 75% of the data fall (the lower quartile, the median and the upper quartile)
- Max – the highest value in our array

df.describe()

So what do we learn? On average, we have between 3 and 4 yellows in a game, and the away team is only slightly more likely to pick up more cards. Fouls and red cards are also very close between both teams. In 68% of games, we expect between 1.5 and 5.5 yellow cards. At least one game had 38 fouls. That's roughly one every two and a half minutes!

Describing with groups

Our describe table above is great for a broad brushstroke, but it would be helpful to look at our referees individually. Let's use .groupby() to create a dataset grouped by the 'Referee' column:

groupedRefs = df.groupby("Referee")
http://fcpython.com/data-analysis/describing-datasets
CC-MAIN-2018-51
refinedweb
600
66.94
The following problems appeared as assignments in the edX course Analytics for Computing (by Gatech). The descriptions are taken from the assignments.

1. Association rule mining

First we shall implement the basic pairwise association rule mining algorithm.

Problem definition

Let's say we have a fragment of text in some language. We wish to know whether there are association rules among the letters that appear in a word. In this problem:
- Words are "receipts"
- Letters within a word are "items"

We want to know whether there are association rules of the form, a ⟹ b, where a and b are letters, for a given language (by calculating for each rule its confidence, conf(a ⟹ b), which is an estimate of the conditional probability of b given a, or Pr[b|a]).

Sample text input

Let's carry out this analysis on a "dummy" text fragment, which graphic designers refer to as the lorem ipsum:

latin_text = """ ... """

Data cleaning

Like most data in the real world, this dataset is noisy. It has both uppercase and lowercase letters, words have repeated letters, and there are all sorts of non-alphabetic characters. For our analysis, we should keep all the letters and spaces (so we can identify distinct words), but we should ignore case and ignore repetition within a word. For example, the eighth word of this text is "error." As an itemset, it consists of the three unique letters, {e,o,r}. That is, we treat the word as a set, keeping only the unique letters. This itemset has three possible itempairs: {e,o}, {e,r}, and {o,r} (six ordered pairs, if a ⇒ b and b ⇒ a are counted separately, as the code below does).

- We need to start by "cleaning up" (normalizing) the input, with all characters converted to lowercase and all non-alphabetic, non-space characters removed.
- Next, we need to convert each word into an itemset like the following examples:

consequatur –> {'a', 'e', 'c', 's', 'n', 'o', 'u', 't', 'q', 'r'}
voluptatem –> {'l', 'a', 'e', 'o', 'p', 'u', 'm', 't', 'v'}

Implementing the basic algorithm

The following algorithm is implemented:
- First, all item-pairs within an itemset are enumerated and a table that tracks the counts of those item-pairs is updated in-place.
- Now, given tables of item-pair counts and individual item counts, as well as a confidence threshold, the rules that meet the threshold are returned.
- The returned rules should be in the form of a dictionary whose key is the tuple, (a,b), corresponding to the rule a ⇒ b, and whose value is the confidence of the rule, conf(a ⇒ b).
- The following functions were implemented to compute the association rules.

from collections import defaultdict
from itertools import combinations

def update_pair_counts (pair_counts, itemset):
    """
    Updates a dictionary of pair counts for
    all pairs of items in a given itemset.
    """
    assert type (pair_counts) is defaultdict
    for item in list(combinations(itemset, 2)):
        pair_counts[item] += 1
        pair_counts[item[::-1]] += 1
    return pair_counts

def update_item_counts(item_counts, itemset):
    for item in itemset:
        item_counts[item] += 1
    return item_counts

def filter_rules_by_conf (pair_counts, item_counts, threshold):
    rules = {} # (item_a, item_b) -> conf (item_a => item_b)
    for (item_a, item_b) in pair_counts:
        conf = pair_counts[(item_a, item_b)] / float(item_counts[item_a])
        if conf >= threshold:
            rules[(item_a, item_b)] = conf
    return rules

def find_assoc_rules(receipts, threshold):
    pair_counts = defaultdict(int)
    item_counts = defaultdict(int)
    for receipt in receipts:
        update_pair_counts(pair_counts, receipt)
        update_item_counts(item_counts, receipt)
    return filter_rules_by_conf(pair_counts, item_counts, threshold)

- For the Latin string, latin_text, the function find_assoc_rules() was used to compute the rules whose confidence is at least 0.75, with the following rules obtained as result.

conf(q => u) = 1.000
conf(x => e) = 1.000
conf(x => i) = 0.833
conf(h => i) = 0.833
conf(v => t) = 0.818
conf(r => e) = 0.800
conf(v => e) = 0.773
conf(f => i) = 0.750
conf(b => i) = 0.750
conf(g => i) = 0.750

- Now, let's look at rules common to the above Latin text and an English text obtained by a translation of the lorem ipsum text, as shown below:

english_text = """ ... """
The first few lines of the transaction data are shown below:

citrus fruit,semi-finished bread,margarine,ready soups
tropical fruit,yogurt,coffee
whole milk
pip fruit,yogurt,cream cheese ,meat spreads
other vegetables,whole milk,condensed milk,long life bakery product
whole milk,butter,yogurt,rice,abrasive cleaner
rolls/buns
other vegetables,UHT-milk,rolls/buns,bottled beer,liquor (appetizer)
pot plants
whole milk,cereals
tropical fruit,other vegetables,white bread,bottled water,chocolate
citrus fruit,tropical fruit,whole milk,butter,curd,yogurt,flour,bottled water,dishes
beef
frankfurter,rolls/buns,soda
chicken,tropical fruit
butter,sugar,fruit/vegetable juice,newspapers
fruit/vegetable juice
packaged fruit/vegetables
chocolate
specialty bar
other vegetables
butter milk,pastry
whole milk
tropical fruit,cream cheese ,processed cheese,detergent,newspapers
tropical fruit,root vegetables,other vegetables,frozen dessert,rolls/buns,flour,sweet spreads,salty snack,waffles,candy,bathroom cleaner
bottled water,canned beer
yogurt
sausage,rolls/buns,soda,chocolate
other vegetables
brown bread,soda,fruit/vegetable juice,canned beer,newspapers,shopping bags
yogurt,beverages,bottled water,specialty bar

- Our task is to mine this dataset for pairwise association rules to produce a final dictionary, basket_rules, that meets these conditions:
- The keys are pairs (a,b), where a and b are item names.
- The values are the corresponding confidence scores, conf(a ⇒ b).
- Only include rules a ⇒ b where item a occurs at least MIN_COUNT times and conf(a ⇒ b) is at least THRESHOLD.

The result is shown below:

Found 19 rules whose confidence exceeds 0.5. Here they are:

conf(honey => whole milk) = 0.733
conf(frozen fruits => other vegetables) = 0.667
conf(cereals => whole milk) = 0.643
conf(rice => whole milk) = 0.613
conf(rubbing alcohol => whole milk) = 0.600
conf(cocoa drinks => whole milk) = 0.591
conf(pudding powder => whole milk) = 0.565
conf(jam => whole milk) = 0.547
conf(cream => sausage) = 0.538
conf(cream => other vegetables) = 0.538
conf(baking powder => whole milk) = 0.523
conf(tidbits => rolls/buns) = 0.522
conf(rice => other vegetables) = 0.520
conf(cooking chocolate => whole milk) = 0.520
conf(frozen fruits => whipped/sour cream) = 0.500
conf(specialty cheese => other vegetables) = 0.500
conf(ready soups => rolls/buns) = 0.500
conf(rubbing alcohol => butter) = 0.500
conf(rubbing alcohol => citrus fruit) = 0.500

2. Simple string processing with Regex

Phone numbers

- Write a function to parse US phone numbers written in the canonical "(404) 555-1212" format, i.e., a three-digit area code enclosed in parentheses followed by a seven-digit local number in three-hyphen-four digit format.
- It should also ignore all leading and trailing spaces, as well as any spaces that appear between the area code and local numbers.
- However, it should not accept any spaces in the area code (e.g., in '(404)') nor should it in the local number.
- It should return a triple of strings, (area_code, first_three, last_four).
- If the input is not a valid phone number, it should raise a ValueError.

import re

def parse_phone(s):
    # The trailing $ anchors the match so that extra characters after the
    # local number are rejected instead of being silently ignored.
    pattern = re.compile(r"\s*\((\d{3})\)\s*(\d{3})-(\d{4})\s*$")
    m = pattern.match(s)
    if not m:
        raise ValueError('not a valid phone number!')
    return m.groups()

# print(parse_phone('(404) 201-2121'))
# print(parse_phone('404-201-2121'))  # raises ValueError

- Implement an enhanced phone number parser that can handle any of these patterns.
- (404) 555-1212
- (404) 5551212
- 404-555-1212
- 404-5551212
- 404555-1212
- 4045551212

- As before, it should not be sensitive to leading or trailing spaces. Also, for the patterns in which the area code is enclosed in parentheses, it should not be sensitive to the number of spaces separating the area code from the remainder of the number.

The following function implements the enhanced regex parser.

import re

def parse_phone2(s):
    # Try the variant with a parenthesized area code first, then the plain
    # digit variants; the hyphens are optional and the $ anchors the match.
    pattern = re.compile(r"\s*\((\d{3})\)\s*(\d{3})-?(\d{4})\s*$")
    m = pattern.match(s)
    if not m:
        pattern2 = re.compile(r"\s*(\d{3})-?(\d{3})-?(\d{4})\s*$")
        m = pattern2.match(s)
        if not m:
            raise ValueError('not a valid phone number!')
    return m.groups()

3. Tidy data and Pandas

"Tidying data" is all about cleaning up tabular data for analysis purposes.

Definition: Tidy datasets. More specifically, Wickham defines a tidy data set as one that can be organized into a 2-D table such that

- each column represents a variable;
- each row represents an observation;
- each entry of the table represents a single value, which may come from either categorical (discrete) or continuous spaces.

Definition: Tibbles. If a table is tidy, we will call it a tidy table, or tibble, for short.

Apply functions to data frames

Given the following pandas DataFrame (the first few rows are shown in the next table), compute the prevalence, which is the ratio of cases to the population, using the apply() function, without modifying the original DataFrame. The next function does exactly that.

def calc_prevalence(G):
    assert 'cases' in G.columns and 'population' in G.columns
    H = G.copy()
    H['prevalence'] = H.apply(lambda row: row['cases'] / row['population'], axis=1)
    return H

Tibbles and Bits

Now let's start creating and manipulating tibbles. Write a function, canonicalize_tibble(X), that, given a tibble X, returns a new copy Y of X in canonical order. We say Y is in canonical order if it has the following properties.

- The variables appear in sorted order by name, ascending from left to right.
- The rows appear in lexicographically sorted order by variable, ascending from top to bottom.
- The row labels (Y.index) go from 0 to n-1, where n is the number of observations.

The following code does exactly that:

def canonicalize_tibble(X):
    # Property 1: variables in sorted order by name
    var_names = sorted(X.columns)
    Y = X[var_names].copy()
    # Property 2: rows in lexicographically sorted order by variable
    Y = Y.sort_values(by=var_names, ascending=True)
    # Property 3: row labels running from 0 to n-1
    Y.reset_index(drop=True, inplace=True)
    return Y

Basic tidying transformations: Implementing Melting and Casting

Given a data set and a target set of variables, there are at least two common issues that require tidying.

Melting

First, values often appear as columns. Table 4a is an example. To tidy up, we want to turn columns into rows:

Because this operation takes columns into rows, making a "fat" table more tall and skinny, it is sometimes called melting. To melt the table, we need to do the following.

- Extract the column values into a new variable. In this case, columns "1999" and "2000" of table4 need to become the values of the variable, "year".
- Convert the values associated with the column values into a new variable as well. In this case, the values formerly in columns "1999" and "2000" become the values of the "cases" variable.

In the context of a melt, let's also refer to "year" as the new key variable and "cases" as the new value variable. Implement the melt operation as a function,

def melt(df, col_vals, key, value):
    ...
It should take the following arguments:

- df: the input data frame, e.g., table4 in the example above;
- col_vals: a list of the column names that will serve as values;
- key: the name of the new variable, e.g., year in the example above;
- value: the name of the column to hold the values.

The next function implements the melt operation:

def melt(df, col_vals, key, value):
    assert type(df) is pd.DataFrame
    df2 = pd.DataFrame()
    for col in col_vals:
        df1 = pd.DataFrame(df[col].tolist(), columns=[value])
        df1[key] = col
        other_cols = list(set(df.columns.tolist()) - set(col_vals))
        for col1 in other_cols:
            df1[col1] = df[col1]
        df2 = df2.append(df1, ignore_index=True)  # pd.concat is the modern equivalent
    df2 = df2[other_cols + [key, value]]
    return df2

with the following output:

=== table4a ===
=== melt(table4a) ===
=== table4b ===
=== melt(table4b) ===

Casting

The second most common issue is that an observation might be split across multiple rows. Table 2 is an example. To tidy up, we want to merge rows:

Because this operation is the moral opposite of melting, and "rebuilds" observations from parts, it is sometimes called casting. Melting and casting are Wickham's terms from his original paper on tidying data. In his more recent writing, on which this tutorial is based, he refers to melting as gathering and to casting as spreading.

The signature of a cast is similar to that of melt. However, we only need to know the key, which is the column of the input table containing new variable names, and the value, which is the column containing the corresponding values. Implement a function to cast a data frame into a tibble, given a key column containing new variable names and a value column containing the corresponding cells. Observe that we are asking your cast() to accept an optional parameter, join_how, that may take the values 'outer' or 'inner' (with 'outer' as the default).

The following function implements the casting operation:

def cast(df, key, value, join_how='outer'):
    """Casts the input data frame into a tibble, given the key column and value column."""
    assert type(df) is pd.DataFrame
    assert key in df.columns and value in df.columns
    assert join_how in ['outer', 'inner']
    fixed_vars = df.columns.difference([key, value])
    tibble = pd.DataFrame(columns=fixed_vars)  # empty frame
    fixed_vars = fixed_vars.tolist()
    cols = []
    for k, df1 in df.groupby(df[key]):
        tibble = tibble.merge(df1[fixed_vars + [value]], on=fixed_vars, how=join_how)
        cols.append(str(k))
    tibble.columns = fixed_vars + cols
    return tibble

with the following output:

=== table2 ===
=== tibble2 = cast (table2, "type", "count") ===

Separating variables

Consider the following table. In this table, the rate variable combines what had previously been the cases and population data. This example is an instance in which we might want to separate a column into two variables. Write a function that takes a data frame (df) and separates an existing column (key) into new variables (given by the list of new variable names, into).

How will the separation happen? The caller should provide a function, splitter(x), that, given a value, returns a list containing the components.
The following code implements the function:

import re

def default_splitter(text):
    """Splits the given string into its numeric components, returned as a
    list _of strings_. The general version (commented out below) uses
    re.findall, so that e.g.
    default_splitter('Give me $10.52 in exchange for 91 kitten stickers.')
    returns ['10.52', '91']. The active variant targets the
    "cases/population" rate strings of table3.
    """
    # fields = re.findall(r'(\d+\.?\d+)', text)
    fields = list(re.match(r'(\d+)/(\d+)', text).groups())
    return fields

def separate(df, key, into, splitter=default_splitter):
    """Given a data frame, separates one of its columns, the key, into new variables."""
    assert type(df) is pd.DataFrame
    assert key in df.columns
    new_cols = df[key].apply(
        lambda s: pd.Series({into[i]: splitter(s)[i] for i in range(len(into))}))
    return df.merge(new_cols, left_index=True, right_index=True).drop(key, axis=1)

with the following output:

=== table3 ===
=== tibble3 = separate (table3, ...) ===
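As a quick, hedged illustration of the cleaning step described in section 1 (the normalization code itself is not shown in the assignment excerpt above), here is one way the pieces could fit together. The helpers normalize_string and make_itemsets are assumptions; find_assoc_rules is the function defined earlier in the post.

import re

def normalize_string(s):
    # Assumed helper: lowercase, keep only letters and spaces.
    return re.sub(r"[^a-z ]", "", s.lower())

def make_itemsets(words):
    # Assumed helper: one set of unique letters per word ("receipt").
    return [set(w) for w in words]

sample = "Error, ERROR: more errors!"
receipts = make_itemsets(normalize_string(sample).split())
# e.g. the word "error" becomes the itemset {'e', 'o', 'r'}

# find_assoc_rules() is the function defined in the post above.
rules = find_assoc_rules(receipts, threshold=0.5)
for (a, b), conf in sorted(rules.items(), key=lambda kv: -kv[1]):
    print("conf(%s => %s) = %.3f" % (a, b, conf))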
https://sandipanweb.wordpress.com/2017/11/11/some-data-processing-and-analysis-with-python/
CC-MAIN-2020-24
refinedweb
2,661
58.79
Background: when I'm not playing with my TOS machines, I sometimes play with (ARM based) microcontrollers. Recently, I discovered C++ as my new playground and saw that it is entirely possible (if not common already) to code µC programs in C++. Now I wanted to see if that isn't something the Atari world could benefit from as well.

Let's assume (just to have something practical to discuss) we need a program that writes out the squares of 1..100.

#include <iostream>
#include <cstdint>
#include <array>

#define LAST 101

struct squares {
    std::array<uint16_t, LAST> arr;

    squares(int num) {
        for (int i = 0; i < LAST; i++) {
            arr[i] = i * i;
        }
    }

    void printme() {
        for (auto const &value: arr)
            std::cout << value << ", ";
    }
};

int main() {
    squares(100).printme();
}

394567 bytes. Yes. Nearly 400KB. Stripped, already. You'd probably easily be able to write this in assembler with less than, say, 1000 bytes and could still leave the symbol table in...

(compiled with

m68k-atari-mint-g++ -std=c++0x -o simple.prg -O2 -fomit-frame-pointer -s simple.cc

)

Let's see how we can bring that down. First, we replace the std::cout call with printf(). iostreams have a lot of functionality we do not need here, let's see if this makes a difference. We replace printme() with:

void printme() {
    for (auto const &value: arr)
        printf("%d, ", value);
}

Let's try libcmini (yes, that's the sole reason I wrote this post, actually):

m68k-atari-mint-g++ -std=c++0x -nostdlib -I../libcmini/build/include -o simple.prg -O2 -fomit-frame-pointer -s ../libcmini/build/startup.o simple.cc -L../libcmini/build -lgcc -lstdc++ -lcmini -lgcc

12736 bytes. But we can do even better. libcmini's printf() routine is still a bit overqualified for what we do here. Let's replace printme() again:

void printme() {
    char outstr[30];

    for (auto const &value: arr) {
        itoa(value, outstr, sizeof(outstr) - 1, 10);
        Cconws(outstr);
        Cconws(", ");
    }
}

Happy C++ing!
https://www.atari-forum.com/viewtopic.php?f=16&t=35441&p=369454
CC-MAIN-2020-50
refinedweb
343
72.76
#include <c4d_baselist.h>

The base class of many classes within the API. Represents an object that can be read from and written to disk, copied, and that has a description for the Attribute Manager.

Member functions:
- Gets the type of the atom.
- Gets the real type of the atom. This is similar to GetType(), but for multinodes the ID of the last linked part is returned. E.g. XPresso nodes have the type ID_GV_GROUPDATA or ID_GV_NODEDATA, but GetRealType() returns the ID of the operator.
- Checks if the atom is an instance of a base type.
- Returns the base type of the atom; e.g. Obase for objects, Tbase for tags etc.
- Sends a message to the atom only.
- Sends a message to the atom and to its children, parents or branches, depending on flags.
- Retrieves a copy of the atom.
- Copies all values from *this to *dst.
- Reads the atom from a HyperFile, manually specifying id and disk level.
- Writes the atom to a HyperFile.
- Reads the atom from a HyperFile within another read operation.
- Writes the atom to a HyperFile, within another write operation.
- Gets the description for the atom.
- Gets a parameter of the atom.
- Sets a parameter of the atom.
- Gets the dynamic description of the atom. (Also known as the user data part of the Attribute Manager.)
- Checks if a description parameter should be disabled or enabled.
- Redirects description IDs between nodes.
- Gets the dirty checksum for the object. This can be used to check if the object has been changed.
- Sets the dirty checksum, the one returned by GetDirty().
- Gets the dirty bits for the specified mask.
- Sets the dirty bits, the one returned by GetHDirty().
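Much of this interface is mirrored in Cinema 4D's Python API. The sketch below is only an assumption-laden illustration (meant to run inside the Script Manager, where the c4d module and the predefined active object op are available); it is not part of the original reference page, and the flag and method names used are assumptions to be checked against the SDK.

import c4d

def inspect_atom(atom):
    # GetType()/IsInstanceOf() as described above.
    print("type id:", atom.GetType())
    print("is an object (Obase)?", atom.IsInstanceOf(c4d.Obase))

    # Retrieves a copy of the atom.
    clone = atom.GetClone()

    # Dirty checksum: changes whenever the object's data changes.
    # Flag name assumed; check the SDK for the exact DIRTYFLAGS constant.
    before = atom.GetDirty(c4d.DIRTYFLAGS_DATA)
    atom.SetDirty(c4d.DIRTYFLAGS_DATA)
    after = atom.GetDirty(c4d.DIRTYFLAGS_DATA)
    print("dirty checksum changed:", before != after)
    return clone

if op is not None:  # 'op' is the active object predefined in the Script Manager
    inspect_atom(op)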
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_c4_d_atom.html
CC-MAIN-2022-05
refinedweb
279
78.75
the problem i am having is i cannot get the pg.key.set_repeat() to work at all, i have googled everywhere and there is nothing useful in two full pages of google links. butts. my code is:

import sys
import os
import pygame as pg
from pygame.locals import *

size = width, height = 640,480
screen = pg.display.set_mode(size)
map1 = pg.image.load('map.png')
character = pg.image.load('character.png')
x,y = (200,200)
dong = 0
pg.key.set_repeat(1,50)

while dong == 0:
    for event in pg.event.get():
        key_pressed = pg.key.get_pressed()
        screen.blit(map1, (0,0))
        screen.blit(character, (x,y))
        pg.display.update()
        pg.key.set_repeat(1, 10)
        if event.type == pg.KEYDOWN:
            if event.key == pg.K_w:
                y -= 3
            elif event.key == pg.K_s:
                y += 3
            elif event.key == pg.K_a:
                x -= 3
            elif event.key == pg.K_d:
                x += 3
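Not part of the original post, but for comparison, a minimal pattern that is commonly used for continuous movement: polling pygame.key.get_pressed() once per frame in the main loop, rather than relying on repeated KEYDOWN events (key.set_repeat() only affects the event queue and is typically called once after init). The numbers and drawing here are placeholders.

import pygame as pg

pg.init()
screen = pg.display.set_mode((640, 480))
clock = pg.time.Clock()
x, y = 200, 200

running = True
while running:
    for event in pg.event.get():
        if event.type == pg.QUIT:
            running = False

    keys = pg.key.get_pressed()   # keyboard state polled once per frame
    if keys[pg.K_w]: y -= 3
    if keys[pg.K_s]: y += 3
    if keys[pg.K_a]: x -= 3
    if keys[pg.K_d]: x += 3

    screen.fill((0, 0, 0))
    pg.draw.rect(screen, (255, 255, 255), (x, y, 20, 20))
    pg.display.update()
    clock.tick(60)   # cap at roughly 60 FPS

pg.quit()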
http://www.python-forum.org/viewtopic.php?p=7041
CC-MAIN-2015-48
refinedweb
149
73.74
manage vertex buffer objects shared within a context

#include <vtkOpenGLVertexBufferObjectCache.h>

This class allows mappers to share VBOs. Specifically it is used by the VBO group to see if a VBO already exists for a given vtkDataArray.

Definition at line 38 of file vtkOpenGLVertexBufferObjectCache.h.
Definition at line 42 of file vtkOpenGLVertexBufferObjectCache.h.
Definition at line 62 of file vtkOpenGLVertexBufferObjectCache.h.

Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkObjectBase.

Returns the vertex buffer object which holds the data array's data. If such a VBO does not exist, a new empty VBO will be created that you need to append to. The return value has been registered; you are responsible for deleting it. The data array pointers are also registered.

Removes all references to a given vertex buffer object.

Definition at line 68 of file vtkOpenGLVertexBufferObjectCache.h.
https://vtk.org/doc/nightly/html/classvtkOpenGLVertexBufferObjectCache.html
CC-MAIN-2019-47
refinedweb
173
61.73
On Sat, 6 Feb 1999 19:35:05 -0500, gobbel@andrew.cmu.edu wrote: >Package: libc6-dev >Version: 2.0.100-2 > >On PowerPC, bits/endian.h apparently expects gcc to magically set __BIG_ENDIAN >or __LITTLE_ENDIAN, depending on which mode the processor is currently using. >From the source: > >/* PowerPC can be little or big endian. Hopefully gcc will know... */ That file has been truncated. It's supposed to read something like /* PowerPC can be little or big endian. Hopefully gcc will know... */ #ifdef __MIPSEB__ #define __BYTE_ORDER __BIG_ENDIAN #elif defined __MIPSEL__ #define __BYTE_ORDER __LITTLE_ENDIAN #else #error Unable to determine host byte order #endif -- and yes, gcc should be defining one of __MIPSEB__ or __MIPSEL__. zw
https://lists.debian.org/debian-glibc/1999/03/msg00034.html
CC-MAIN-2014-10
refinedweb
114
61.12
Odoo Help

Why does a function field show [object Object] in the view?

Hello. I have a function field that returns a string. The string is supposed to be a URL pointing to a PDF file on the server. I have the function created and I know that the correct URL is getting generated. The problem is that the string is not showing correctly in the view. The field in the view is using a 'url' widget so that it becomes a hyperlink that can be clicked.

The link looks like this right now: http://[object Object]

I would like it to show something like this:

Here is my python function:

def _append_inv_url_2_uuid(self, cr, uid, ids, field, args, context=None):
    res = {}
    for rec in self.browse(cr, uid, ids, context):
        # look up the invoice url by the key name instead of the value.
        # this way if the value changes then no changes will have to be made here
        def_url_id = self.pool.get('ir.config_parameter').search(cr, uid, [('key','=','Invoice uuid URL')])[0]
        # get the value of the url key
        url_str = self.pool.get('ir.config_parameter').browse(cr, uid, def_url_id).value
        # get the uuid from the sale order
        uuid = "so?id=%s" % self.browse(cr, uid, ids[0]).uuid
        whole_string = "%s%s" % (url_str, uuid)
        res.update({rec.id: {'invoice_url': str(whole_string)}})
    return res

Here is the field in the form view:

<field name="invoice_url" widget=!
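For illustration only, and not from the original question: with the old OpenERP 7-style API, a non-multi function field is usually expected to return a plain {record_id: value} mapping, and returning a nested dict per record is one common way to end up with an object rather than a string in the client. A hedged sketch of that convention follows; the field names and URL are hypothetical.

from openerp.osv import fields, osv

class sale_order(osv.osv):
    _inherit = 'sale.order'

    def _compute_invoice_url(self, cr, uid, ids, field_name, arg, context=None):
        res = {}
        for rec in self.browse(cr, uid, ids, context=context):
            # Return the string directly, not a nested dict,
            # for a single (non-multi) function field.
            res[rec.id] = "https://example.com/so?id=%s" % rec.uuid  # hypothetical URL
        return res

    _columns = {
        'invoice_url': fields.function(_compute_invoice_url, type='char',
                                       string='Invoice URL'),
    }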
https://www.odoo.com/forum/help-1/question/why-does-a-function-field-show-object-object-in-the-view-61494
CC-MAIN-2016-44
refinedweb
264
73.78
16 September 2009 10:50 [Source: ICIS news]

SINGAPORE (ICIS news)--China's linear low density polyethylene (LLDPE) and polyvinyl chloride (PVC) futures remained largely stable on Wednesday in the absence of significant fresh market developments, traders and producers said.

November LLDPE futures, the most actively traded contract on Tuesday, closed at yuan (CNY) 10,040/tonne ($1,470/tonne), or 0.5% higher than Tuesday's settlement, according to data from the Dalian Commodity Exchange (DCE).

The November PVC contract meanwhile moved by the same percentage, closing at CNY6,780/tonne from Tuesday's settlement of CNY6,745/tonne.

PVC futures players shrugged off price movements in the crude and domestic stock markets, preferring to focus instead on the extremely thin trading and stable discussion levels heard in the local PVC spot market, a local PVC producer said.

Crude benchmarks in ...

($1 = CNY6.83)

For more on PE,
http://www.icis.com/Articles/2009/09/16/9247729/chinas-polymers-futures-trading-at-dce-largely-stable.html
CC-MAIN-2014-10
refinedweb
151
54.32
I can't sign up because Slingshot does not accept email addresses in the .co namespace, then I can't speak to a human because there are layers and layers of electronic menus, then finally, I decided to send an email to let them know I was on hold thinking that Godzilla had risen in Auckland and was stomping on call centres.

The best part of this experience - if you call from a mobile phone and the service automatically detects that you are calling from a mobile phone, it says it will hang up and call you back. It then hangs up, no choice for you. Sometimes it calls back. Seriously? WTF?

I did hang on forever this morning to tell them to please cancel my request and make sure there are no transactions or fees, as I wouldn't join even if I was paid to. Customer service epic fail.
https://www.geekzone.co.nz/forums.asp?ForumId=81&TopicId=100671
CC-MAIN-2017-26
refinedweb
151
75.44
Created on 2016-04-09 12:19 by martin.panter, last changed 2016-06-29 12:06 by martin.panter. This issue is now closed. This is a follow-on from Issue 24291. Currently, for stream servers (as opposed to datagram servers), the wfile attribute is a raw SocketIO object. This means that wfile.write() is a simple wrapper around socket.send(), and can do partial writes. There is a comment inherited from Python 2 that reads: # . . . we make # wfile unbuffered because (a) often after a write() we want to # read and we need to flush the line; (b) big writes to unbuffered # files are typically optimized by stdio even when big reads # aren't. Python 2 only has one kind of “file” object, and it seems partial writes are impossible: <>. But in Python 3, unbuffered mode means that the lower-level RawIOBase API is involved rather than the higher-level BufferedIOBase API. I propose to change the “wfile” attribute to be a BufferedIOBase object, yet still be “unbuffered”. This could be implemented with a class that looks something like class _SocketWriter(BufferedIOBase): """Simple writable BufferedIOBase implementation for a socket Does not hold data in a buffer, avoiding any need to call flush().""" def write(self, b): self._sock.sendall(b) return len(b) Here is a patch with my proposal. Hum, since long time ago, Python has issues with partial write. It's hard to guess if a write will always write all data, store the data on partial write, or simply ignore remaining data on partial write. I recall a "write1" function which was well defined: limited to 1 syscall, don't try (or maybe only on the very specific case of EINTR). But I'm not sure that it still exists in the io module of Python 3. asyncio has also issues with the definition of "partial write" in its API. You propose to fix the issue in socketserver. socket.makefile(bufsize=0).write() uses send() and so use partial write. Are you sure that users are prepared for that? Maybe SocketIO must be modified to use sendall() when bufsize=0? FYI I recently worked on a issue with partial write in eventlet on Python 3: * * By the way, I opened #26292 "Raw I/O writelines() broken for non-blocking I/O" as a follow-up of the eventlet issue. > I recall a "write1" function which was well defined: limited to 1 syscall, don't try (or maybe only on the very specific case of EINTR). But I'm not sure that it still exists in the io module of Python 3. Oops, in fact it is read1: "Read and return up to size bytes, with at most one call to the underlying raw stream’s read()" If a user calls makefile(bufsize=0), they may have written that based on Python 2’s behaviour of always doing full writes. But in Python 3 this is indirectly documented as doing partial writes: makefile() args interpreted as for open, and open() buffering disabled returns a RawIOBase subclass. People porting code from Python 2 may be unprepared for partial writes. Just another subtle Python 2 vs 3 incompatibility. People using only Python 3 might be unprepared if they are not familar with the intricacies of the Python API, but then why are they using bufsize=0? On the other hand, they might require partial writes if they are using select() or similar, so changing it would be a compatibility problem. You could use the same arguments for socketserver. The difference is that wfile being in raw mode is not documented. Almost all of the relevant builtin library code that I reviewed expects full blocking writes, and I did not find any that requires partial writes. 
So I think making the change in socketserver is less likely to introduce compatibility problems, and is much more beneficial. Merged with current code, and copied the original wsgiref test case from Issue 24291, since this patch also fixes that bug. Merged with current code, and migrated the interrupted-write test from test_wsgiref into test_socketserver. Forgot to remove the workaround added to 3.5 for wsgiref in Issue 24291 New changeset 4ea79767ff75 by Martin Panter in branch 'default': Issue #26721: Change StreamRequestHandler.wfile to BufferedIOBase Committed for 3.6.
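To see the API under discussion from the user side, here is a small, hedged sketch (not from the tracker thread) of a stream server whose handler writes through self.wfile, the attribute whose type changed from a raw SocketIO to a BufferedIOBase-style writer in 3.6.

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            # With the 3.6 change, write() behaves like a full
            # (sendall-backed) write rather than a partial send().
            self.wfile.write(b"echo: " + line)

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 8000), EchoHandler) as server:
        server.serve_forever()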
https://bugs.python.org/issue26721
CC-MAIN-2017-13
refinedweb
715
72.56
Comment: Music (Score 5, Insightful) 676. Comment: Re:Great! Depending on Mono is a mistake (Score 1) 255 I can't stand C++ or Python (block braces or death), so how do you suggest I code cross-platform apps? C++ and Python aren't even remotely similar to C#, Java is. Java is a far superior cross platform language than C#, which only has decent performance under Windows. Comment: Re:StreetScooter (Score 1) 151 most people disparage them because of the stereotypes So what you're saying is, sales and marketing need an image boost? Comment: Re:the way to go (Score 1) 743 A simple trick for doing a hash map for integer keys is just to modulus each key into the size you want the hash map. So if you want to chuck a bunch of values into an array, you just go: array[key%array.size] = value Only problem is that I often forget what year it is when I'm under stress. Comment: Re:the way to go (Score 1) 743 The candidate who invents as he goes and has algorithms named after him is the one you want. But the guy who hacks together a quadratic time algorithm when Google can give him a perfectly good linear one certainly isn't. Comment: Re:Solution: (Score 1) 327 my scraggly brown locks do nothing to protect me Perhaps you need to specify 'ask me biology questions in my journal'? Probably still not specific enough. Comment: Re:Welcome to real world (Score 2) 542. Comment: Re:Solved Problem (Score 1) 277 Yeah, we should stop using the French as poster boys for fission, they have a terrible track record. Comment: Re:Psychology is a science. (Score 1) 254 The problem with the social sciences, and especially psychiatry, psychology and economics, is the massive amount of influence they have over public policy. They may have good and repeatable studies that completely contradict each other, from which politicians and appointed officials then cherry pick the studies that align with their viewpoint. Don't like social welfare because of your Protestant values? Lets go with the Chicago School economics. Control freak? Lets justify increased economic control with Keynesian economics. Bigoted against Homosexuality? Let's not forget that it's a "paraphilia". We mustn't forget that moral preconceptions can and do constantly reflect in the conclusions drawn from the social sciences, far more so than any other academic area. Perhaps we should add another dimension to the categorisation of sciences alongside the soft/hard category; hot/cool, as in headed. Comment: Re:Eclipse is like DOS, for Windows (Score 1) 90. Comment: Re:Journalists and Math (Score 1) 94 The Brits generally use the short scale billion now. Comment: Re:Why not learn a real languange? (Score 1) 107 make a console calculator import sys print eval(sys.argv[1]) Comment: Re:Properly traine software testers (Score 1) 180 I'm sorry, but you just failed the Turing test. Comment: Re:Well, 85% of scientists are wrong, then. (Score 1) 1345 How the tables have turned since the middle ages. I'm pulling this out of my ass, but I would say that the percentage of Muslims that believe in evolution are similar to Christians, Jews etc. It's just that the idiots are the ones that shout the loudest.
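Returning to the hash-map comment above (the array[key % array.size] trick): a rough sketch of that idea, with the collision caveat the comment glosses over.

class ModuloMap:
    """Toy fixed-size map for integer keys, as sketched in the comment above."""

    def __init__(self, size=1024):
        self.slots = [None] * size

    def put(self, key, value):
        # Later keys that hash to the same slot simply overwrite earlier
        # ones; a real hash map must handle such collisions.
        self.slots[key % len(self.slots)] = (key, value)

    def get(self, key):
        entry = self.slots[key % len(self.slots)]
        if entry is not None and entry[0] == key:
            return entry[1]
        return None

m = ModuloMap()
m.put(2021, "year under stress")
print(m.get(2021))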
http://slashdot.org/~internettoughguy
CC-MAIN-2015-22
refinedweb
560
63.8
MapCSS/0.1 Contents Play with it! The easiest way to get acquainted with MapCSS is to play with it. You can have a go at . Just edit the stylesheet on the right and click 'Refresh' to see your changes. You'll find it instantly accessible if you've ever written any CSS for a webpage before. (Halcyon was unavailable when checked on 2013-01-29.) This is a first draft. See MapCSS/0.2 for improvements under discussion to the MapCSS spec. Vocabulary Line properties Point/icon properties Label properties Shield properties Shield properties are not yet supported by Halcyon but are likely to be shield-image and shield-text. Grammar Selectors - elements The selector elements are the core OSM objects: - node - way - relation When a rule applies to two selectors, we can group them with a comma: way[highway=primary], way[highway=trunk] The descendant combinator is used when one object is contained in another. For example, a way in a relation, or a node in a way. way[highway=footpath] node[barrier=gate] { icon-image: icons/gate.png; } relation[type=route] way[highway] { stroke-color: red; } We can be liberal in how we treat white space: the latter would be unambiguous as relation [type=route] way [highway] { stroke-color: red; } Selectors - tests Basic tests are added as you'd expect: way[highway=primary] /* way with the highway tag set to primary */ You can also simply test whether a tag is set or not, in one of two ways: way[highway] /* way with the highway tag set */ way[!highway] /* way with the highway tag not set */ way .highway /* way with the highway tag set (see pseudo-classes below) */ A wide range of attribute selectors (special conditions) are supported: way[highway=~/primary/] /* regex */ way[!highway] /* way without a highway tag */ node[place=city][population<50000] node[place=city][population>=50000] and in a little bit of invisible magic, way[oneway=yes] also looks for =true or =1. The test for a zoom level works well using the CSS namespace selector: way|z12-[population<50000] way|z1-11[population>50000] and of course, if it's not specified, it applies to all zoom levels. Theoretically this could be extended to cope with other units of measurement, e.g. way|s50000 for a 1:50,000 printed map. The role in a relation is exposed via the 'role' pseudo-tag, so you can test for: relation[type=route] way[role=proposed] { color: blue; } (This is not yet supported by Halcyon.) Declarations Declarations are almost exactly as you'd expect, echoing common CSS vocabulary where possible - see above. Additional declarations: { exit; } /* stops parsing (just a performance optimisation) */ { set tag=value; } /* set a tag */ { set tag; } /* set a tag to 'yes' */ You can compute values using an eval instruction: { opacity: eval('population/100000'); } { set width_in_metres=eval('lanes*3'); } (Note that Halcyon currently only supports numeric eval, though string eval is planned.) Halcyon has the ability to 'stack' renderings for a particular rule, so you can have several stokes for a particular way. This means you can have more than one declaration block: way|z12[highway=primary] { z-index: 0; width: 5px; stroke-color: red; } { z-index: 1; width: 1px; stroke-color: black; dashes: 1, 3, 2, 3; } Classes You can set tags beginning with a full-stop to simulate classes: { set .minor_road; } You can then use the .minor_road test (as above) in the selector: way .minor_road { width: 2pt; color: yellow; }. The :area pseudo-class applies if a way is 'closed' (first and last nodes are the same).
http://wiki.openstreetmap.org/wiki/MapCSS/0.1
CC-MAIN-2017-09
refinedweb
594
52.8
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <sys/time.h>
#include <net/if.h>
#include <net/if_mib.h>

ifmd_data (struct if_data): more information from a structure defined elsewhere (description truncated). For IFT_SLIP, the structure is a ``struct sl_softc'' (<net/if_slvar.h>).

sysctl(3), intro(4), ifnet(9)

F. Kastenholz, Definitions of Managed Objects for the Ethernet-like Interface Types Using SMIv2, August 1994, RFC 1650.

Many Ethernet-like interfaces do not yet support the Ethernet MIB; the interfaces known to support it include ed(4) and de(4). Regardless, all interfaces automatically support the generic MIB.

The ifmib interface first appeared in FreeBSD 2.2.

BSD November 15, 1996 BSD
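Not part of the man page: per-interface counters of the kind the interface MIB exposes can also be read from Python through the third-party psutil package. A rough sketch, not an equivalent of the sysctl interface itself:

import psutil

# One entry per network interface, similar in spirit to the
# per-interface data carried in the interface MIB.
for name, counters in psutil.net_io_counters(pernic=True).items():
    print(name, counters.bytes_sent, counters.bytes_recv,
          counters.packets_sent, counters.packets_recv)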
http://www.syzdek.net/~syzdek/docs/man/.shtml/man4/ifmib.4.html
crawl-003
refinedweb
111
55.3
... No matter how hard we try and avoid it, sometimes we have no other option but to introduce a breaking change when we fix an issue. And this was the case with fixing the long-standing issue of the length of time taken to save an ASP.NET page at design-time. To summarize: when you save an ASP.NET page containing many of our controls from within Visual Studio it can take a long time, even several minutes. Given that we encourage developers to create a rich user interface by using our controls, this situation can be extremely frustrating. The problem is caused by the way we had separated our controls into different namespaces, even within the same assembly. In that situation, we discovered that Visual Studio will iterate through every class and namespace several times as it tries to resolve references. This process can take some time. For more detail about the problem, please see the following support ticket: Saving an ASP.NET page at design time is very slow in Visual Studio 2013, 2012, 2010 The core issue with 'slow saving pages' has to do with how the DevExpress namespaces are registered. We found that if controls belong to different namespaces within the same assembly then Visual Studio will iterate through every class several times (for internal purposes). To solve this issue, we merged namespaces in DevExpress.Web.vX.X assembly in the v14.2 release to prevent additional iterations. Of course, this fix means that the namespaces that are no longer being used must be removed from your projects. Rather than expect you to manually change each and every one of your projects (it’s doable, but very tedious), we’ve updated the DevExpress Project Converter tool in v14.2 to do it for you. The tool will not only make sure the assembly references are updated in your projects, but for v14.2 it will also rename any affected classes and move them into the new namespace, and remove the unused ones. Using the Project Converter is the simplest and most robust way to mitigate this particular breaking change. But what if you have an issue with the upgrade?!? Although we have tested the Project Converter and its fixes for this issue on many of our internal projects, including demos, it could be that there is some use case that it might fail with. If this happens to you, please don’t hesitate to contact our support team. There are two ways of doing that: Either way, we will respond as soon as possible. We want to ensure that you are happy with the new features in v14.2 and can use it profitably. Mehul HarryEmail: mharry@devexpress.comTwitter: ). Thanks Mehul, I always figured that was just my crappy markup that caused the saving taking forever. Glad to know it's not just me :) Yikes! I wasn't aware about Visual Studio having problems with controls within different namespaces either... It's something I'll want to keep in mind in the future. @Nate :) @Crono, Generally the issue was when you had a page with many/many controls. #KeepItLight Thanks for correcting this. I actually stopped using the DX web controls on new projects just because I couldn't stand the slowdown. It'll be nice to have them back in my toolkit! I have been using the Beta for about 1 week. I can confirm that the issue has been minimized. I would estimate over 90% in time. Saving is very robust now. Please or to post comments.
https://community.devexpress.com/blogs/aspnet/archive/2014/11/19/improving-the-asp-net-design-time-experience-breaking-change-in-v14-2.aspx
CC-MAIN-2019-43
refinedweb
594
65.83
CStringFile Class

Environment: VC5/6, NT4, CE 2.11

Once upon a day I was asked to write an application which could do some serious filtering on a ';'-separated file which held multiple columns. The initial file was about 7 Mb in size, so I had to create a program which could read a file, process the read lines and produce a new output file. So you think, what's the big deal? When I wrote that program, a Pentium 133 was considered a fast PC. As far as I knew, there were no generic (Microsoft provided??) solutions for a task as simple as reading text from a file. So I built my own textfile reading class.

This brings me to the part which I found most interesting: the first version of this filtering program did the job in several seconds. So how come the thing worked so fast? By the optimal manner of reading a text file. The StringFile class itself consists of 2 loops. One loop is for filling a read buffer, and the other loop is used for reading a line from this buffer. The effect of these loops is that when the file is read, it is done by reading 2k of data per turn. Further processing (finding where any given line starts and where it stops) is done inside memory, not on disk. And, as you probably know, memory is faster than disk... so there's my explanation for the speed of the filter program. After some fiddling around with this class I decided to post this, so that everyone can enjoy this piece of code.

#include "StringFile.h"

BOOL ReadTextFile(LPCSTR szFile)
{
    CStringFile sfText;
    CString     szLine;
    BOOL        bReturn = FALSE;

    // When the given file can be opened
    if(sfText.Open(szFile))
    {
        // Read all the lines (one by one)
        while(sfText.GetNextLine(szLine)!=0)
        {
            printf("%s\r\n", (LPCTSTR)szLine);  // And print them (cast for the varargs call)
        }
        sfText.Close();     // Close the opened file
        bReturn = TRUE;     // And say we're done successfully
    }
    return bReturn;
}

Some benchmarking (trying to find the optimum blocksize for reading) gave me the following results:

This shows that the optimum size for this piece of code lies around 2k blocksize. Increasing this blocksize doesn't speed up reading; the only thing speeding up the read actions is probably improving the used code.
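Not from the original article, but the same two-loop idea (fill a fixed-size buffer from disk, then carve lines out of it in memory) can be sketched in Python for comparison; the 2 KB block size mirrors the article's benchmark result, and the file name is a placeholder.

def read_lines_buffered(path, block_size=2048):
    """Yield lines by reading fixed-size blocks and splitting them in memory."""
    remainder = b""
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)                   # loop 1: fill the buffer
            if not block:
                break
            remainder += block
            *lines, remainder = remainder.split(b"\n")   # loop 2: split lines in memory
            for line in lines:
                yield line.rstrip(b"\r")
    if remainder:
        yield remainder

for line in read_lines_buffered("input.txt"):
    pass  # process each line here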
http://www.codeguru.com/cpp/w-p/files/fileio/article.php/c4477/CStringFile-Class.htm
CC-MAIN-2015-18
refinedweb
380
72.46
getrlimit, setrlimit - control maximum resource consumption #include <sys/resource.h> int getrlimit(int resource, struct rlimit *rlp); int setrlimit(int resource, const struct rlimit *rlp); Limits on the consumption of a variety of resources by the calling process may be obtained with getrlimit() and set with setrlimit().>, is considered to be larger than any other limit value. If a call to getrlimit() returns RLIM_INFINITY for a resource, it means the implementation does not enforce limits on that resource. Specifying RLIM_INFINITY as any resource limit value on a successful call to setrlimit() inhibits enforcement of that resource limit. The following resources are defined: - RLIMIT_CORE - This is the maximum size of a core file in bytes that may be created by a process. A limit of 0 will prevent the creation of a core file. If this limit is exceeded, the writing of a core file will terminate at this size. - RLIMIT_CPU - This is the maximum amount of CPU time in seconds used by a process. If this limit is exceeded, SIGXCPU is generated for the process. If the process is catching or ignoring SIGXCPU, or all threads belonging to that process are blocking SIGXCPU, the behaviour is unspecified. - RLIMIT_DATA - This is the maximum size of a process' data segment in bytes. If this limit is exceeded, the brk(), malloc() and sbrk() functions will fail with errno set to [ENOMEM]. - RLIMIT_FSIZE - This is the maximum size of a file in bytes that may be created by a process. If a write or truncate operation would cause this limit to be exceeded, SIGXFSZ is generated for the thread. If the thread is blocking, or the process is catching or ignoring SIGXFSZ, continued attempts to increase the size of a file from end-of-file to beyond the limit will fail with errno set to [EFBIG]. - RLIMIT_NOFILE - This is a number one greater than the maximum value that the system may assign to a newly-created descriptor. If this limit is exceeded, functions that allocate new file descriptors may fail with errno set to [EMFILE]. This limit constrains the number of file descriptors that a process may allocate. - RLIMIT_STACK - This is the maximum size of a process' stack in bytes. The implementation will not automatically grow the stack beyond this limit. If this limit is exceeded, SIGSEGV is generated for the thread. If the thread is blocking SIGSEGV, or the process is ignoring or catching SIGSEGV and has not made arrangements to use an alternate stack, the disposition of SIGSEGV will be set to SIG_DFL before it is generated. - RLIMIT_AS - This is the maximum size of a process' total available memory, in bytes. If this limit is exceeded, the brk(), malloc(), mmap() and sbrk() functions will fail with errno set to [ENOMEM]. In addition, the automatic stack growth will fail-dependent. For example, some implementations permit a limit whose value is greater than RLIM_INFINITY and others do not. The exec family of functions also cause resource limits to be saved. Upon successful completion, getrlimit() and setrlimit() return 0. Otherwise, these functions return -1 and set errno to indicate the error. The getrlimit() and setrlimit() functions will behaviour may occur. None. brk(), exec, fork(), malloc(), open(), sbrk(), sigaltstack(), sysconf(), ulimit(), <stropts.h>, <sys/resource.h>.
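The same limits are reachable from Python through the standard resource module (Unix only); a small sketch:

import resource

# getrlimit() returns the (soft, hard) pair for a resource.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open file descriptors: soft=%d hard=%d" % (soft, hard))
print("RLIM_INFINITY =", resource.RLIM_INFINITY)

# setrlimit() raises an error if the new soft limit exceeds the hard
# limit or the process lacks the privilege to raise the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft + 16, hard), hard))

# A zero core-file size limit prevents core dumps, as described above.
resource.setrlimit(resource.RLIMIT_CORE,
                   (0, resource.getrlimit(resource.RLIMIT_CORE)[1]))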
http://man.remoteshaman.com/susv2/xsh/getrlimit.html
CC-MAIN-2019-22
refinedweb
540
54.73
GatsbyJS - Add Dynamic Images from Data Information drawn from In Part 3, you used gatsby-plugin-image to add static images to your home page. Now that you’ve worked a bit more with Gatsby’s data layer, it’s time to revisit gatsby-plugin-image. This time, you’ll learn how to add dynamic images to your site. In this part of the Tutorial, you’ll use the dynamic GatsbyImage component to add hero images to each of your blog posts. By the end of this part of the Tutorial, you will be able to: - Use the GatsbyImage component to create images dynamically from data. What’s the difference between GatsbyImage and StaticImage? Back in Part 3 of the Tutorial, you learned about how to use the StaticImage component from gatsby-plugin-image. How do you know whether to use the StaticImage component or the GatsbyImage component? The decision ultimately comes down to whether or not your image source is going to be the same every time the component renders. The StaticImage component is for static image sources, like a hard-coded file path or remote URL. In other words, the source for your image is always going to be the same every time the component renders. The GatsbyImage component is for dynamic image sources, like if the image source gets passed in as a prop. In this part of the Tutorial, you’ll add a hero image to your blog post page template. You’ll also add some frontmatter data to your blog post .mdx files to specify which image to use for each post. Since the image source you’ll load in the page template will change for each blog post, you’ll use the GatsbyImage component. ## Add hero images to blog post frontmatter Many blog sites include a hero image at the top of each post. These images are usually large, high-quality photos that are meant to grab the reader’s attention and make them want to stay on the page longer. The steps below will help you find and download some photos for your hero images and add them to the frontmatter for each of your blog posts. Start by organizing the blog directory with all your MDX posts. First, create a new subdirectory in your blog folder for each post. Then, rename each of your .mdx files to index.mdx (to prevent the routes from ending up with a duplicated path parameter, like blog/my-post/my-post/). For example, a post at blog/my-first-post.mdx would move to blog/my-first-post/index.mdx. Similarly, a post at blog/another-post.mdx would move to blog/another-post/index.mdx. Note: After you move or rename your .mdx files, you’ll need to stop and restart your local development server for your changes to be picked up. Use a website like Unsplash to find some pretty, freely usable images. For best results, choose photos with a landscape (horizontal) orientation, since those will fit on your screen more easily. When you’ve found a photo that you like, download it and add it to subdirectory for one of your blog posts. Continue downloading photos until you have a different hero image for each post. Pro Tip: Sometimes, the images you download from the internet can be a little too high quality. If you know your site will only ever render an image at 1000 pixels wide, there’s no point in having a source image that’s 5000 pixels wide. All those extra pixels mean extra work to process and optimize your images, which can slow down build times. As a general guideline, it’s a good idea to preoptimize your image files by resizing them to be no more than twice the maximum size they’ll be rendered at. 
For example, if your layout is 600 pixels wide, then the highest resolution image you will need is 1200 pixels (to account for 2x pixel density). For more detailed information, refer to the doc on Preoptimizing Your Images. Next, add some additional frontmatter fields to each of your blog posts: - hero_image: the relative path to the hero image file for that post - hero_image_alt: a short description of the image, to be used as alternative text for screen readers or in case the image doesn’t load correctly - hero_image_credit_text: the text to display to give the photographer credit for the hero image - hero_image_credit_link: a link to the page where your hero image was downloaded from blog/my-first-post/index.mdx --- title: "My First Post" date: "2021-07-23" hero_image: "./christopher-ayme-ocZ-_Y7-Ptg-unsplash.jpg" hero_image_alt: "A gray pitbull relaxing on the sidewalk with its tongue hanging out" hero_image_credit_text: "Christopher Ayme" hero_image_credit_link: "" --- ... blog/another-post/index.mdx Copyblog/another-post/index.mdx: copy code to clipboard --- title: "Another Post" date: "2021-07-24" hero_image: "./anthony-duran-eLUBGqKGdE4-unsplash.jpg" hero_image_alt: "A grey and white pitbull wading happily in a pool" hero_image_credit_text: "Anthony Duran" hero_image_credit_link: "" --- ... blog/yet-another-post/index.mdx --- title: "Yet Another Post" date: "2021-07-25" hero_image: "./jane-almon-7rriIaBH6JY-unsplash.jpg" hero_image_credit_text: "Jane Almon" hero_image_credit_link: "" --- ... Now that your hero images are set up, it’s time to connect them to the data layer so you can pull them into your blog post page template. Install and configure gatsby-transformer-sharp In order to use the GatsbyImage component, you’ll need to add the gatsby-transformer-sharp transformer plugin to your site. When Gatsby adds nodes to the data layer at build time, the gatsby-transformer-sharp plugin looks for any File nodes that end with an image extension (like .png or .jpg) and creates an ImageSharp node for that file. In the terminal, run the following command to install gatsby-transformer-sharp: npm install gatsby-transformer-sharp Add gatsby-transformer-sharp to the plugins array in your gatsby-config.js file. gatsby-config.js module.exports = { siteMetadata: { title: "My First Gatsby Site", }, plugins: [ // ...existing plugins "gatsby-transformer-sharp", ], } Since you’ve added gatsby-transformer-sharp to your site, you’ll need to restart your local development server to see the changes in GraphiQL. You’ll take a closer look at GraphiQL in the next step. Render hero image in the blog post page template With all the necessary tools in place, you’re ready to add your hero image to your blog post page template. Task: Use GraphiQL to build the query First, you’ll use GraphiQL to add the hero image frontmatter fields to the GraphQL query for your blog post page template. Open GraphiQL by going to localhost:8000/___graphql in a web browser. Start by copying your existing blog post page query into the GraphiQL Query Editor pane. Run it once to make sure everything is working correctly. Note: You’ll need to set up an object in the Query Variables pane with an id that matches one of your posts. Refer to Part 6 section on query variables if you need a refresher on how to set that up. query ($id: String) { mdx(id: {eq: $id}) { frontmatter { title date(formatString: "MMMM D, YYYY") } body } } { "data": { "mdx": { "frontmatter": { "title": "My First Post", "date": "July 23, 2021", }, "body": "..." 
} }, "extensions": {} } In the Explorer pane, check the boxes for the hero_image_alt, hero_image_credit_link, and hero_image_credit_text fields. When you run your query, you should get back a response something like the JSON object below. Note: Remember to scroll down to the blue frontmatter field, which is lower than the purple frontmatter: argument. query ($id: String) { mdx(id: {eq: $id}) { frontmatter { title date(formatString: "MMMM D, YYYY") hero_image_alt hero_image_credit_link hero_image_credit_text } body } } { "data": { "mdx": { "frontmatter": { "title": "My First Post", "date": "July 23, 2021", "hero_image_alt": "A gray pitbull relaxing on the sidewalk with its tongue hanging out", "hero_image_credit_link": "", "hero_image_credit_text": "Christopher Ayme" }, "body": "..." } }, "extensions": {} } Adding the hero_image field itself is a bit more involved. Within the hero_image field, toggle the childImageSharp field, and then check the box for the gatsbyImageData field. Now your query should look like this: query ($id: String) { mdx(id: {eq: $id}) { frontmatter { title date(formatString: "MMMM D, YYYY") hero_image_alt hero_image_credit_link hero_image_credit_text hero_image { childImageSharp { gatsbyImageData } } } body } } Pro Tip: How does GraphiQL know to add extra fields to the hero_image frontmatter field? When Gatsby builds your site, it creates a GraphQL schema that describes the different types of data in the data layer. As Gatsby builds that schema, it tries to guess the type of data for each field. This process is called schema inference. Gatsby can tell that the hero_image field from your MDX frontmatter matches a File node, so it lets you query the File fields for that node. Similarly, gatsby-transformer-sharp can tell that the file is an image, so it also lets you query the ImageSharp fields for that node. Run your query to see what data you get back in the response. It should mostly look like the response you got back before, but this time with an extra hero_image object: { "data": { "mdx": { "frontmatter": { // ... "hero_image": { "childImageSharp": [ { "gatsbyImageData": { "layout": "constrained", "backgroundColor": "#282828", "images": { "fallback": { "src": "/static/402ec135e08c3b799c16c08a82ae2dd8/68193/christopher-ayme-ocZ-_Y7-Ptg-unsplash.jpg", "srcSet": "/static/402ec135e08c3b799c16c08a82ae2dd8/86d57/christopher-ayme-ocZ-_Y7-Ptg-unsplash.jpg 919w,\n/static/402ec135e08c3b799c16c08a82ae2dd8/075d8/christopher-ayme-ocZ-_Y7-Ptg-unsplash.jpg 1839w,\n/static/402ec135e08c3b799c16c08a82ae2dd8/68193/christopher-ayme-ocZ-_Y7-Ptg-unsplash.jpg 3677w", "sizes": "(min-width: 3677px) 3677px, 100vw" }, "sources": [ { "srcSet": "/static/402ec135e08c3b799c16c08a82ae2dd8/6b4aa/christopher-ayme-ocZ-_Y7-Ptg-unsplash.webp 919w,\n/static/402ec135e08c3b799c16c08a82ae2dd8/0fe0b/christopher-ayme-ocZ-_Y7-Ptg-unsplash.webp 1839w,\n/static/402ec135e08c3b799c16c08a82ae2dd8/5d6d7/christopher-ayme-ocZ-_Y7-Ptg-unsplash.webp 3677w", "type": "image/webp", "sizes": "(min-width: 3677px) 3677px, 100vw" } ] }, "width": 3677, "height": 2456 } } ] } }, "body": "..." } }, "extensions": {} } If you take a closer look at the gatsbyImageData object on the hero_image.childImageSharp field, you’ll see that it contains a bunch of information about the hero image for that post: dimensions, file paths for the images at different sizes, fallback images to use as a placeholder while the image loads. All this data gets calculated by gatsby-plugin-sharp at build time. 
The gatsbyImageData object in your response has the same structure that the GatsbyImage component needs to render an image. Note: You might have noticed that the gatsbyImageData field in GraphiQL accepts several arguments, like aspectRatio, formats, or width. You can use these arguments to pass in extra data about how you want the Sharp image processing library to create your optimized images. These options are equivalent to the ones you would pass into the StaticImage component as props. For more information, see the gatsby-plugin-image Reference Guide. Task: Add hero image using GatsbyImage component Once you have your GraphQL query set up, you can add it to your blog post page template. Replace your existing page query with the query you built in GraphiQL that includes the hero image frontmatter fields. src/pages/blog/{mdx.slug}.js // imports const BlogPost = ({ data }) => { return ( // ... ) } export const query = graphql` query($id: String) { mdx(id: {eq: $id}) { body frontmatter { title date(formatString: "MMMM DD, YYYY") hero_image_alt hero_image_credit_link hero_image_credit_text hero_image { childImageSharp { gatsbyImageData } } } } } ` export default BlogPost Import the GatsbyImage component and the getImage helper function from the gatsby-plugin-image package. src/pages/blog/{mdx.slug}.js import * as React from 'react' import { graphql } from 'gatsby' import { MDXRenderer } from 'gatsby-plugin-mdx' import { GatsbyImage, getImage } from 'gatsby-plugin-image' import Layout from '../../components/layout' // ... Use the getImage helper function to get back the gatsbyImageData object from the hero_image field. src/pages/blog/{mdx.slug}.js // imports const BlogPost = ({ data }) => { const image = getImage(data.mdx.frontmatter.hero_image) return ( // ... ) } // ... Note: getImage is a helper function that takes in a File node or an ImageSharp node and returns the gatsbyImageData object for that node. You can use it to keep your code a little cleaner and easier to read. Without the getImage helper function, you’d have to type out data.mdx.frontmatter.hero_image.childImageSharp.gatsbyImageData (which is longer, but gives you back the same data). Use the GatsbyImage component from gatsby-plugin-image to render the hero image data. You should pass GatsbyImage two props: - image: the gatsbyImageData object for your hero_image field - alt: the alternative text for your image, from the hero_image_alt field src/pages/blog/{mdx.slug}.js return ( <Layout pageTitle={data.mdx.frontmatter.title}> <p>Posted: {data.mdx.frontmatter.date}</p> <GatsbyImage image={image} alt={data.mdx.frontmatter.hero_image_alt} /> <MDXRenderer> {data.mdx.body} </MDXRenderer> </Layout> ) Now, when you visit each of your blog post pages, you should see the corresponding hero image rendered before the body of your post! Task: Add image credit after hero image It’s important to give credit to people whose work you use in your own site. The last piece of including hero images to your site is to add a paragraph to give credit to the photographer. Pro Tip: Since the credit link goes to an external page (in other words, one that’s not part of your site), you can use the HTML tag instead of the Gatsby Link component. Remember, Gatsby’s Link component only gives performance benefits for internal links to other pages within your site. 
src/pages/blog/{mdx.slug}.js // imports const BlogPost = ({ data }) => { const image = getImage(data.mdx.frontmatter.hero_image) return ( <Layout pageTitle={data.mdx.frontmatter.title}> <p>{data.mdx.frontmatter.date}</p> <GatsbyImage image={image} alt={data.mdx.frontmatter.hero_image_alt} /> <p> Photo Credit:{" "} <a href={data.mdx.frontmatter.hero_image_credit_link}> {data.mdx.frontmatter.hero_image_credit_text} </a> </p> <MDXRenderer> {data.mdx.body} </MDXRenderer> </Layout> ) } export const query = graphql` ... ` export default BlogPost Syntax Hint: You might have noticed that there’s a {“ “} after the “Photo Credit:” text <p> tag. That’s to make sure that a space gets rendered between the colon (:) and the link text. Try removing the {“ “} and see what happens. The paragraph text should end up being “Photo Credit:Author”. ### Key takeaways - Use the StaticImage component if your component always renders the same image (from a relative path or a remote URL). - Use the GatsbyImage component if the image source changes for different instances of your component (like if it gets passed in as a prop). ------------------------------------------------------------------------ Last update on 04 Nov 2021 ---
https://codersnack.com/gatsbyjs-add-dynamic-images/
CC-MAIN-2022-33
refinedweb
2,357
54.63
I'm trying to sign an xml but this is showing an Id attribute in the Signature tag and the xmlns attribute is not appearing. This is the xml generated: I'm trying to sign an xml but this is showing an Id attribute in the Signature tag and the xmlns attribute is not appearing. This is the xml generated: I have one persistent, xml-enabled class. I need to convert objects of this class to XML. However I need to project each object in different ways (depending on where I send it), for example: Is there a way to do that with XML Adaptor? Hello everyone, I'm new to COS development. I'm trying to generate a simple XML file based on a query and save into my server. I'm looking for stuff to get it done, if anyone has a tutorial or a step-by-step post on how to do it. My difficulty is just in generating the XML file. Hello everyone! I need to have a ResultSet of type % SQL.Statement show its contents when it is trafficked in a message property by Business Process. I tried to use the % XML.DataSet type that inherits properties of type % XML.Adaptor, but did not work. Is there any other way to traffic as an object, other than within a Stream? Note: I can not traffic Streams and I will not be able to use Correlate in this case. We can load a CCDA xml document into SDA3 object. Once parsing SDA3 object, how do we determine from which XPATH from CCDA the specific SDA3 elements were mapped to. Is there any way? I For each instance of an XML-enabled class I want to control whether a property is projected as an empty element or not projected at all. With string properties, I can set the value as $c(0) for an empty element, or an empty string for no projection. But it looks like this isn't consistent for properties with type %Boolean or %Integer. With booleans $c(0) gets projected as "false". With integers $c(0) stays as a zero-width space in the XML output (which gives me a SAX parser error if I try to read it). Is there a way to do this that works for all datatype properties? I have a XML enabled persistent class and a XML representation of some object of this class (object ID is available). How can I use XML Reader (or some other mechanism) to automatically update this object? I have an Ens.StreamContainer which holds XML that was received, and I need to validate that against an XSD schema. The schema is very simple, only looking at the root element and maybe a couple other items to ensure the XML is what we expect before continuing the data flow. I have a class extends %Persistent & %XML.Adaptor It has 100 properties for example Now I do intend to create a xml schema that I can import in Ensemble->XMLSchemas I did try to use XMLExportToString and %XMLWriter.GetXMLString but didn't give me a proper schema. May be I am missing some small step Can someone pls help The same piece of data never throws this error on other operations. I am getting this error on one always . The same SDA container never throws error on other operation. 
ERROR #6901: XSLT XML Transformer Error: SAXParseException: invalid character 0x1C (Occurred in an unknown entity) Set xslt=##class(%Dictionary.XDataDefinition).%OpenId(..%ClassName(1)_"||Xmethod",-1,.tStatus) $$$ThrowOnError(tStatus) Set tStatus= ##class(%XML.XSLT.Transformer).TransformStream(myStream,xslt.Data,.OpStream) $$$ThrowOnError(tStatus) XData Xmethod { I want to generate one of the following xml data, but after I generate only de="", after I need to generate is de="DEX71.41.009.01" I need to create a document with a root like this: <?xml version="1.0" encoding="UTF-8"?> <RCMR_IN200002FI01 xmlns="urn:hl7-org:v3" ITSVersion="XML_1.0"> ... </RCMR_IN200002FI01> However, the CreateDocument in %XML.Document only allows namespace as an additional argument. I did override this method, but trying to do something like Do document.SetAttribute("ITSVersion",,"XML_1.0") only results an empty document with the <?xml> declation only. -Pasi- Intersytems documentation says not to hold entire SDA as object in In-memory. I am trying to achieve this in cache objects I am using 2014.1 here is the original code in C# and would like to convert this to cache here is my code first c# and cache follows I am trying to read an xml document using %XML.TextReader and that's is all well and l can get my elements values but would like to determine where the next record start on the xml without referring to the document path in essence would like to use the same method to read different xml docs. I would like to know if is there a way or a function that I can use to get the start and end of a record in xml as I would to get the start and end element.. Hello, I've been manipulating XML objects via Cache, but have had some difficulty understanding how to use the following method detailed within EnsLib.EDI.XML.Prop: Method choiceGetCount(Output pCount, pDOMPath As %String, pRef As %String) As %Status From what I've read when walking through the code for this method, it appears to count a listing of repeating XML elements. However, despite my attempts to search for examples or attempts to implement this function, I am unable to do so. I am trying to read a csv file and transfer it to a XML file without storing the objects to a database I have this code here doing the reading and have another method transferring the object read to a file but the reading one reads fine when it comes to the converting one I run to problems nothing happens any help appreciated I have a business service that brings in a xml virtual document to the production and also a csv service that brings in a csv file and have a process that transforms both to a xml output but I have a problem with the csv as it is giving me this error when I try to trans form it ```ERROR <Ens>ErrException: <PROPERTY DOES NOT EXIST>zOnRequest+1 ^EnsLib.MsgRouter.VDocRoutingEngine.1 *DocType``` I have read here followed the suggestion but now I do not get any errors but my m I'm using %SOAP.WebRequest to send SOAP requests: Currently the XML I send looks like this: <?xml version="1.0" encoding="UTF-8" ?> <SOAP-ENV:Envelope xmlns: <SOAP-ENV:Header> <SomeCustomHeader/> </SOAP-ENV:Header> <SOAP-ENV:Body><SomeCustomBody/></SOAP-ENV:Body> </SOAP-ENV:Envelope> However, I want XML to be generated differently: How can I tweak XML generation to achieve that? Hello, Has anyone ever had issues using target class HS.SDA3.Container within a data transformation where the CareProvider values do not populate?. 
Hello devs, I'm facing an issue with one of my business services, which basically grabs a XML from a webservice (which in turn reads the data from the caché database) and does some processing afterwards. The XML content (which is formed of some of the table fields values) contains a special character: ‘ (left single quote) hi, 1. Created a class(test) and added a classmethod(checkdata). 2. Assigned a object with xml. 3. Created a new class for response and initialized in classmethod(checkdata). 4. Created a new class for request parsing with list of object parameter. 5. While parsing xml in request for list of object, the result count is "0". But the xml has value for "2" object list. 6. XML has follow: when ever I pull the xml file from business services and its failed in business process and I am getting the below error. ERROR?
https://community.intersystems.com/tags/xml?filter=unanswered
CC-MAIN-2019-30
refinedweb
1,322
61.97
My observations and decisions on .NET

I have completed my initial evaluation of the .NET environment. Coming from nearly 10 years developing in PHP, it was a daunting task, but I think there are other PHP developers out there trying to make the jump who might find my thoughts useful. Therefore, I decided to make this post. It also may shed a little light on some of my recent questions here on this forum.

----------

As I sat down to look over things, I knew I was in for a shock. The .NET environment is huge. Not only are there several programming languages to choose from, but a great many project types. On top of that, some have multiple frameworks that can be used to leverage your language of choice. And that's just the tip of the iceberg. There's Crystal Reports, LINQ, mobile devices, and the list goes on.

An important thing to realise in my movement to .NET is that I am not an employee of a company, but a guy sitting at home. A few friends and I set up a FreeBSD box a few years ago, and I wrote us a small community website. We did things a certain way, and we had gotten used to it. So while I investigated things .NET-side, I always kept those things in mind. I had to constantly ask myself which elements of our older application we should keep intact, and which we should change. I never ruled out anything, despite personal leanings, but at the same time I tried to see if there was a way to move as much over as we could with as little change as possible. For example, our localized phrases and templates were stored in XML files. This was a medium the non-developers were comfortable editing. While I was able to duplicate much of our old application, I began seeing other ways to do things (with the help of our good friends here at SitePoint), ways that may actually work out better in the long run.

----------

The first thing I played with was the WebForms framework. Many people confuse this with .NET itself. They are not the same thing. WebForms is one of many frameworks that build on the .NET infrastructure. I found WebForms extremely easy to use, although I do have a history of OOP design from well before my PHP days. I really liked the idea behind the template file and the code file being separate, but able to respond to each other. Controls such as the repeater made things a lot easier. In my older app, I had to load a template into a string, and while looping through a recordset, set variables that would be evaluated into it, then keep a growing array of these filled templates, then finally spit stuff out, all assembled. It wasn't elegant, but it worked. Now, all I have to do is define the template once, assign a DataSource and call DataBind(). Wow.

The next element I looked at was how URLs were handled. My older application used a single point-of-entry file and took certain parameters, for example: index.php?cmd=CategoryEdit&id=42. From there it was a pretty simple controller -> command -> view process. I wanted to see what I could do with .NET in that regard. One thing to keep in mind is that my idea of the MVC framework added a layer that the ASP MVC framework didn't seem to have. In my old ideology, the Command only processed data, such as postbacks, and decided what view to render. The command would pass any models on to the view, where output is assembled. Due to how the WebForms worked, I wondered if there was a good way to integrate both.
I looked long enough at the ASP MVC framework to understand how it was put together, and came to the conlusion that I would be able to come to a happy medium between the two. ---------- The first thing I did was enabled routing in my "test" application. This was done with the System.Web.Routing namespace included with the Visual Studio Professional distribution. The MVC libraries weren't required for what I wanted to do. After enabling that, I added some code to my Global.asax file, in the Application_OnStart event handler to add my one route, compelete with constraints and defaults. It isn't complete of course, just enough to test with. Code VBNET: Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs) Dim routeDefaults As RouteValueDictionary Dim routeConstraints As RouteValueDictionary Dim routeDefinition As Route routeDefaults = New RouteValueDictionary() routeDefaults.Add("command", "Index") routeDefaults.Add("action", "View") routeDefaults.Add("id", 0) routeConstraints = New RouteValueDictionary() routeConstraints.Add("command", "Index|Category") routeConstraints.Add("action", "View|Add|Edit|Remove") routeConstraints.Add("id", "\d+") routeDefinition = New Route("{command}/{action}/{id}", routeDefaults, routeConstraints, New DefaultRouteHandler) RouteTable.Routes.Add("DefaultRoute", routeDefinition) End Sub Next, I created the custom RouteHandler that would be handling the requests. At first, I tried also creating custom httphandlers and run the correct one based on the request parameters. But that seemed a little overkill when the Webforms were already well blessed handlers, and I didn't want to have to reinvent templating. So I wound up with a simple mapper, that hooked requests to a physical aspx page. I used this RouteHandler to simply enforce a more readable url, moreso than control program flow. You can see the code below, and if you pay attention, you'll see where I took advantage of the context.items collection to pass the url segments on to the page. Code VBNET: Public Class DefaultRouteHandler : Implements IRouteHandler Public Function GetHttpHandler(ByVal requestContext As RequestContext) As IHttpHandler Implements IRouteHandler.GetHttpHandler Dim virtualPath As String requestContext.HttpContext.Items("command") = requestContext.RouteData.Values("command") requestContext.HttpContext.Items("action") = requestContext.RouteData.Values("action") requestContext.HttpContext.Items("id") = requestContext.RouteData.Values("id") virtualPath = New String("~/App_Pages/{command}{action}.aspx") virtualPath = virtualPath.Replace("{command}", requestContext.RouteData.Values("command")) virtualPath = virtualPath.Replace("{action}", requestContext.RouteData.Values("action")) Return BuildManager.CreateInstanceFromVirtualPath(virtualPath, GetType(Page)) End Function End Class At this point, I created a test page in my App_Pages folder, named CategoryEdit.aspx. In it, I placed three literals, and set their values in Page_Load event handler like such: Code VBNET: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load CommandValue.Text = "Command: " + Context.Items("command") ActionValue.Text = "Action: " + Context.Items("action") IDValue.Text = "ID: " + Context.Items("id") End Sub When the page was called, using /Category/Edit/42, the correct output was displayed. ---------- Now, all this was just a test, an exercise in my getting to know my way around .NET more, and wasn't meant to "impress" or "wow" anybody. It certainly isn't rocket science. 
But I just wanted to share this in case anybody found it useful. ---------- Thanks!
http://www.sitepoint.com/forums/showthread.php?607947-My-observations-and-decisions-on-NET
CC-MAIN-2016-18
refinedweb
1,154
58.18
Well, version 1.0 of Silverlight is now released amid much fanfare, and is heavily in use on sites such as ET Online’s Emmy Awards homage and the Halo 3 game guide. The common factor on these sites is the way that Silverlight is being used to provide a rich user experience with cool animations and neat video features, all bundled into a small plug-in package. So at this point it’s certainly tempting to pigeon-hole Silverlight as being a budget media-player application, but there’s actually a whole lot more to it than that. And to prove just how much more there is, in this article we’re going to look at one of the more niche uses of Silverlight as a “better” JavaScript. The Calculator Figure 1 shows a simple, browser-based calculator. There are no videos playing on the buttons, nor are there any funky animations: it’s just a simple, postfix notation calculator that can perform basic arithmetic functions. Figure 1: The Silverlight Calculator Clearly, if you were tasked to write a calculator like this for the browser, you would expect to handle all of the interaction on the client-side using JavaScript: there’s certainly no need for AJAX or ASP.NET UpdatePanels, and definitely no need for a full page round-trip cycle. But what if you’re actually not that good with script, and certainly have minimal exposure to the subtle nuances of how script works on different browsers? You could rush out and buy a good book on JavaScript programming, but you might already be a skilled Visual Basic or C# developer. Is there a way of leveraging those skills on the browser client? There certainly is: it’s called Silverlight 1.1. Silverlight 1.1Silverlight 1.1 offers you an interesting alternative to writing JavaScript, at least for some of the most common browsers. Unlike its 1.0 predecessor, the Silverlight 1.1 plug-in supports a CLR – yes, full managed code – on the Mac OS X and Windows platforms, assuming that you’re running Safari, FireFox or Internet Explorer. In the coming months you will also see support on Linux, so you’re pretty well covered for the vast majority of users. Of course, Silverlight 1.1 supports all the fancy media-centric, animation and other funky XAML-based features of the earlier release, but I won’t be focusing on them in this article. agclr, mscorlib and friendsOnce you’ve headed over to the Silverlight site, followed the links and installed the Silverlight SDK, you’ll find that you have a new CLR installed on your computer. Don’t bother looking for it in the traditional %WINDIR%/Microsoft.NET folder, because you won’t find it there. Instead, open up the folder where you installed the SDK (in my case C:\Program Files\Microsoft Silverlight) and you will find a collection of assemblies such as mscorlib.dll, agclr.cll, system.dll and system.silverlight.dll. This collection of assemblies represents the cut down, but highly functional CLR and .NET Framework Class Library that you can use within your Silverlight application. With full support for C#, Visual Basic and even with new dynamic language support, you’re going to feel completely at home writing the same style of code that you would previously have written for Windows Forms, server-side ASP.NET or WPF. The Framework Class Library that is provided with Silverlight is somewhat limited in scope, but with support for generic collections, I/O, threading and Web Service requests, it is still highly capable. Currently, one of the best ways of seeing precisely what’s available is to use Lutz Roeder’s now famous Reflector. 
This will provide you complete insight into the full range of supported features in the library, although you must bear in mind that this is only an alpha release and is thus subject to considerable change. Security and the Silverlight sandboxIt’s certainly great having all of this code available for use on the client, but how does it impact upon security? Silverlight runs inside a very stringent sand-box that will, for example, prohibit I/O access other than through Isolated Storage unless authorised by the user via the common file dialogs. The Calculator.aspx pageLet’s start off by looking at the mark-up for the Calculator page, Calculator.aspx, which is shown in Figure 1. <%@ Page Language=”C#” AutoEventWireup=”true” CodeFile=”Calculator.aspx.cs” Inherits=”CalculatorPage” %> <!DOCTYPE html … > <html xmlns=””> <head runat=”server”> <title>Silverlight Calculator</title> <style type=”text/css”> input[type=button] { } input[type=text] { } </style> <script type=”text/javascript” src=”Silverlight.js”></script> <script type=”text/javascript” src=”Default.aspx.js”></script> </head> <body> <form id=”form1” runat=”server”> <div … > <table border=”0” cellpadding=”0” cellspacing=”0”> <tr> <td colspan=”3”> <input type=”text” id=”txtValue” style=”width: 150pt;“ readonly=”readonly” /></td> <td> <input type=”button” id=”btnReset” value=”CE” /></td> </tr> <tr> <td> <input type=”button” id=”btn7” value=”7” /></td> <td> <input type=”button” id=”btn8” value=”8” /></td> <td> <input type=”button” id=”btn9” value=”9” /></td> <td> <input type=”button” id=”btnAdd” value=”+” /></td> </tr> <!-- Other rows of buttons elided --> … </form> <div id=”SilverlightControlHost”> <script type=”text/javascript”> createSilverlight(); </script> </div> </body> </html>The main things to note here, other than the expedient but ugly use of a <table> for the layout, are as follows: - I’m using standard client-side HTML controls for the UI. This was by design for this article, in order to reinforce the fact that the managed code executed by the Silverlight CLR is operating on the client. You can equally well use ASP.NET server-side controls so that you can program against them using managed code on both the client and the server. - The highlighted script files, and the <div> with the SilverlightControHost id, are essential to enabling Silverlight. As discussed in the previous articles, this is how the Silverlight plug-in creates your Silverlight content on the page. Even though we’re not going to be providing a user interface in Silverlight, we still need these items in order to pull down and execute the code. Enabling HTML interactivityThe purpose of this article is to demonstrate how managed code can interact with the HTML elements on the page. To achieve this, you obviously have to identify the elements, which I’ve done in this case by specifying their id attribute. However, you also need to make sure that your Silverlight code is allowed access to them. 
This involves setting some parameters within the createSilverlight() method that is found in the Calculator.aspx.js file: function createSilverlight() { Silverlight.createObjectEx({ source: “CalcPage.xaml”, parentElement: document.getElementById( “SilverlightControlHost”), id: “SilverlightControl”, properties: { width: “0”, height: “0”, version: “1.1”, enableHtmlAccess: “true”, isWindowless: “true” }, events: {} }); }The naming pattern of Calculator.aspx.js is just a convenience used for identifying the JavaScript file for a page; this code is not compiled or executed on the server. You will see that the enableHtmlAccess parameter has been set to “true”. And yes, that really is true inside quotes. Unless you set enableHtmlAccess to “true”, any attempt to interact with HTML elements on the page will yield the dialog seen in Figure 2. Figure 2: Errors happen unless enableHtmlAccess is set to “true” There are a couple of other parameters of interest. isWindowless is also set to “true” because we desire no window at all, allowing HTML elements to lie atop the Silverlight control. This is not necessarily important for the calculator, as you might also have noticed that its height and width are both set to “0”. However, if you’re writing a page that has HTML overlaid onto the same UI space as your Silverlight content, this needs to be set. Finally, you’ll have seen the source parameter that specifies a XAML file, CalcPage.xaml. As was discussed in the previous articles, XAML is typically used to describe the user interface of a Silverlight application. However, the user interface for the calculator is written in pure HTML, so why do I need a XAML file? Let’s find out. Hooking the Loaded eventThe answer is that the XAML will be used to generate half of a class that will then be instantiated in the Silverlight control, in order to provide the root element. This will typically be a Canvas onto which we’d create a UI, but in this case it’s sneakily set to be invisible and occupy no width and height. What’s great about this Canvas, though, it that it provides us with a hook – the Loaded event – that we can use to connect up event handlers for the client-side controls. Here’s the content of CalcPage.xaml: <Canvas x:Name=”parentCanvas” xmlns=” client/2007” xmlns:x=” winfx/2006/xaml” Loaded=”Page_Loaded” x:Class= ”CalcLibrary.CalcPage; assembly= ClientBin/ CalcLibrary.dll” Width=”0” Height=”0” Background=”White” Visibility= ”Collapsed” > </Canvas>This also highlights one other interesting aspect of working with Silverlight 1.1: how does it know where to download the code for the application from? As you can see from the x:Class attribute, it’s expecting the code for the class to be placed in an assembly called CalcLibrary.dll, which is stored within a ClientBin folder. It’s time to see where these come from. Silverlight class librariesI used Microsoft Visual Studio 2008 Beta 2, with the Microsoft Silverlight 1.1 alpha (September Refresh), to build the Silverlight Calculator, and I would recommend that this as the platform of choice over Microsoft Visual Studio 2005 if you’re working with Silverlight 1.1. The solution consists of two projects: a completely bog standard ASP.NET Web Site, and a Silverlight Class Library, as shown in Figure 3. Figure 3: The complete SilverCalc solution Undoubtedly, the easiest way to build a Silverlight 1.1 application, especially when integrating them into existing Web sites, is to use a Silverlight Class Library project. 
Visual Studio 2008 enables you to link them into the Web site simply by right clicking on the site’s folder in Solution Explorer and selecting Add Silverlight Link… from the resulting menu, as shown in Figure 4. Figure 4: Adding a link to a Silverlight project When you add the link, Visual Studio 2008 will create the ClientBin folder for you and copy the built Silverlight Class Library into it. It will also copy any XAML files across into the Web Site so that they can be downloaded as needed. The important thing to remember with this is that you should never edit the XAML files that have been copied to the Web Site, because they are re-copied across after each build; always work on the file in the Silverlight Class Library project. Client-side C#It’s finally time to get down and dirty with the code for the calculator. First up we have the simple calculator implementation class: namespace CalcLibrary { public class Calc { public double CurrentValue { get; set; } public void Add(double val) { CurrentValue += val; } public void Subtract( double val) { CurrentValue -= val; } public void Divide(double val) { CurrentValue /= val; } public void Multiply( double val) { CurrentValue *= val; } public void Reset() { CurrentValue = 0.0d; } } }As you can see, the Calc type simply maintains a current value and allows you to perform a number of arithmetic operations on that value. It is, however, written in C#, utilising features such as the new C# 3.0 simplified property mechanism. I am confident that you will let your imagination run riot with the possibilities of more advanced code than this. The real fun comes with the code behind the CalcPage.xaml file, selected highlights of which are shown in the following C# code which bridges to HTML elements: namespace CalcLibrary { public partial class CalcPage : Canvas { Calc theCalculator = new Calc(); // Flag to indicate that an // operation button was the // last button clicked bool cmdClicked = true; public void Page_Loaded( object o, EventArgs e) { // Required to initialize // variables InitializeComponent(); SetupOperationEventHandlers(); SetupDigitEventHandlers(); } private void SetupOperationEventHandlers() { AttachClickEvent( “btnAdd”, Add ); // Add event handlers for // other operation buttons … } void AttachClickEvent( string control, EventHandler<HtmlEventArgs> target ) { HtmlDocument document = HtmlPage.Document; HtmlElement elem = document.GetElementByID( control); elem.AttachEvent( “onclick”, target ); } void Add( object sender, HtmlEventArgs e ) { cmdClicked = true; theCalculator.Add( GetValue() ); SetTextValue( theCalculator.CurrentValue ); } // Methods for Set, Subtract, // Divide, Multiply not shown // for brevity double GetValue() { HtmlDocument doc = HtmlPage.Document; HtmlElement txt = doc.GetElementByID( “txtValue”); string val = txt.GetAttribute( “value” ); if( val == null ) val = “”; return Double.Parse( val ); } private void SetTextValue( double value ) { HtmlDocument doc = HtmlPage.Document; HtmlElement txt = doc.GetElementByID( “txtValue”); txt.SetAttribute( “value”, value != 0.0 ? value.ToString() : “” ); txt.SetStyleAttribute( “color”, value < 0.0 ? 
“#880000” : “#000000” ); } private void SetupDigitEventHandlers() { // attaches an onclick // handler for each digit // button } void DigitPressed( object sender, HtmlEventArgs e ) { // manipulates the value of // the txtValue control // when a digit button is // clicked } } }As you can see from the highlighted sections, there are really only a few important steps to follow in order to program against an HTML element using C#, namely: - The key bridging link between our managed Silverlight code and the HTML elements on the page is provided by the System.Windows.Browser.HtmlPage type, which exposes the current document through its static Document property. Using this is always the start point for hooking into the DOM. - From the System.Windows.Browser.HtmlDocument object that is returned from step 1, you can locate any HTML element using the GetElementById() method. This model should be very familiar for existing JavaScript developers, and isn’t a million miles away from server-side ASP.NET techniques such as FindControl(). GetElementByID() will return an HtmlElement object reference which you can use to manipulate the object. - To read the value of an attribute on the HtmlElement object, simply call its GetAttribute() method, specifying the attribute’s name. This will return you a string that you can parse into the appropriate CLR type. - To set the value of an attribute on the HtmlElement object, call its SetAttribute() method, this time passing in the name of the attribute and the value that you would like it to be set to. Note that if you want to adjust the element’s style, use SetStyleAttribute, which takes the name of the CSS style property that you want to configure. - Hooking up event handlers is done using the AttachEvent() method. This takes the name of the client side event, and a delegate of type EventHandler or EventHandler<HtmlEventArgs>. I’d always recommend the second approach, as the HtmlEventArgs parameter that you then receive is full of useful properties which can make programming the event handler much easier. You can, of course, disconnect event handlers using the DetachEvent() method. Clearly, the above code is showing you that you need to have a basic understanding of the HTML elements and what they offer, but with the power of an IntelliSense-enabled IDE behind you, this is not too onerous. What is so impressive, though, about the way Silverlight works is that all of the client interactivity code is written in managed C#. Thus I have no worries about how events bubble in the browser, or how to write classes in JavaScript. This is powerful medicine! Other useful featuresThere are many other aspects of Silverlight that make developing for, and working with, the browser easier. The System.Windows.Browser namespace is full of useful types to help with your code’s integration into the page. For example, the BrowserInformation class provides, as its name suggests, information about the user agent, whether cookies are supported and so on. The HttpUtility class is also useful if you need to encode or decode data for use in Urls. And if you’re like me, it’s unlikely that your code is going to be perfect first time. Fortunately, Visual Studio 2008 offers full debug support for your Silverlight code, so you can set breakpoints and use the familiar Diagnostics mechanisms that you are used to when tracking down bugs. 
Note that engaging managed debugging in Visual Studio 2008 will disable script debugging, but the good news is that as you’re now using managed code anyway, who cares? ConclusionSilverlight 1.1 is not just about pretty, XAML-based user interfaces. Sure, it’s highly likely that the majority of Silverlight 1.1 applications will utilise those features, but it is not exclusively about rich media content. It affords you the opportunity to interact with HTML elements – in fact any scriptable object – using managed code. You might therefore feel that this is an easier (and richer) path to, dare I say it, the next generation of AJAX-enabled applications, with C# or Visual Basic replacing JavaScript. There’s also no denying that Silverlight offers a deeper programming library, and one that is more familiar to many .NET developers, than that offered by JavaScript. Visual Studio 2008 in turn provides a very capable development and debugging environment that is hard to match in the scripting world. Of course, this comes at a cost. Silverlight requires a plug-in and it doesn’t have quite the reach of a pure JavaScript solution. Somewhat cheekily, this article was entitled “Silverlight as a better JavaScript”. Ultimately, using Silverlight with managed code on the client is not going to replace JavaScript completely. Even without the fancy visuals, though, Silverlight is a powerful client-side technology that complements existing Web coding techniques superbly. And at the end of the day, you just have to love the fact that you can use your familiar .NET types, with your favourite .NET language, to write your next generation Web 3.0 applications. Now that’s cool. Dave Wheeler is a freelance consultant and software author, working primarily with Microsoft .NET. He delivers training courses with DevelopMentor, helps moderate various Microsoft forums, and is a regular speaker at DevWeek and other conferences (he is presenting a number of talks at DevWeek 2008 in March). He can be contacted at david.wheeler@virtuadev.com. DisclaimerTo get your hands on Silverlight 1.1 you need to head over to the Silverlight site and then follow the links to get the correct runtime version/SDK for your operating system. What you’ll notice is that version 1.1 is currently only an alpha “release”. Consequently, it might be liable to a significant amount of change prior to release, so there’s no guarantee that everything you’ve seen will work with the next drop of the software.
https://www.developerfusion.com/article/84465/silverlight-as-a-better-javascript/
CC-MAIN-2018-51
refinedweb
3,083
53.61
This page provides an overview of the main aspects of Google Kubernetes Engine (GKE) networking. This information is useful to those who are just getting started with Kubernetes, as well as experienced cluster operators or application developers who need more knowledge about Kubernetes networking in order to better design applications or configure Kubernetes workloads. Kubernetes lets you declaratively define how your applications are deployed, how applications communicate with each other and with the Kubernetes control plane, and how clients can reach your applications. This page also provides information about how GKE configures Google Cloud services, where it is relevant to networking. When you use Kubernetes to orchestrate your applications, it's important to change how you think about the network design of your applications and their hosts. With Kubernetes, you think about how Pods, Services, and external clients communicate, rather than thinking about how your hosts or virtual machines (VMs) are connected. Kubernetes' advanced software-defined networking (SDN) enables packet routing and forwarding for Pods, Services, and nodes across different zones in the same regional cluster. Kubernetes and Google Cloud also dynamically configure IP filtering rules, routing tables, and firewall rules on each node, depending on the declarative model of your Kubernetes deployments and your cluster configuration on Google Cloud. Prerequisites This page uses terminology related to the Transport, Internet, and Application layers of the Internet protocol suite, including HTTP, and DNS, but you don't need to be an expert on these topics. In addition, you may find this content easier to understand if you have a basic understanding of Linux network management concepts and utilities such as iptables rules and routing. Terminology related to IP addresses in Kubernetes The Kubernetes networking model relies heavily on IP addresses. Services, Pods, containers, and nodes communicate using IP addresses and ports. Kubernetes provides different types of load balancing to direct traffic to the correct Pods. All of these mechanisms are described in more detail later in this topic. Keep the following terms in mind as you read: - ClusterIP: The IP address assigned to a Service. In other documents, it might be called the "Cluster IP". This address is stable for the lifetime of the Service, as discussed in the Services section in this topic. - Pod IP: The IP address assigned to a given Pod. This is ephemeral, as discussed in the Pods section in this topic. - Node IP: The IP address assigned to a given node. Networking inside the cluster This section discusses networking within a Kubernetes cluster, as it relates to IP allocation, Pods, Services, DNS, and the control plane. IP allocation Kubernetes uses various IP ranges to assign IP addresses to nodes, Pods, and Services. - Each node has an IP address assigned from the cluster's Virtual Private Cloud (VPC) network. This node IP provides connectivity from control components like kube-proxyand the kubeletto the Kubernetes API server. This IP is the node's connection to the rest of the cluster. Each node has a pool of IP addresses that GKE assigns Pods running on that node (a /24 CIDR block by default). You can optionally specify the range of IPs when you create the cluster. The Flexible Pod CIDR range feature lets you reduce the size of the range for Pod IPs for nodes in a given node pool. Each Pod has a single IP address assigned from the Pod CIDR range of its node. 
This IP address is shared by all containers running within the Pod, and connects them to other Pods running in the cluster. Each Service has an IP address, called the ClusterIP, assigned from the cluster's VPC network. You can optionally customize the VPC network when you create the cluster. For public clusters on GKE versions 1.22 and later created on or after October 28, 2021, each control plane is assigned a private IP address. This is not applicable to clusters running on legacy networks. For more information, see VPC-native clusters. Pods In Kubernetes, a Pod is the most basic deployable unit within a Kubernetes cluster. A Pod runs one or more containers. Zero or more Pods run on a node. Each node in the cluster is part of a node pool. In GKE, these nodes are virtual machines, each running as an instance in Compute Engine. Pods can also attach to external storage volumes and other custom resources. This diagram shows a single node running two Pods, each attached to two volumes. When Kubernetes schedules a Pod to run on a node, it creates a network namespace for the Pod in the node's Linux kernel. This network namespace connects the node's physical network interface, such as eth0, with the Pod using a virtual network interface, so that packets can flow to and from the Pod. The associated virtual network interface in the node's root network namespace connects to a Linux bridge that allows communication among Pods on the same node. A Pod can also send packets outside of the node using the same virtual interface. Kubernetes assigns an IP address (the Pod IP) to the virtual network interface in the Pod's network namespace from a range of addresses reserved for Pods on the node. This address range is a subset of the IP address range assigned to the cluster for Pods, which you can configure when you create a cluster. A container running in a Pod uses the Pod's network namespace. From the container's point of view, the Pod appears to be a physical machine with one network interface. All containers in the Pod see this same network interface. Each container's localhost is connected, through the Pod, to the node's physical network interface, such as eth0. Note that this connectivity differs drastically depending on whether you use GKE's native Container Network Interface (CNI) or choose to use Calico's implementation by enabling Network policy when you create the cluster. If you use GKE's CNI, one end of the Virtual Ethernet Device (veth) pair is attached to the Pod in its namespace, and the other is connected to the Linux bridge device cbr0.1 In this case, the following command shows the various Pods' MAC addresses attached to cbr0: arp -n Running the following command in the toolbox container shows the root namespace end of each veth pair attached to cbr0: brctl show cbr0 If Network Policy is enabled, one end of the veth pair is attached to the Pod and the other to eth0. In this case, the following command shows the various Pods' MAC addresses attached to different veth devices: arp -n Running the following command in the toolbox container shows that there is not a Linux bridge device named cbr0: brctl show The iptables rules that facilitate forwarding within the cluster differ from one scenario to the other. It is important to have this distinction in mind during detailed troubleshooting of connectivity issues. By default, each Pod has unfiltered access to all the other Pods running on all nodes of the cluster, but you can limit access among Pods. 
Kubernetes regularly tears down and recreates Pods. This happens when a node pool is upgraded, when changing the Pod's declarative configuration or changing a container's image, or when a node becomes unavailable. Therefore, a Pod's IP address is an implementation detail, and you should not rely on them. Kubernetes provides stable IP addresses using Services. Services In Kubernetes, you can assign arbitrary key-value pairs called labels to any Kubernetes resource. Kubernetes uses labels to group multiple related Pods into a logical unit called a Service. A Service has a stable IP address and ports, and provides load balancing among the set of Pods whose labels match all the labels you define in the label selector when you create the Service. The following diagram shows two separate Services, each of which is comprised of multiple Pods. Each of the Pods in the diagram has the label app=demo, but their other labels differ. Service "frontend" matches all Pods with both app=demo and component=frontend, while Service "users" matches all Pods with app=demo and component=users. The Client Pod does not match either Service selector exactly, so it is not a part of either Service. However, the Client Pod can communicate with either of the Services because it runs in the same cluster. Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service. At first glance, a Service may seem to be a single point of failure for your applications. However, Kubernetes spreads traffic as evenly as possible across the full set of Pods, running on many nodes, so a cluster can withstand an outage affecting one or more (but not all) nodes. Kube-Proxy Kubernetes manages connectivity among Pods and Services using the kube-proxy component. This is deployed as a static Pod on each node by default. Any GKE cluster running 1.16 or later has a kube-proxy DaemonSet. The DaemonSet selects nodes which are running a GKE version between 1.16.0 and 1.16.8-gke.13. If the cluster does not have any nodes running these versions, the DaemonSet shows 0 Pods. kube-proxy, which is not an in-line proxy, but an egress-based load-balancing controller, watches the Kubernetes API server and continually maps the ClusterIP to healthy Pods by adding and removing destination NAT (DNAT) rules to the node's iptables subsystem. When a container running in a Pod sends traffic to a Service's ClusterIP, the node selects a Pod at random and routes the traffic to that Pod. When you configure a Service, you can optionally remap its listening port by defining values for port and targetPort. - The portis where clients reach the application. - The targetPortis the port where the application is actually listening for traffic within the Pod. kube-proxy manages this port remapping by adding and removing iptables rules on the node. This diagram illustrates the flow of traffic from a client Pod to a server Pod on a different node. The client connects to the Service at 172.16.12.100:80. The Kubernetes API server maintains a list of Pods running the application. 
The kube-proxy process on each node uses this list to create an iptables rule to direct traffic to an appropriate Pod (such as 10.255.255.202:8080). The client Pod does not need to be aware of the topology of the cluster or any details about individual Pods or containers within them. DNS GKE provides the following managed cluster DNS options to resolve service names and external names: - kube-dns: a cluster add-on that is deployed by default in all GKE clusters. - Cloud DNS: a cloud-managed cluster DNS infrastructure that replaces kube-dns in the cluster. GKE also provides NodeLocal DNSCache as an optional add-on with kube-dns or Cloud DNS to improve cluster DNS performance. To learn more about how GKE provides DNS, see Service discovery and DNS. Control plane In Kubernetes, the control plane manages the control plane processes, including the Kubernetes API server. How you access the control plane depends on the version of your GKE cluster and the type of your cluster. For all private clusters, the control plane has both a private IP address and a public IP address. For public clusters on all GKE versions created before October 28, 2021 and all public clusters running on legacy networks, the control plane has a public IP address by default. For public clusters on GKE versions 1.22 and later created on or after October 28, 2021, each control plane is assigned a private IP address from the cluster node subnet. These clusters use Private Service Connect. Networking outside the cluster This section explains how traffic from outside the cluster reaches applications running within a Kubernetes cluster. This information is important when designing your cluster's applications and workloads. You've already read about how Kubernetes uses Services to provide stable IP addresses for applications running within Pods. By default, Pods do not expose an external IP address, because kube-proxy manages all traffic on each node. Pods and their containers can communicate freely, but connections outside the cluster cannot access the Service. For instance, in the previous illustration, clients outside the cluster cannot access the frontend Service using its ClusterIP. GKE provides three different types of load balancers to control access and to spread incoming traffic across your cluster as evenly as possible. You can configure one Service to use multiple types of load balancers simultaneously. - External load balancers manage traffic coming from outside the cluster and outside your Google Cloud VPC network. They use forwarding rules associated with the Google Cloud network to route traffic to a Kubernetes node. - Internal load balancers manage traffic coming from within the same VPC network. Like external load balancers, they use forwarding rules associated with the Google Cloud network to route traffic to a Kubernetes node. - HTTP(S) load balancers are specialized external load balancers used for HTTP(S) traffic. They use an Ingress resource rather than a forwarding rule to route traffic to a Kubernetes node. When traffic reaches a Kubernetes node, it is handled the same way, regardless of the type of load balancer. The load balancer is not aware of which nodes in the cluster are running Pods for its Service. Instead, it balances traffic across all nodes in the cluster, even those not running a relevant Pod. On a regional cluster, the load is spread across all nodes in all zones for the cluster's region. 
When traffic is routed to a node, the node routes the traffic to a Pod, which may be running on the same node or a different node. The node forwards the traffic to a randomly chosen Pod by using the iptables rules that kube-proxy manages on the node. In the following diagram, the network load balancer directs traffic to the middle node, and the traffic is redirected to a Pod on the first node. When a load balancer sends traffic to a node, the traffic might get forwarded to a Pod on a different node. This requires extra network hops. If you want to avoid the extra hops, you can specify that traffic must go to a Pod that is on the same node that initially receives the traffic. To specify that traffic must go to a Pod on the same node, set externalTrafficPolicy to Local in your Service manifest: apiVersion: v1 kind: Service metadata: name: my-lb-service spec: type: LoadBalancer externalTrafficPolicy: Local selector: app: demo component: users ports: - protocol: TCP port: 80 targetPort: 8080 When you set externalTrafficPolicy to Local, the load balancer sends traffic only to nodes that have a healthy Pod that belongs to the Service. The load balancer uses a health check to determine which nodes have the appropriate Pods. External load balancer If your Service needs to be reachable from outside the cluster and outside your VPC network, you can configure your Service as a LoadBalancer, by setting the Service's type field to Loadbalancer when defining the Service. GKE then provisions a network load balancer in front of the Service. The network load balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service. Visit Configuring domain names with static IP addresses for more information. To learn more about firewall rules, see Automatically created firewall rules. Technical details When using the external load balancer, arriving traffic is initially routed to a node using a forwarding rule associated with the Google Cloud network. After the traffic reaches the node, the node uses its iptables NAT table to choose a Pod. kube-proxy manages the iptables rules on the node. Internal load balancer For traffic that needs to reach your cluster from within the same VPC network, you can configure your Service to provision an internal TCP/UDP load balancer. The internal TCP/UDP load balancer chooses an IP address from your cluster's VPC subnet instead of an external IP address. Applications or services within the VPC network can use this IP address to communicate with Services inside the cluster. Technical details Internal load balancing functionality is provided by Google Cloud. When the traffic reaches a given node, that node uses its iptables NAT table to choose a Pod, even if the Pod is on a different node. kube-proxy manages the iptables rules on the node. For more information about internal load balancers, see Using an internal TCP/UDP load balancer. HTTP(S) load balancer Many applications, such as RESTful web service APIs, communicate using HTTP(S). You can allow clients external to your VPC network to access this type of application using a Kubernetes Ingress resource. An Ingress resource allows you to map hostnames and URL paths to Services within the cluster. When using an HTTP(S) load balancer, you must configure the Service to use a NodePort, as well as a ClusterIP. 
When traffic accesses the Service on a node's IP at the NodePort, GKE routes traffic to a healthy Pod for the Service. You can specify a NodePort or allow GKE to assign a random unused port. When you create the Ingress resource, GKE provisions an external HTTP(S) load balancer in the Google Cloud project. The load balancer sends a request to a node's IP address at the NodePort. After the request reaches the node, the node uses its iptables NAT table to choose a Pod. kube-proxy manages the iptables rules on the node. This Ingress definition routes traffic for demo.example.com to a Service named frontend on port 80, and demo-backend.example.com to a Service named users on port 8080. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: demo spec: rules: - host: demo.example.com http: paths: - backend: service: name: frontend port: number: 80 - host: demo-backend.example.com http: paths: - backend: service: name: users port: number: 8080 Visit GKE Ingress for HTTP(S) load balancing for more information. Technical details When you create an Ingress object, the GKE Ingress controller configures a Google Cloud HTTP(S) load balancer according to the rules in the Ingress manifest and the associated Service manifests. The client sends a request to the HTTP(S) load balancer. The load balancer is an actual proxy; it chooses a node and forwards the request to that node's NodeIP: NodePort combination. The node uses its iptables NAT table to choose a Pod. kube-proxy manages the iptables rules on the node. Limiting connectivity between nodes Creating ingress or egress firewall rules targeting nodes in your cluster may have adverse effects. For example, applying egress deny rules to nodes in your cluster could break functionality such as NodePort and kubectl exec. Limiting connectivity to Pods and Services By default, all Pods running within the same cluster can communicate freely. However, you can limit connectivity within a cluster in different ways, depending on your needs. Limiting access among Pods You can limit access among Pods using a network policy. Network policy definitions allow you to restrict the ingress and egress of Pods based on an arbitrary combination of labels, IP ranges, and port numbers. By default, there is no network policy, so all traffic among Pods in the cluster is allowed. As soon as you create the first network policy in a namespace, all other traffic is denied. Visit Network policies for more details about how to specify the policy itself. After creating a network policy, you must explicitly enable it for the cluster. Visit Configuring network policies for applications for more information. Limiting access to an external load balancer If your Service uses an external load balancer, traffic from any external IP address can access your Service by default. You can restrict which IP address ranges can access endpoints within your cluster, by configuring the loadBalancerSourceRanges option when configuring the Service. You can specify multiple ranges, and you can update the configuration of a running Service at any time. The kube-proxy instance running on each node configures that node's iptables rules to deny all traffic that does not match the specified loadBalancerSourceRanges. No VPC firewall rule is created. 
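The loadBalancerSourceRanges option described above is normally set directly in the Service manifest, but as an illustration that is not part of the original page, here is a minimal sketch of applying the same restriction programmatically with the official Kubernetes Python client. The Service name, namespace, and CIDR ranges are hypothetical, and the sketch assumes working cluster credentials in your kubeconfig.

# Sketch only (not from the original page): patch an existing LoadBalancer
# Service so that only the listed CIDR ranges may reach it, using the
# official Kubernetes Python client. Name, namespace, and ranges are
# placeholders; kubeconfig credentials for the cluster are assumed.
from kubernetes import client, config

def restrict_service_sources(name, namespace, cidrs):
    config.load_kube_config()  # or config.load_incluster_config() inside a Pod
    core_v1 = client.CoreV1Api()
    patch = {"spec": {"loadBalancerSourceRanges": cidrs}}
    return core_v1.patch_namespaced_service(name=name, namespace=namespace, body=patch)

if __name__ == "__main__":
    restrict_service_sources(
        name="my-lb-service",          # the example Service from the manifest above
        namespace="default",
        cidrs=["203.0.113.0/24", "198.51.100.0/24"],  # example ranges
    )

The equivalent declarative change is simply adding a loadBalancerSourceRanges list under spec in the Service manifest shown earlier.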
Limiting access to an HTTP(S) load balancer If your service uses the HTTP(S) load balancer, you can use a Google Cloud Armor security policy to limit which external IP addresses can access your Service and which responses to return when access is denied because of the security policy. You can configure Cloud Logging to log information about these interactions. If a Google Cloud Armor security policy is not fine-grained enough, you can enable the Identity-Aware Proxy on your endpoints to implement user-based authentication and authorization for your application. Visit the detailed tutorial for configuring IAP for more information. Known issues Containerd-enabled node unable to connect to 172.17/16 range A node VM with containerd enabled cannot connect to a host that has an IP within 172.17/16. For more information, see Containerd known issues.
https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview?hl=hr&skip_cache=false
CC-MAIN-2021-49
refinedweb
3,632
53
What will we cover?
In this tutorial we will cover:
- What is a backtesting strategy?
- How to measure the performance of a backtesting strategy?
- How to implement a backtesting strategy with Pandas?

What is a backtesting strategy?
In a trading strategy, backtesting seeks to estimate the performance of a strategy or model if it had been employed during a past period (source). The way to analyze the performance of a strategy is to look at its return, volatility, and max drawdown. Other metrics can also be used, but for this tutorial we will use these.

Step 1: Read data from Yahoo! Finance API with Pandas Datareader
Let's get started by importing a few libraries and retrieving some data from the Yahoo! Finance API with Pandas Datareader.

import pandas as pd
import pandas_datareader as pdr
import datetime as dt
import numpy as np

start = dt.datetime(2010, 1, 1)
data = pdr.get_data_yahoo("AAPL", start)

This reads data for the Apple ticker (AAPL) since 2010. The head of the data is shown below.

                High       Low      Open     Close       Volume  Adj Close
Date
2010-01-04  7.660714  7.585000  7.622500  7.643214  493729600.0   6.583586
2010-01-05  7.699643  7.616071  7.664286  7.656429  601904800.0   6.594968
2010-01-06  7.686786  7.526786  7.656429  7.534643  552160000.0   6.490066
2010-01-07  7.571429  7.466071  7.562500  7.520714  477131200.0   6.478067
2010-01-08  7.571429  7.466429  7.510714  7.570714  447610800.0   6.521136

Step 2: Calculate signals for a simple strategy
The simple strategy we will use is based on moving averages with periods of 5 and 20 days. When the 5-day moving average of the Adj Close price is above the 20-day moving average, we go long (buy and hold); otherwise we go short (sell). This can be calculated as follows.

data['Signal'] = data['Adj Close'].rolling(5).mean() - data['Adj Close'].rolling(20).mean()
data['Position'] = (data['Signal'].apply(np.sign) + 1)/2

This results in a Signal line, which is the difference of the two moving averages. When the signal line is positive, our position is 1 (buy and hold); otherwise it is 0 (sell).

                  High         Low        Open  ...   Adj Close    Signal  Position
Date                                            ...
2021-02-26  124.849998  121.199997  122.589996  ...  121.260002 -7.610835       0.0
2021-03-01  127.930000  122.790001  123.750000  ...  127.790001 -7.054179       0.0
2021-03-02  128.720001  125.010002  128.410004  ...  125.120003 -6.761187       0.0
2021-03-03  125.709999  121.839996  124.809998  ...  122.059998 -6.782757       0.0
2021-03-04  123.599998  118.620003  121.750000  ...  120.129997 -6.274249       0.0

The reason we want long to be 1 and short to be 0 is computational, and will become clear soon.

Step 3: Remove unnecessary data columns and rows
To get a cleaner dataset we will clean it up.

data.drop(['High', 'Low', 'Open', 'Volume', 'Close'], axis=1, inplace=True)
data.dropna(inplace=True)

Here drop removes the columns we do not need and dropna removes rows with NaN values. The inplace=True simply applies the changes to the DataFrame itself.

            Adj Close    Signal  Position
Date
2010-02-01   5.990476 -0.217986       0.0
2010-02-02   6.025239 -0.252087       0.0
2010-02-03   6.128909 -0.282004       0.0
2010-02-04   5.908031 -0.297447       0.0
2010-02-05   6.012933 -0.253271       0.0

Step 4: Calculate the return of the strategy
To calculate the return we will use log returns, which, as we will see, have an advantage. We then use the Position column, shifted by 1, as we assume we can only act on a signal the day after it appears.

data['Log return'] = np.log(data['Adj Close']/data['Adj Close'].shift())
data['Return'] = data['Position'].shift(1)*data['Log return']

This results in the following.
Adj Close Signal Position Log return Return
Date
2021-02-26 121.260002 -7.610835 0.0 0.002229 0.0
2021-03-01 127.790001 -7.054179 0.0 0.052451 0.0
2021-03-02 125.120003 -6.761187 0.0 -0.021115 -0.0
2021-03-03 122.059998 -6.782757 0.0 -0.024761 -0.0
2021-03-04 120.129997 -6.274249 0.0 -0.015938 -0.0

Now the additive property of log returns comes in handy. Remember that we can add up log returns to calculate the final return. For details I refer to this. Hence, we get that the return can be calculated as follows.

data[['Log return', 'Return']].cumsum().apply(np.exp)

Resulting in the following.

Log return Return
Date
2021-02-26 20.242133 7.29214
2021-03-01 21.332196 7.29214
2021-03-02 20.886489 7.29214
2021-03-03 20.375677 7.29214
2021-03-04 20.053499 7.29214

Using a bit of calculation.

np.exp(data[['Log return', 'Return']].mean()*252)

We get.

Log return 1.310917
Return 1.196485
dtype: float64

Which tells us that our strategy gives an annualized return of 19.6485%, while a buy-and-hold strategy would give 31.0917%. The natural question is: what did we gain with our strategy?

Step 5: Evaluating our strategy

Let's compute the volatility of the buy-and-hold strategy and compare it with ours. The volatility of a stock can be calculated in many ways; here we will use the standard deviation. For other measures refer to Investopedia.

data[['Log return', 'Return']].std()*252**.5

Which gives the annualized standard deviation.

Log return 0.283467
Return 0.188044
dtype: float64

Hence, the gain from our strategy is lower volatility.
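The tutorial names max drawdown as a third performance metric but never computes it. Below is a minimal sketch of how it could be added with pandas; it assumes the data DataFrame built above with its 'Log return' (buy and hold) and 'Return' (strategy) columns, and the max_drawdown helper is an illustrative name, not something from the original post.

import numpy as np

def max_drawdown(log_returns):
    # Convert the summed log returns back into an equity curve
    equity = np.exp(log_returns.cumsum())
    running_max = equity.cummax()
    drawdown = equity / running_max - 1.0
    # The most negative dip from a prior peak
    return drawdown.min()

for column in ['Log return', 'Return']:
    print(column, max_drawdown(data[column]))

A more negative number means a deeper worst-case loss from a previous peak, which complements the return and volatility figures above.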
https://www.learnpythonwithrune.org/page/2/
CC-MAIN-2021-25
refinedweb
950
79.56
Hi folks! Today we on the devangel crew wanted to sprinkle the 12 Hacks of Christmas with some holiday cheer. The following hack started out as a Santa location hack. Then we found out that there is no such thing as a ‘Santa API’, which means first we’re going to have build that API and then maybe next year we’ll build a tool for tracking Santa in real-time via Twilio. Instead Jarod, one of the parents on our team, put together a fun little command-line tool for sending secret messages to festive little recipients. Let us know what you think on Twitter with the hashtag #12HacksOfChristmas. Santa’s Helper – Command Line tool using Ruby-gem, Thor and Twilio Growing up in my house during Christmas was a delight. Every year we’d put out a tray of cookies on Christmas Eve, hang the stockings and try to sleep quietly upstairs as Santa visited our house and delivered gifts. It has always been a special night but now that I am a new parent I appreciate the magic of Santa even more. When I began thinking about how parents like myself might want to interact with the North Pole I started by making a list of requirements and concerns: - It needed to seem magical (simply texting a phone number might not be magical) - It needed to be hard to find for a child (ie; not a website) - It shouldn’t be a phone-to-phone hack (since we don’t want Mom’s phone chiming when little johnny thinks he is talking to the North Pole). - It needs to be extendable by the community In the end I decided the best implementation would be a ruby-gem command-line tool where hacker parents could interact discreetly with the North Pole. This would allow for magical experiences, is pretty hard for a child to stumble upon and gives us a tool that is easily extendable by the community. Luckily I had been looking for an excuse to build something using the Thor gem… and a command line interface to Santa’s helpers seemed the perfect console for creating magic :) In this post I’ll show you how to build the SantaCLI from start to finish, but thats not my real goal. The real goal of this post is to equip you with enough information to inspire you to build an experience customized for your own family. So check out the code and start tweaking! Demo: Text +1 860-947-HOHO(4646) to ping the North Pole and see what Santa is up to. The Code: santas_twilio_helper on github santas_twilio_helper on ruby-gems Twilio: Ruby helper library, REST API, Twilio Number, Thor The North Pole is listening: command list When I sat down to create my command line interface to the North Pole I wanted to make sure it was a two way channel. I wanted to make sure the elves had a way to communicate with my kids while also giving me a way of sending messages as an ambassador of Santa. I also wanted to give all of you hacker Moms and Dads a foundation that you could build on. Here’s the list of commands I came up with: $ santa begin: this command should register you as a parent, your children’s names and your zip code. This gives you and the elves access to certain details (variables) down the road. $ santa ping: ping the North Pole which should send a random message status (SMS) of what Santa is up to. This could be extended to give location updates on Christmas Day, or to send MMS pics of Santa selfies. $ santa telegraph: this allows mom or dad to send a message as one of Santa’s helpers. This should have an option for a time delay. Great, now let me show you how I built this beauty. 
Before we move on, feel free to grab the code from github and follow along: Creating a Ruby gem that used Twilio Feel free to skip this section if you just want to install my santa gem and play around with it, for the rest of us, here is a quick rundown of how I chose to create the gem. There are a few ways to create a ruby gem, including by following the ruby-gem docs. I decided to use Bundler to bundle up the gem. The biggest benefit to using Bundler is that it can generate all of the files needed for a gem and with a few TODO notes to help you fill it out. Additionally you get some handy rake commands for building and releasing the gem when you’re finished. First you need to install Bundler: $ gem install bundler Next you can generate the gem: $ bundle gem santa_helper create santa_helper/Gemfile create santa_helper/Rakefile create santa_helper/LICENSE.txt create santa_helper/README.md create santa_helper/.gitignore create santa_helper/santa_helper.gemspec create santa_helper/lib/santa_helper.rb create santa_helper/lib/santa_helper/version.rb Initializating git repo in <path-to-gem>/santa_helper Next you can open the directory you just created and fill out all of the TODOs in the .gemspec file. # coding: utf-8 lib = File.expand_path('../lib', __FILE__) $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib) require 'santa_helper/version' Gem::Specification.new do |spec| spec.name = "santa_helper" spec.version = SantaHelper::VERSION spec.authors = ["Jarod Reyes"] spec.email = ["jarodreyes@gmail.com"].7" spec.add_development_dependency "rake", "~> 10.0" end Finally you need to make sure Twilio and Thor are available in your gem. This means requiring them in two places; the .gemspec and your main application. First you should add them to the .gemspec file: spec.add_dependency 'thor', '~> 0.18' spec.add_dependency 'twilio-ruby', '~> 3.11' Eventually you’ll need to require them in the application.rb file but that’s getting ahead of ourselves. In the santas_twilio_helper gem this has all been done for you so you can just install the gem now and build on top of it. $ gem install santas_twilio_helper Putting the fruit in the fruitcake: building the CLI using Thor. Thor is a nifty little library. By including it we get a lot of built-in functionality that will make our minimal code act like a full fledged CLI. To give you an example of what I mean type this into the ‘santas_twilio_helper.rb’ file: desc 'hohoho', 'Wake up the big red man' def begin puts 'Ho Ho Ho! Merry Christmas to all and to all a good night!' end Now because we are using Bundler we can execute the command line tool using the bundle command: $ bundle exec bin/santa hohoho That’s all it takes to create a CLI using Thor, a set of descriptions and methods that execute code in the console. If you’ve installed the santas_twilio_helper gem you can see a bit of Thor’s magic by typing santa help: Using Thor The simplest way to use Thor is by creating some commands within a Thor class that are defined by their description. However, Thor also includes Actions, which are modules that interact with our local environment to give us some cool functionality. To illustrate this example here is the begin command that intakes some input from the console and writes it to a file. 
require 'thor' require 'paint' require 'json' require 'twilio-ruby' module SantasTwilioHelper module Cli class Application < Thor # Class constants @@twilio_number = ENV['TWILIO_NUMBER'] @@client = Twilio::REST::Client.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'] include Thor::Actions desc 'begin', 'Register yourself as one of Santas helpers' def begin say("#{Paint["Hi I'm one of Santa's Twilio Elves, and I'm about to deputize you as an ambassador to Santa. To get started I need your name.", :red]}") santa_helper = ask("Parent Name: ") children = [] say("Great Gumdrops. We also need your child's name to verify they are on Santa's list. ") child = ask("Child Name: ") children.push(child) say("Fantastic. You can always add more children by running add_child later.") say("Next I need to know your telephone number so Santa's helpers can get in touch with you.") telephone = ask("#{Paint['Telephone Number: ', :red]}") say("The last thing I need is your city so we can verify we have the correct location for #{child}.") zip_code = ask("#{Paint['Zip Code: ', :blue]}") data = { 'santa_helper' => santa_helper, 'children' => children, 'telephone' => telephone, 'zip_code'=> zip_code } write_file(data) say("#{Paint["Okay you're off to the races. You can type `santa help` at any time to see the list of available commands.", "#55C4C2"]}") end end end end Right away you can see some cool Thor functionality, by using the methods say and ask Thor will pause the shell session to prompt the user for some feedback, wait for a response and then store the response to a variable that our script can reference. I mentioned earlier that Thor includes Actions which allow us to save input to disk among other things. One of these Actions is create_file() which I use in the function below, to save the user input from begin to the filesystem. def write_file(data_hash) create_file "santarc.json", "// Your Santas Helper configuration.\n #{data_hash.to_json}", :force => true end I chose to write the data to a santarc file as opposed to writing to db. If you would like a more secure data store you should probably change this. Next let’s take a look at the ping command since I believe this command has a ton of potential ways it could be extended to do amazing things. # PING will tell us what Santa is up to. desc 'ping', 'See where Santa is right now' def ping file = File.read('messages.json') messages = JSON.parse(file) # TODO: if it is Dec. 25 we should pull from a different set of messages. Location specific # TODO: We should use the zip code in the profile to make sure Santa arrives here last. # For now Santa messages are all non-location based santaMs = messages['SANTA_SNIPPETS'] a = rand(0..(santaMs.length-1)) msg = santaMs[a] puts "sending message..." sendMessage(msg) end This command is pretty straight-forward as it stands, but you may want to extend it based on the TODO messages in the comments. For now it reads messages from the ‘messages.json’ file and pulls a random message. Then we call sendMessage() which actually does the job of sending the SMS with Twilio. Before we can send a message in Ruby using Twilio all we need to do is add ‘require twilio-ruby’ to the top of application.rb def sendMessage(msg) message = @@client.account.messages.create( :from => @@twilio_number, :to => phone, :body => msg ) puts "message sent: #{msg}" end This version of sendMessage() is the simplest way we can send an SMS using Twilio. 
To send a picture you would simply need to add a :media_url parameter that would point to a url of some media. However on github, you will find a more robust version of sendMessage() that actually appends a greeting and a sign-off from the elves. I’ll quickly mention the ‘telegraph’ command which allows me to send a message as Santa’s Helper with a time delay (in seconds). This means I can create scenarios where I am in the kitchen cooking with the family when the ‘Santa Phone’ mysteriously delivers a custom message. Pretty sneaky ;) Try it yourself (assuming you installed the gem) with the command: $ santa telegraph "Santa just checked his list for the 3rd time, hope you were nice today!" 240 The last bit of know-how I picked up on this journey to a gem was how to build and release the gem. Building and releasing the ruby gem. Before we can use the handy Rake tasks we need to save your ruby-gem credentials to your .gem file. So run: $ curl -u your-rubygems-username > ~/.gem/credentials Once we have done this we can run: $ rake build && rake release Grok’in around the christmas tree So now we have a working CLI to interact with the North Pole, but there are a few things I would challenge you all to do. First this CLI could use a location specific hack. Since neither Google or Norad(Bing) offer an api for Santa’s location this is going to require some work on your end. But if you want to collaborate on a Santa API hit me up and we’ll get started. I would also recommend adding a Christmas day switch to the ‘ping’ command that maybe kicks off an hourly update via SMS of where Santa is located. Hopefully learning how I built this tool has given you a plethora ideas of how to make it even more magical. I look forward to all of the pull requests on github, but until then feel free to ping me on twitter or email and we can talk about how The Santa Clause movie with Tim Allen is actually a cultural gem.
https://www.twilio.com/blog/santas-helper-command-line-tool-ruby-gem-thor-twilio-html
CC-MAIN-2019-51
refinedweb
2,154
70.84
spir wrote:
> On Mon, 09 Mar 2009 18:04:32 -0400, Terry Reedy
> <tjreedy at udel.edu> wrote:
>
>> Lambdas are function-defining expressions used *within* statements
>> that give the resulting function object a stock .__name__ of
>> '<lambda>'. The syntax could have been augmented to include a real
>> name, so the stock-name anonymity is a side-effect of the chosen
>> syntax. Possibilities include
>>     lambda name(args): expression
>>     lambda <name> args: expression
>> The latter, assuming it is LL(1) parse-able, would even be
>> compatible with existing code and could still be added.
>>
>> Contrariwise, function-defining def statements could have been
>> allowed to omit the name. To be useful, the object (with a
>> .__name__ such as '<def>') would have to get a default namespace
>> binding such as to '_', even in batch mode.
>
> " ;-)

??? A lambda expression and a def statement both produce function objects. The *only* difference between the objects produced by a lambda expression and the equivalent def statement is that the former gets the stock .__name__ of '<lambda>', which is less useful than a specific name should there be a traceback. The important difference between a function-defining expression and statement is that the former can be used in expression context within statements and the latter cannot.

tjr
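To make the point in the post concrete, here is a small illustrative snippet (not part of the original thread) comparing the two objects; only the __name__ differs, while the lambda form can additionally appear inside an expression.

def double(x):
    return x * 2

anonymous = lambda x: x * 2

print(double.__name__)     # 'double'
print(anonymous.__name__)  # '<lambda>'

# Both are plain function objects and behave the same when called
print(double(21), anonymous(21))  # 42 42

# The expression form can be used where a statement cannot, e.g. inline:
print(sorted(['bb', 'a', 'ccc'], key=lambda s: len(s)))  # ['a', 'bb', 'ccc']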
https://mail.python.org/pipermail/python-ideas/2009-March/003303.html
CC-MAIN-2017-17
refinedweb
211
63.09
Using dotTrace Memory 3.5, I can't get the source code of my ASP.NET web app to show. I set up the folder path in View -> Manage Folder Substitution, but the message in the Source View is still: "Cannot find PDB file for C:\Windows\....." (you can see it in the picture). The PDB files are in the BIN folder and they are correctly compiled.

Hi, please try the following: 1. Search for your website namespace in the "Namespace Tree" (currently you've selected the System namespace, which is the .NET Framework, so you don't have access to the sources). 2. Inside your namespace, navigate to some of your classes. 3. If the PDBs are not found automatically, then in the "Source view" you will see something like "Cannot find pdb file ... browse (link)". Press this link and point it to the path of the sources. That should work. WBR, Ivan Shakhov

The problem is that I don't see the namespaces of my web application, only those from the .NET Framework. Am I setting something up wrongly? Is it because my web application is written in VB.NET and dotTrace only profiles applications written in C#? I attach an image showing the setup of the paths and a screen with the namespace tree.

Hi, dotTrace works with any CLR language (C#, VB, etc.). Have you tried setting the "Start recording immediately" checkbox? When you start profiling, does the browser open, and have you browsed several pages of your web app? Which VS do you use, 2008 or 2010? Is your project type WebApplication? WBR, Ivan Shakhov

My OS is Windows XP (Spanish). I use Visual Studio 2008, Framework 3.5, and I'm using dotTrace Memory 3.5 to try to profile a web application. I uninstalled and reinstalled dotTrace Memory 3.5 with the antivirus off and get the same result: only namespaces from Microsoft Framework 3.5 (System and Microsoft), not the namespaces of my web application. I navigate at least 3 or 4 pages. The problem could be how to fill in these fields: "Path to web-server root" and "Virtual path on server". If I fill in Path to web-server root: D:\ProgVS2008\WebResoluciones_08022011\WebResoluciones\bin and Virtual path on server: /WebResoluciones, I get a WebDev.WebServer.exe error as shown in the attached image. If I fill in Path to web-server root: C:\"Archivos de programa"\"Archivos comunes"\"Microsoft Shared"\DevServer\9.0\WebDev.WebServer /port:80 /path:"D:\ProgVS2008\WebResoluciones_08022011\WebResoluciones\bin" /vpath:"/WebResoluciones" and Virtual path on server: /WebResoluciones, I get the second message as shown in the attached image. However, I can still navigate my web pages and get memory snapshots; it's just that I don't see my namespaces.

Hi, I don't see a problem in "Path to web-server". You haven't answered about the "Start recording immediately" checkbox. Is it possible for you to send your application privately to Ivan.Shakhov at jetbrains.com? I tried VS2008 + WebApp and my example works fine. Regards, Ivan Shakhov
http://devnet.jetbrains.com/message/5300978
CC-MAIN-2014-41
refinedweb
480
69.99
Snapshot testing is immensely popular for testing React apps or other component-based UIs. However, it’s not exactly drama-free — many people looooove snapshots for their ease of use and ability to quickly bootstrap a testing portfolio, while others feel that the long-term effects of snapshots might be more harmful than they are helpful. At the end of the day, snapshot testing is simply another tool in our tool belt. And while many people may be divided on how and when to use snapshot testing, it’s good to know that it exists and that it’s available when we need it. I’ll be honest about my position on snapshots — I tend to be in the camp that’s less enthusiastic about them. However, I recently came across a situation with some legacy code where it felt like snapshot tests were a perfect match. Using snapshots as a refactoring tool helped me successfully tackle and refactor some tricky code written long before I joined my company. What are snapshot tests? If you’re not familiar with snapshot tests, we’ll do a little refresher. In a snapshot test, a “picture” of your code’s output is taken the first time the test runs. This “picture” gets saved to a text file in your codebase and all subsequent test runs use this picture as a reference — if your code’s output produces an identical snapshot, the test passes. However, if the output is different from the saved snapshot the test fails. Here’s an example of what a snapshot test looks like in Jest: import renderer from "react-test-renderer"; function Test({ message }) { return {message}; } test("renders", () => { const wrapper = renderer.create(<Test message="test" />); expect(wrapper.toJSON()).toMatchSnapshot(); }); After this test runs for the first time it will create a snapshot file that future test runs will use as a reference. The snapshot file would look something like this: // Jest Snapshot v1, exports[`renders 1`] = ` <div> test </div> `; I hadn’t heard about snapshot testing until I started using Jest — I’m not sure if the Jest team invented snapshot testing but they certainly popularized it! At first glance, snapshots are super convenient — instead of writing your own assertions you can just generate tests to see if your code is broken. Why waste time when the computer can automate our problems away? Jest even makes it super easy to automatically fix your snapshots. This means that even if you’ve got a failing test you’re a single keypress away from fixing all of your tests. When snapshot testing isn’t all it’s cracked up to be At first glance, snapshot testing sounds like a dream come true — all I have to do is write one snippet of code to generate snapshots and I’ll have these super-detailed tests “for free”? Take my money already! However, over the past few years that I’ve been working with snapshot testing, I’ve found that snapshots introduce a number of pain points that make them difficult to maintain. And I’m not the only one! For example, this company decided to ditch snapshots and wrote about it. Or consider this tweet: I 👎’d snapshot testing yesterday. Someone DM’d (not @’d!) me a question so here’s my response to why. I’ll blog soon pic.twitter.com/6dmaRpHG2N — Justin Searls (@searls) October 15, 2017 That’s not to say snapshot testing is all bad! After all, every tool has trade-offs, and it’s worth acknowledging the weaknesses of a tool when we’re evaluating using it. Here are a few reasons why I’m not the biggest fan of having snapshots in my testing suites. 
Snapshots break easily Snapshots are often used to test component trees or large objects. However, since the snapshot takes a picture of every single detail in the component/object, even the slightest change (like fixing a typo in a CSS class) will fail the snapshot test. As a result, you end up with tests that break even when the code still works. These false negatives create a lot of noise and erode your confidence in your testing suite. Snapshot tests are super easy to create and failing snapshots are easy to fix You might be thinking, “Isn’t this a good thing?” After all, being a single keypress away from a passing test suite sounds like a dream come true. However, because the tests are so easy to create/update, what tends to happen is that developers care less about the snapshot tests. In my experience, developers will often simply press the button to update the snapshots without looking to see what changed or if the code is broken. While it is possible to treat your snapshots with the same importance as your code (and recommended in the Jest docs), it requires a ton of diligence. More often, my experience has been seeing engineers blindly update the snapshots and move on with their day (I’ve done it myself many times in the past 😱). Snapshots can give you false confidence about the robustness of your testing suite It’s easy to generate a ton of test coverage using snapshots. If your team has a coverage threshold that all code has to meet, snapshots make hitting your coverage numbers a breeze. However, test coverage alone is not a sufficient metric to use to evaluate the quality of your test suite. While test coverage is a valuable tool for seeing gaps in your testing suite, it doesn’t tell you about things like whether your tests are brittle, whether your code stands up to edge cases, or whether the tests accurately test the business requirements. Where Jest snapshots shine — refactoring legacy code While I’m not a fan of having snapshots as “long-term residents” of my testing suites, I’ve actually come across a few use cases where they truly shine. For example, refactoring legacy code. Rarely do we start a job and get thrown into greenfield projects — we get codebases that have existed for years. And when we do, those projects can quickly go from a blank slate to nightmare codebase if we’re not careful. At some point in your career, you’re going to have to work on “legacy code” that you didn’t write. And many times those codebases don’t have any tests. When you start adding features to this legacy code, and you’re faced with a dilemma. You might need to refactor the code to fit new business requirements, but you don’t want to run the risk of breaking something. In order to refactor safely, you need some type of tests in place. The thing is, taking a pause to write tests for legacy code can sometimes feel like a luxury you don’t have. After all, you’ve got deadlines to hit, and you finally figured out where you need to modify this legacy code. If you take too long of a break you might lose that context that you’ve built up! Snapshots can actually be super useful to us in this scenario. Here’s a snapshot testing workflow I’ve found super helpful when working with legacy code. Step 1: Write snapshots to cover as many inputs as you can think of Read through the legacy code and try to get a picture of all of the various inputs that it could possibly have. However, you don’t need to figure out the outputs! For each input variant, create a snapshot test. 
This helps you figure out what outputs are actually produced by the code you’re working with. Step 2: Start refactoring Since you’ve got this massive safety net of snapshot tests to fall back on, start refactoring. Remember that this method of refactoring with snapshots is only good if you do not change the output at all. So if you’re working with a React component and you change the rendered output your snapshots will fail. This isn’t the end of the world, just make sure to check why the snapshots failed and if the change was actually intended. Step 3: Ditch the snapshots and write some more focused tests Once you’re done refactoring, you can safely replace these snapshots without fear of forgetting how you wanted to refactor the legacy code. However, for the reasons discussed above, you might not want those snapshots to be long-term residents of your testing suite. Now that the code isn’t changing, you can safely start refactoring your tests. To make your tests more resilient long-term, you might want to consider taking each snapshot test and replacing it with a more focused assertion. For example, we could replace the snapshot test from before with this test using react-testing-library and jest-dom. import { render } from "react-testing-library"; import "jest-dom/extend-expect"; function Test({ message }) { return {message}; } test("renders", () => { const { getByText } = render(<Test message="test" />); expect(getByText("test")).toBeInTheDocument(); }); Granted, this isn’t an incredibly complex test — the component doesn’t have any logic to refactor! These more focused assertions will stand the test of time (pun intended 😂) better as the component changes with future requirements. Conclusion Throughout my (short) career I’ve seen lots of code written without tests by people that have long left the company. It’s no secret that tricky, dense, difficult-to-read code has a negative effect on team morale and that, over time, code should be lovingly refactored to fit new requirements. However, mocking or complaining about tricky legacy code shouldn’t be our default response — instead, we should try to always leave the code in better shape than when we found it. This is easier said than done, especially when we’re trying to meet a tight deadline or if we’re afraid to touch the code lest we break something. This method of using Jest snapshots has been incredibly useful for me and I hope that you will find it useful too! Thanks for reading! If you enjoyed this post, make sure to follow me on Twitter — I make sure to post links to any new articles as I write them. If you’ve had some snapshot test success stories, don’t hesitate to reach out!.
https://blog.logrocket.com/refactoring-legacy-code-with-jest-snapshots-e290ceccccc3/
CC-MAIN-2019-43
refinedweb
1,720
68.81
This is the second post in a three-part series about how to write a simple Ruby extension that helps deal with encrypted JSON messages (read part one here). The complete extension code is available here. In the last post we saw how to automatically decrypt JSON messages and display the well-formatted plaintext in a custom tab. This enabled you to easily read encrypted JSON requests captured by the Proxy tool. In this post I will explain how to automatically encrypt JSON request values before sending it to the remote server. You will be able to write or modify any plaintext JSON and send it encrypted to the remote server using the Repeater tool. Encrypting JSON Let's say now the server accepts an encrypted JSON, processes it and returns the result. To keep it simple here, the fake server will simply return the decrypted value: As can be seen, sending the original encrypted JSON in a POST request will result in the same decrypted values you had before. Now you want to be able to modify the plaintext JSON and send it encrypted to the server. To do so, you will need to process the message in the #getMessage method: def getMessage is_request = @txt_input.text[0..3].to_s == "HTTP" if is_request info = @helper.analyze_request(@txt_input.text) else info = @helper.analyze_response(@txt_input.text) end headers = @txt_input.text[ 0..(info.get_body_offset - 1) ].to_s body = @txt_input.text[ info.get_body_offset..-1 ].to_s body = process_json(body, :encrypt) if json?(info, is_request) return (headers + body).to_java_bytes end As you can see, the logic is quite similar to #setMessage. It encrypts instead of decrypting and it returns the encrypted message instead of filling the text editor. Note that it is mandatory to use #to_java_bytes to be accepted by Burp. You can now modify the JSON in the "JSON Crypto Helper" tab, check the encrypted version in the Raw tab and send the request. The server successfully decrypts the JSON and returns the plain text: Parting Thoughts With these first two posts we covered the basics of Burp Extender tool and learned how to write a simple ruby-based extension to decrypt a JSON, displaying it in the Burp interface and automatically encrypt plaintext values using the Repeater tool. This should be a good start for you to explore the Burp API and create your own extensions. In the next post I will explain how to go further and do more awesome things with Burp Extender and the Intruder tools. Stay tuned!
https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/json-crypto-helper-a-ruby-based-burp-extension-for-json-encryptiondecryption-part-ii/
CC-MAIN-2019-47
refinedweb
416
62.58
I'm having some trouble defining a multivariate gaussian pdf for quadrature using scipy. I write a function that takes a mean vector and covariance matrix as input and returns a gaussian function.

def make_mvn_pdf(mu, sigma):
    def f(x):
        return sp.stats.multivariate_normal.pdf(x, mu, sigma)
    return f

# define covariance matrix
Sigma = np.asarray([[1, .15], [.15, 1]])
# define propagator
B = np.diag([2, 2])
# define data
Obs = np.array([[-0.06895746],[ 0.18778 ]])
# define a Gaussian PDF:
g_int_func = make_mvn_pdf(mean = np.dot(B,Obs[t,:]), cov = Sigma)
testarray=np.random.random((2,2))
g_int_func(testarray)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-50-083a1915f674> in <module>()
      1 g_int_func = make_mvn_pdf(np.dot(B,Obs[t,:]),Gamma)
----> 2 g_int_func(testarray)

/Users/...in f(x)
     17 def make_mvn_pdf(mu, sigma):
     18     def f(x):
---> 19         return sp.stats.multivariate_normal.pdf(x, mu, sigma)
     20     return f
     21

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/stats/_multivariate.pyc in pdf(self, x, mean, cov, allow_singular)
    427
    428         """
--> 429         dim, mean, cov = _process_parameters(None, mean, cov)
    430         x = _process_quantiles(x, dim)
    431         psd = _PSD(cov, allow_singular=allow_singular)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/stats/_multivariate.pyc in _process_parameters(dim, mean, cov)
     54
     55     if mean.ndim != 1 or mean.shape[0] != dim:
---> 56         raise ValueError("Array 'mean' must be a vector of length %d." % dim)
     57     if cov.ndim == 0:
     58         cov = cov * np.eye(dim)

ValueError: Array 'mean' must be a vector of length 2.

The value that you give as the mean is np.dot(B, Obs) (taking into account the change you suggested in a comment), where B has shape (2, 2) and Obs has shape (2, 1). The result of that dot call has shape (2, 1). The problem is that it is a two-dimensional array, and multivariate_normal.pdf expects mu to be a one-dimensional array, i.e. an array with shape (2,). (The error message uses the word "vector", which is a poor choice, because for many people, an array with shape (n, 1) is a (column) vector. It would be less ambiguous if the error message said "Array 'mean' must be a one-dimensional array of length 2.")

There are at least two easy ways to fix the problem:
- Define Obs so it has shape (2,) instead of (2, 1), e.g. Obs = np.array([-0.06895746, 0.18778]). Then np.dot(B, Obs) has shape (2,).
- Flatten the mean argument by using the ravel method: mean=np.dot(B,Obs).ravel().
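Here is a short, self-contained sketch (not from the original thread) showing the second fix in action; the numbers reuse the question's values, and the indexing with t has been dropped since t was never defined in the snippet.

import numpy as np
import scipy.stats as sp_stats

def make_mvn_pdf(mu, sigma):
    def f(x):
        return sp_stats.multivariate_normal.pdf(x, mu, sigma)
    return f

Sigma = np.asarray([[1, .15], [.15, 1]])    # covariance matrix
B = np.diag([2, 2])                         # propagator
Obs = np.array([[-0.06895746], [0.18778]])  # column vector, shape (2, 1)

# ravel() turns the (2, 1) result of the dot product into shape (2,),
# which is what multivariate_normal.pdf expects for the mean.
g_int_func = make_mvn_pdf(np.dot(B, Obs).ravel(), Sigma)

testarray = np.random.random((2, 2))        # two 2-D points to evaluate
print(g_int_func(testarray))                # array of two pdf values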
https://codedump.io/share/WkJ3fqYUmI18/1/defining-multivariate-gaussian-function-for-quadrature-with-scipy
CC-MAIN-2016-44
refinedweb
423
52.36
REpresentational State Transfer (REST) is a web development architecture design style which refers to logically separating your API resources so as to enable easy access, manipulation and scaling. Reusable components are written in a way so that they can be easily managed via simple and intuitive HTTP requests which can be GET, POST, PUT, PATCH, and DELETE (there can be more, but above are the most commonly used ones). Despite what it looks like, REST does not command a protocol or a standard. It just sets a software architectural style for writing web applications and APIs, and results in simplification of the interfaces within and outside the application. Web service APIs which are written so as to follow the REST principles, they are called RESTful APIs. In this three-part tutorial series, I will cover different ways in which RESTful APIs can be created using Flask as a web framework. In the first part, I will cover how to create class-based REST APIs which are more like DIY (Do it yourself), i.e. implementing them all by yourself without using any third-party extensions. In the latter parts of this series, I will cover how to leverage various Flask extensions to build more effective REST APIs in an easier way. I assume that you have a basic understanding of Flask and environment setup best practices using virtualenv to be followed while developing a Python application. Installing Dependencies The following packages need to installed for the application that we'll be developing. $ pip install flask $ pip install flask-sqlalchemy The above commands should install all the required packages that are needed for this application to work. The Flask Application For this tutorial, I will create a small application in which I will create a trivial model for Product. Then I will demonstrate how we can write a RESTful API for the same. Below is the structure of the application. flask_app/ my_app/ - __init__.py product/ - __init__.py // Empty file - models.py - views.py - run.py I won't be creating a front-end for this application as RESTful APIs endpoints can be tested directly by making HTTP calls using various other methods. flask_app/my_app/__init__.py from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db' db = SQLAlchemy(app) from my_app.catalog.views import catalog app.register_blueprint(catalog) db.create_all() In the file above, the application has been configured/catalog/models.py from my_app import db class Product(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(255)) price = db.Column(db.Float(asdecimal=True)) def __init__(self, name, price): self.name = name self.price = price def __repr__(self): return '<Product %d>' % self.id In the file above, I have created a very trivial model for storing the name and price of a Product. This will create a table in SQLite corresponding to the details provided in the model. flask_app/my_app/catalog/views.py import json from flask import request, jsonify, Blueprint, abort from flask.views import MethodView from my_app import db, app from my_app.catalog.models import Product catalog = Blueprint('catalog', __name__) @catalog.route('/') @catalog.route('/home') def home(): return "Welcome to the Catalog Home." 
class ProductView(MethodView): def get(self, id=None, page=1): if not id: products = Product.query.paginate(page, 10).items res = {} for product in products: res[product.id] = { 'name': product.name, 'price': str(product.price), } else: product = Product.query.filter_by(id=id).first() if not product: abort(404) res = { 'name': product.name, 'price': str(product.price), } return jsonify(res) def post(self): name = request.form.get('name') price = request.form.get('price') product = Product(name, price) db.session.add(product) db.session.commit() return jsonify({product.id: { 'name': product.name, 'price': str(product.price), }}) def put(self, id): # Update the record for the provided id # with the details provided. return def delete(self, id): # Delete the record for the provided id. return product_view = ProductView.as_view('product_view') app.add_url_rule( '/product/', view_func=product_view, methods=['GET', 'POST'] ) app.add_url_rule( '/product/<int:id>', view_func=product_view, methods=['GET'] ) The major crux of this tutorial is dealt with in the file above. Flask provides a utility called pluggable views, which allows you to create views in the form of classes instead of normally as functions. Method-based dispatching ( MethodView) is an implementation of pluggable views which allows you to write methods corresponding to the HTTP methods in lower case. In the example above, I have written methods get() and post() corresponding to HTTP's GET and POST respectively. Routing is also implemented in a different manner, in the last few lines of the above file. We can specify the methods that will be supported by any particular rule. Any other HTTP call would be met by Error 405 Method not allowed. Running the Application To run the application, execute the script run.py. The contents of this script are: from my_app import app app.run(debug=True) Now just execute from the command line: $ python run.py To check if the application works, fire up in your browser, and a simple screen with a welcome message should greet you. Testing the RESTful API To test this API, we can simply make HTTP calls using any of the many available methods. GET calls can be made directly via the browser. POST calls can be made using a Chrome extension like Postman or from the command line using curl, or we can use Python's requests library to do the job for us. I'll use the requests library here for demonstration purposes. Let's make a GET call first to assure that we don't have any products created yet. As per RESTful API's design, a get call which looks something like /product/ should list all products. Then I will create a couple of products by making POST calls to /product/ with some data. Then a GET call to /product/ should list all the products created. To fetch a specific product, a GET call to /product/<product id> should do the job. Below is a sample of all the calls that can be made using this example. 
$ pip install requests
$ python
>>> import requests
>>> r = requests.get('')
>>> r.json()
{}
>>> r = requests.post('', data={'name': 'iPhone 6s', 'price': 699})
>>> r.json()
{u'1': {u'price': u'699.0000000000', u'name': u'iPhone 6s'}}
>>> r = requests.post('', data={'name': 'iPad Pro', 'price': 999})
>>> r.json()
{u'2': {u'price': u'999.0000000000', u'name': u'iPad Pro'}}
>>> r = requests.get('')
>>> r.json()
{u'1': {u'price': u'699.0000000000', u'name': u'iPhone 6s'}, u'2': {u'price': u'999.0000000000', u'name': u'iPad Pro'}}
>>> r = requests.get('')
>>> r.json()
{u'price': u'699.0000000000', u'name': u'iPhone 6s'}

Conclusion

In this tutorial, you saw how to create RESTful interfaces all by yourself using Flask's pluggable views utility. This is the most flexible approach to writing REST APIs, but it involves writing much more code. There are extensions which make life a bit easier and automate the implementation of RESTful APIs to a large extent. I will be covering these in the next couple of parts of this tutorial.
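The put() and delete() methods in ProductView above were left as empty stubs. Here is one possible sketch of how they could be filled in, reusing the Product model, db session, request, jsonify and abort already imported in views.py; the exact response format is a matter of taste, not something prescribed by the tutorial.

# These belong inside the ProductView class, at the same level as get() and post().
def put(self, id):
    product = Product.query.get(id)
    if not product:
        abort(404)
    # Update only the fields present in the submitted form
    product.name = request.form.get('name', product.name)
    if 'price' in request.form:
        product.price = request.form['price']
    db.session.commit()
    return jsonify({product.id: {'name': product.name,
                                 'price': str(product.price)}})

def delete(self, id):
    product = Product.query.get(id)
    if not product:
        abort(404)
    db.session.delete(product)
    db.session.commit()
    return jsonify({'deleted': id})

For these calls to be routed, the '/product/<int:id>' rule registered with add_url_rule would also need 'PUT' and 'DELETE' added to its methods list.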
https://code.tutsplus.com/tutorials/building-restful-apis-with-flask-diy--cms-26625
CC-MAIN-2021-04
refinedweb
1,204
60.01
Lesson 4 - Data processing and validations in ASP.NET Core MVC In the previous lesson, Form handling in ASP.NET Core MVC, we started with form handling and wiring the model to the view. We're gonna finish the simple calculator in today's C# .NET tutorial. Submitting forms When we submit our form, nothing happens yet. This is because the form sends data to the same page. The request is then captured by the Index() method in the controller, which looks like this: public IActionResult Index() { Calculator calculator = new Calculator(); return View(calculator); } Even after submitting the form, a new calculator instance is created, which has a zero result and is set to the view. Therefore, our result is still zero. The situation of submitting a form to a page, which should process it, is sometimes called PostBack. We need to respond to this situation in a different way than rendering the page meant for the situation when the user hasn't sent anything yet. We add another Index() method to the controller, this time with a parameter and the HttpPost attribute: [HttpPost] public IActionResult Index(Calculator calculator) { if (ModelState.IsValid) { calculator.Calculate(); } return View(calculator); } In the method, we get the calculator instance as a parameter as it was created from the values in the submitted form. If the model is valid, which we can find out using the ModelState class, we call the Compute() method. Finally, we return the view to which we pass the given model with the values and the result. The method is marked with the [HttpPost] attribute. By that we say that we wish it to be called only if the form has been submitted. ASP.NET always calls the overload method that best suits the situation. So the form submission won't fall into the first Index() method anymore, but into the new one. We can try the application to see the result of the operation: Other fields remain filled as well because the value is retrieved from the model that has it set from POST. GET and POST It would be good to say that besides the POST method, there's also a GET method. Both methods can be used to send data to our application. We already know that we use the POST method to submit form values. Data is sent within the HTTP request to the server. Although it doesn't need to be strictly followed, the POST method is primarily used to submit new data. Using the GET method, we pass data through the URL address. If we wanted to force it in a controller method, we could use the [HttpGet] attribute similarly as we did it with POST. Let's create a simple example of using a parameter passed by the GET method. For example, if we wanted to specify a dedication in the URL for whom the calculator was created, the URL would look like this: Or like this if we provided the full form: Your port will, of course, be different again. To read this value, we'll move to the first Index() method to which we'll add a parameter. We'll pass it to the view using ViewBag: public IActionResult Index(string name) { Calculator calculator = new Calculator(); ViewBag.Name = name; return View(calculator); } We'll print the parameter in the view in the <h2> heading: <h2> Calculator @if (ViewBag.Name != null) { <text> for @ViewBag.Name</text> } </h2> If the name GET parameter is set (not null), we write "for" and the content of this parameter to the heading. If we print text in the @if construct, it should be wrapped in the <text> element; The result: We could just as well pass the parameter to the model instead to the view. 
I have no idea how to use it in our calculator, however, in the app with the random number generator, such URL parameter could indicate how many numbers we want to generate. So we know how to pass data to the server script either through the URL using the GET method or through the body of the HTTP request using the POST method. POST is mainly used for forms. If we want the value to be entered as a link, we use GET. Labels Form field labels contain text with the name of the property they are bound to. Labels such as "Number1" aren't very nice for the user. Therefore, we'll add attributes with more descriptive names to the properties in the Calculator class: public class Calculator { [Display(Name = "1st number")] public double Number1 { get; set; } [Display(Name = "2nd number")] public double Number2 { get; set; } public double Result { get; set; } [Display(Name = "Operation")] public string Operation { get; set; } public List<SelectListItem> PossibleOperations { get; set; } // ... We need to add using System.ComponentModel.DataAnnotations; to the attributes. The result: Validation The last topic that we're gonna try on our calculator is validation. You must have already found out that if you don't enter a number or you manage to submit a string instead of a number, ASP.NET won't allow you to submit the form. Validations are generated automatically based on the data types of the given model property and are both client-side and server-side. If you enter an invalid input, it'll be captured by a JavaScript validator before sending it to the server. Therefore, the request won't be sent at all. To make sure this works every time, the same validation must be on the server because the user can, for example, disable JavaScript. We provide additional validation requirements as attributes. The [Required] attribute allows us to specify that the input field is required. All non-nullable value types (e.g. int, decimal, DateTime...) are considered as Required automatically. For numbers, we can also validate their range, for example: [Range(1, 100, ErrorMessage = "Enter the number from 1 to 100.")] Note that using validation attributes, we can easily specify the message that is displayed to the user when the value is entered incorrectly. For strings, we can validate their length, for example: e.g. [StringLength(5)] Or validate them using regular expressions: [RegularExpression("\\d+", ErrorMessage = "Invalid code")] You can try that the messages will really show up. For some fields (for example, for the numbers) these are created automatically by the framework using JavaScript, or the browser will customize it itself. Try to type letters in the field for the 1st number: The calculator model is now full of attributes and is customized for the View. Such models are often called ViewModels. You should know this term if you've ever worked with WPF. That would be all for our calculator. In the next lesson, Modifying the MVC template in ASP.NET Core, we'll start something more interesting. It'll be a personal blog with an administration. The source codes of today's project are available for download below. Download Downloaded 2x (1.31 MB) Application includes source codes in language C# No one has commented yet - be the first!
https://www.ict.social/csharp/asp-net/core/basics/data-processing-and-validations-in-aspnet-core-mvc/
CC-MAIN-2020-16
refinedweb
1,177
63.59
Here's our goal for this example. We have a shapefile with the major roads of the United States; you can download a sample dataset here. You'll want the latest version of PyShp, which provides the newer iterShapeRecords() method, an efficient iterator for both the shapes and the dbf attributes in a single object:

import shapefile
# Create a reader instance for our US Roads shapefile
r = shapefile.Reader("roadtrl020")
# Create a writer instance copying the reader's shapefile type
w = shapefile.Writer(r.shapeType)
# Copy the database fields to the writer
w.fields = list(r.fields)
# Our selection box that contains Puerto Rico
xmin = -67.5
xmax = -65.0
ymin = 17.8
ymax = 18.6
# Iterate through the shapes and attributes at the same time
for road in r.iterShapeRecords():
    # Shape geometry
    geom = road.shape
    # Database attributes
    rec = road.record
    # Get the bounding box of the shape (a single road)
    sxmin, symin, sxmax, symax = geom.bbox
    # Compare it to our Puerto Rico bounding box.
    # go to the next road as soon as a coordinate is outside the box
    if sxmin < xmin: continue
    elif sxmax > xmax: continue
    elif symin < ymin: continue
    elif symax > ymax: continue
    # Road is inside our selection box.
    # Add it to the new shapefile
    w._shapes.append(geom)
    w.records.append(rec)
# Save the new shapefile! (.shp, .shx, .dbf)
w.save("Puerto_Rico_Roads")

And here's our result!
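As a quick sanity check (not part of the original post), the clipped output could be read back to confirm that only roads inside the selection box survived; the file names follow the script above.

import shapefile

# Read back the clipped shapefile produced by the script above
clipped = shapefile.Reader("Puerto_Rico_Roads")
print("roads kept:", clipped.numRecords)
print("overall bounding box:", clipped.bbox)

# Every remaining road's bbox should sit inside the Puerto Rico box
xmin, ymin, xmax, ymax = -67.5, 17.8, -65.0, 18.6
for sr in clipped.iterShapeRecords():
    sxmin, symin, sxmax, symax = sr.shape.bbox
    assert sxmin >= xmin and sxmax <= xmax
    assert symin >= ymin and symax <= ymax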
http://geospatialpython.com/2015/05/clipping-shapefile-in-pure-python.html
CC-MAIN-2017-26
refinedweb
237
68.67
Enterprise Java is a popular subject these days. An increasing number of web applications are being deployed using application servers which conform to the Java 2 Platform Enterprise Edition (J2EE) standard. J2EE adds a set of extensions to server-side Java which aid in writing distributed and multi-tier applications. If youre completely new to J2EE, you may want to review the pages at. The J2EE standard covers several technologies, including EJB (Enterprise JavaBeans), Servlets, JSP (JavaServer Pages), JNDI (Java Naming and Directory Interface), and JDBC (Java DataBase Connectivity). In this article, Ill try to give you all the information you need to start deploying and testing your J2EE applications on Mac OS X using the open-source servers JBoss and Jetty. JBoss is an open-source J2EE application server that claims to be the most downloaded J2EE web server in the world. The JBoss product is open source, but the developers encourage users to buy their documentation. There are several documentation options, all detailed at. This article covers only a tiny fraction of JBosss capabilities. If youre going to use JBoss, I strongly recommend buying the documentation. Jetty is an open-source HTTP server and Servlet container. Its available for download in a bundle with the JBoss server. You can also download JBoss with the popular open-source Tomcat server. The combination of JBoss with Jetty (or Tomcat) provides a complete J2EE container. Since Mac OS X 10.3 includes JBoss, these installation instructions are only relavent to non-server versions of Mac OS X. All versions of Mac OS X come with Java 2, so pure-Java applications like JBoss and Jetty can be rapidly deployed. First, get the current stable JBoss/Jetty bundle from. Versions of JBoss after 3.0 have Jetty included by default. In this article, Ill be using JBoss-3.0.2.zip. Pick an installation location (I picked /usr/local/jboss) and make sure you can write to the folder. /usr/local/jboss shell> sudo mkdir /usr/local/jboss shell> sudo chown liz:staff /usr/local/jboss Then move the .zip file to its installation base directory (if it isnt there already) and unzip it using the jar command. .zip jar shell> mv jboss-3.0.2.zip /usr/local/jboss shell> cd /usr/local/jboss shell> jar -xf jboss-3.0.2.zip Now youll need to make the startup scripts executable. shell> cd jboss-3.0.2/bin/ shell> chmod ug+x *.sh If you happen to be using a version of JBoss prior to 3.0, you may have to fix a glitch in one of the startup scripts. This bug has been fixed in JBoss 3.0. In earlier versions, the run.sh script incorrectly identifies a HotSpot server VM (Mac OS X comes standard with only the HotSpot client VM). You can fix this problem by replacing this line in jboss/bin/run.sh: run.sh jboss/bin/run.sh HOTSPOT=`java -version 2>&1 | grep HotSpot`"x" with this one HOTSPOT=`java -version 2>&1 | grep HotSpot | grep Server`"x" Now youre ready to start JBoss and Jetty: shell> bin/run.sh The JBoss/Jetty bundle comes pre-configured and ready to run simple J2EE applications. One of the nicer features of JBoss is its ability to instantly deploy an application. Once youve created your application, drop it into the jboss/deploy directory and it will be loaded. Theres no need to re-start the server. jboss/deploy In order to compile and deploy the example code in this article, youll need to install the Jakarta projects open-source build tool, called Ant. 
Im using version 1.4.1 and installed it in /usr/local/ant, with these commands: /usr/local/ant shell> sudo mkdir /usr/local/ant shell> sudo chown liz:staff /usr/local/ant shell> mv jakarta-ant-1.4.1-bin.tar.gz /usr/local/ant shell> cd /usr/local/ant shell> tar -xzvf jakarta-ant-1.4.1-bin.tar.gz In order to use Ant, youll want to have the environment variable $JAVA_HOME set, and you may want to add the Ant bin directory to your $PATH. bin shell> export JAVA_HOME=/usr shell> export PATH=$PATH:/usr/local/ant/jakarta-ant-1.4.1/bin Of course, you can put these commands into a dotfile (a Unix shell startup file) so theyre permanently available. See the man page for your default shell for how to set up dotfiles (for example man tcsh or man sh). Click Terminal/Preferences/Shell from within the Terminal application if you dont know which shell youre running. man tcsh man sh Terminal/Preferences/Shell JBoss is an EJB 1.1-compliant application server. If youre completely new to EJB, I recommend that you review the pages at. Why use Enterprise JavaBeans? The EJB container (the environment in which your beans run, like JBoss) can easily handle some typically cumbersome tasks. It can handle transactions, session management, and can even abstract database code if you want it to. EJB technology also makes it easier to write scalable applications, with code distributed across more than one machine. There are three types of Enterprise JavaBeans: Entity, Session, and Message-Driven. Ill show you examples of Entity and Session beans in this article. Session beans represent a single clients session. They can be either stateful or stateless. They do not persist beyond a single clients session. Entity beans represent objects maintained in persistent storage (like a database). One of the cool things about using EJB technology is that your application server handles all kinds of tricky things for you. The cost of all these free goodies is that your applications are forced to have a fairly complicated setup process. But after creating a J2EE application or two, youll get the hang of it. And if you shelled out for a graphical J2EE development environment, that tool will automate most or all of the setup process for you. For now, though, Ill show you how to create some simple J2EE applications using only command-line tools. Heres a snapshot of the files necessary to create a Hello World application consisting of a servlet and a session bean. Here is a tarball that should be helpful for the HelloWorld application, as well as HelloEntity and MailList which we explore below: The J2EE Platform Specification calls for class and XML files to be bundled together in a specific format for deployment. Web objects, like servlets and their associated web.xml files, go into a WAR (Web ARchive) file. EJB classes and their XML config files go into a JAR (Java ARchive) file. Then both those archives get bundled with a file called application.xml into a new EAR (Enterprise ARchive) file. Once thats done, you can deploy your new web application by copying the EAR file into your jboss/deploy directory. web.xml application.xml First, lets look at the XML files. The first two define a session bean called HelloWorld. The jboss.xml file is not strictly required for this application, but Ive included it anyhow. This is the file that would contain vendor-specific configuration information relating to your EJBs. Another file, called jboss-web.xml (not shown) can be used for JBoss-specific configuration relating to your web application. 
HelloWorld jboss.xml jboss-web.xml shell>cat ejb-jar.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE ejb-jar PUBLIC '-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 1.1//EN' ''> <ejb-jar> <display-name>Hello World</display-name> <enterprise-beans> <session> <description>Hello World EJB</description> <display-name>HelloWorld</display-name> <ejb-name>HelloWorld</ejb-name> <home>HelloWorldHome</home> <remote>HelloWorld</remote> <ejb-class>HelloWorldEJB</ejb-class> <session-type>Stateless</session-type> <transaction-type>Container</transaction-type> </session> </enterprise-beans> </ejb-jar> shell> cat jboss.xml <?xml version="1.0"?> <jboss> <enterprise-beans> <session> <ejb-name>HelloWorld</ejb-name> </session> </enterprise-beans> </jboss> The web.xml file should look familiar to you if youve worked with other servlet engines, or have read the Apple Internet Developer articles on Tomcat. This project includes a bare-bones web.xml file. shell> cat web.xml <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE web-app PUBLIC '-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN' ''> <web-app> <display-name>Hello World</display-name> <description>Hello World</description> <servlet> <servlet-name>HelloWorldServlet</servlet-name> <servlet-class>HelloWorldServlet </servlet-class> </servlet> <servlet-mapping> <servlet-name>HelloWorldServlet</servlet-name> <url-pattern>/HelloWorlds</url-pattern> </servlet-mapping> <ejb-ref> <ejb-ref-name>HelloWorldHome</ejb-ref-name> <ejb-ref-type>Session</ejb-ref-type> <home>HelloWorldHome</home> <remote>HelloWorld</remote> <ejb-link>HelloWorld</ejb-link> </ejb-ref> </web-app> Finally, heres the application.xml file, with information about the application as a whole. shell> cat application.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE application PUBLIC '-//Sun Microsystems, Inc.//DTD J2EE Application 1.2//EN' ''> <application> <display-name>Hello World</display-name> <description>Hello World</description> <module> <ejb>HelloWorld.jar</ejb> </module> <module> <web> <web-uri>HelloWorld.war</web-uri> <context-root>HelloWorld</context-root> </web> </module> </application> Next, lets look at the Java code in HelloWorld/src. The first two files, HelloWorld.java and HelloWorldHome.java, define the home and remote interfaces for the new session bean. HelloWorld/src HelloWorld.java HelloWorldHome.java shell> cat HelloWorld.java import java.rmi.*; import javax.ejb.*; public interface HelloWorld extends EJBObject { public String hi() throws RemoteException; } shell> cat HelloWorldHome.java import java.rmi.*; import javax.ejb.*; public interface HelloWorldHome extends EJBHome { public HelloWorld create() throws RemoteException, CreateException; } Next is the class file for the bean. This is the file that contains your business logic. It creates a string with the contents hiya. Finally, heres the servlet code. Now that youve got a pile of .xml files and a pile of .java files, you can use Ant to compile your code, bundle it, and deploy it to your application server. By default, when Ant is run, it looks for a file in the current directory called build.xml and will follow the directions inside. Heres the build.xml for the Hello World application. .xml .java build.xml With all these files in place, youre ready to run Ant. shell> ant On success, you should see output something like this. Once your application is deployed, you can visit in your browser to see all that business logic in action. An entity bean represents an object in persistent storage. 
An entity bean represents an object in persistent storage. There are two ways to handle persistence — user-managed and container-managed. In user-managed persistence, the code author must explicitly store and retrieve the bean's data from storage, usually via JDBC calls. In container-managed persistence, the EJB container handles all the storage and retrieval behind the scenes. JBoss comes pre-configured to use the HypersonicSQL embedded database, which means you can run this example code without changing your JBoss configuration. If you'd like to use a different database, such as MySQL, see the JBoss documentation.

Here's another simple web application. This one uses an entity bean to create, store, and retrieve user records. A user record consists of two data fields: name and email. Notice that even though the records are being stored in the HypersonicSQL database, I didn't write a line of database code. Instead, I specify the fields in my ejb-jar.xml file, and rely on my EJB container to manage persistence. Here is a listing of the other XML files this application uses. Now to the Java source. First, the remote and home interfaces, the entity bean, the servlet, and the build.xml file.

You might have noticed by now that the build files look a lot alike. In fact, the only thing I really had to change was the appname property (although I also changed the project name). The rest of the file is pretty standard, and I hope you can re-use it for your projects. You will have to customize your build.xml files further, of course, once you're building more complicated applications, using packages, and requiring more libraries in your compilation path. My advice is that you either get to know Ant, or invest in a Java IDE with project management capabilities. Once deployed (again, just type ant once your files and directories are in place), your application should be available via.

Transactions are multi-part actions that are required to function as one unit. Either all actions succeed together, or they all fail. Just as an entity bean's data persistence may be user- or container-managed, a J2EE transaction can be handled by your bean code, or handed off to the container. If the container is managing a transaction associated with a certain method, it will automatically roll back any database calls in that method upon receiving a system exception. For more on transactions, visit this tutorial on java.sun.com. You can define container-managed transactions in your bean's deployment descriptor (the ejb-jar.xml file). See the assembly-descriptor section of for full details. In general, your code will look something like this:

[...declaration for a bean called "MyExample"...]
<transaction-type>Container</transaction-type>
[...]
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>MyExample</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>

JBoss comes with a built-in transaction manager. Like other JBoss component parts, it can be replaced with another JTA (Java Transaction API) transaction manager implementation. See the JBoss documentation for instructions.

I'll wrap up this article with a simple web application to manage an announcement mailing list. Two servlets take care of the user interface for adding/removing users and sending announcements. A container-managed entity bean handles the user data. There are several things lacking in this example application, most notably bounce handling, error checking, and any kind of security.
For security reasons, I don't recommend installing this demo in its existing form on a publicly accessible server for any length of time. If you plan on extending this code to create your own real-world application, note that JBoss makes it relatively easy to password-protect your applications (once again, via XML configuration files). See the JBoss documentation for more information. Here are the files that make up this last application. First, a listing of the XML files, the bean interfaces, the bean class, and the two servlets. Once you've built and deployed your application using Ant, you should see pages like these. I've fudged the example email addresses in the screen shots, but you should get the idea.

In this article, you've seen how to deploy some simple J2EE web applications using JBoss, Jetty, and Ant. You've also seen a few of the advantages of the EJB architecture. Being able to hand off whole sections of complex code to your EJB container is a non-trivial advantage, and in my opinion it's well worth the learning curve. To learn more about J2EE and EJB, you can visit the excellent tutorials at java.sun.com. The JBoss documentation is also invaluable. For books on J2EE and Java, visit java.oreilly.com.
http://devworld.apple.com/internet/java/enterprisejava.html
crawl-002
refinedweb
2,544
50.12
Tracking outgoing email from the mail logs with pymongo

Mail logs are sent to a centralized syslog. I have a simple Python script that tails the common mail log file every 5 minutes, counts the lines that conform to a specific regular expression (looking for a specific msgid pattern), then inserts that count into a MongoDB database. Here's the snippet of code that does that:

import datetime
from pymongo import Connection

conn = Connection(host="myhost.example.com")
db = conn.logs
maillogs = db.mail
d = {}
now = datetime.datetime.now()
d['insert_time'] = now
d['msg_count'] = msg_count
maillogs.save(d)

I use the pymongo module to open a connection to the host running the mongod daemon, then I declare a database called logs and a collection called maillogs within that database. Note that both the database and the collection are created on the fly in case they don't exist. I then instantiate a Python dictionary with two keys, insert_time and msg_count. Finally, I use the save method on the maillogs collection to insert the dictionary into the MongoDB logs database. Can't get any easier than this.

Visualizing the outgoing email count with gviz_api

I have another simple Python script which queries the MongoDB logs database for all documents that have been inserted in the last hour. Here's how I do it:

MINUTES_AGO=60
conn = Connection()
db = conn.logs
maillogs = db.mail
now = datetime.datetime.now()
minutes_ago = now + datetime.timedelta(minutes=-MINUTES_AGO)
rows = maillogs.find({'insert_time': {"$gte": minutes_ago}})

As an aside, when querying MongoDB databases that contain documents with timestamp fields, the datetime module will become your intimate friend. Just remember that you need to pass datetime objects when you put together a pymongo query. In the case above, I use the now() method to get the current timestamp, then I use timedelta with minutes=-60 to get the datetime object corresponding to 'now minus 1 hour'.

The gviz_api module has decent documentation, but it still took me a while to figure out how to use it properly (thanks to my colleague Dan Mesh for being the trailblazer and providing me with some good examples). I want to graph the timestamps and message counts from the last hour. Using the pymongo query above, I get the documents inserted in MongoDB during the last hour. From that set, I need to generate the data that I am going to pass to gviz_api:

chart_data = []
for row in rows:
    insert_time = row['insert_time']
    insert_time = insert_time.strftime('%H:%M')
    msg_count = int(row['msg_count'])
    chart_data.append([insert_time, msg_count])

jschart("Outgoing_mail", chart_data)

In my case, chart_data is a list of lists, each list containing a timestamp and a message count. I pass the chart_data list to the jschart function, which does the Google Visualization magic:

def jschart(name, chart_data):
    description = [
        ("time", "string"),
        ("msg_count", "number", "Message count"),
    ]
    data = []
    for insert_time, msg_count in chart_data:
        data.append((insert_time, msg_count))

    data_table = gviz_api.DataTable(description)
    data_table.LoadData(data)

    # Creating a JSON string
    json = data_table.ToJSon()

    name = "OUTGOING_MAIL"
    html = TEMPL % {"title" : name, "json" : json}
    open("charts/%s.html" % name, "w").write(html)

The important parts in this function are the description and the data variables. According to the docs, they both need to be of the same type, either dictionary or list. In my case, they're both lists. The description denotes the schema for the data I want to chart.
I declare two variables I want to chart, insert_time of type string, and msg_count of type number. For msg_count, I also specify a user-friendly label called 'Message count', which will be displayed in the chart legend. After constructing the data list based on chart_data, I declare a gviz_api DataTable, I load the data into it, I call the ToJSon method on it to get a JSON string, and finally I fill in a template string, passing it a title for the chart and the JSON data. The template string is an HTML + Javascript snippet that actually talks to the Google Visualization backend and tells it to create an Area Chart. Click on this gist to view it.

That's it. I run the gviz_api script every 5 minutes via crontab and I generate an HTML file that serves as my dashboard. I can also easily write a Nagios plugin based on the pymongo query, which would alert me for example if the number of outgoing email messages is too low or too high. It's very easy to write a Nagios plugin by just having a script that exits with 0 for success, 1 for warnings and 2 for critical errors. Here's a quick example, where wlimit is the warning threshold and climit is the critical threshold:

def check_maillogs(wlimit, climit):
    # MongoDB
    conn = Connection()
    db = conn.logs
    maillogs = db.mail
    now = datetime.datetime.now()
    minutes_ago = now + datetime.timedelta(minutes=-MINUTES_AGO)
    count = maillogs.find({'insert_time': {"$gte": minutes_ago}}).count()
    rc = 0
    if count > wlimit:
        rc = 1
    if count > climit:
        rc = 2
    print "%d messages sent in the last %d minutes" % (count, MINUTES_AGO)
    return rc

Update #1

See Mike Dirolf's comment on how to properly insert and query timestamp-related fields. Basically, use datetime.datetime.utcnow() instead of now() everywhere, and convert to local time zone when displaying.

Update #2

Due to popular demand, here's a screenshot of the chart I generate. Note that the small number of messages is a very, very small percentage of our outgoing mail traffic. I chose to chart it because it's related to some new functionality, and I want to see if we're getting too few or too many messages in that area of the application.

9 comments:

Very cool - thanks for the post! One thing to consider is using utcnow() instead of now() throughout, or attaching a timezone to the result of now(). Saving naive datetime instances that aren't UTC, like those returned by now(), is generally a bad idea - the dates inside of the server won't be UTC in that case. This FAQ has some more info: - Mike

Thanks for the quick comment and feedback, Mike! I'll definitely follow your advice and use utcnow() throughout. Grig

Screenshot is missing ;)

I've been using Protovis for visualization stuff. It's probably one of the most impressive projects I've seen this year. Checkout: Ben

nice work. got a screenshot of the output or your dashboard?

Thanks for all the comments. I updated the blog post with a reference to Mike's comment and with a screenshot.

You might be interested by this visualization I created on 15 years of emails. Basically each mail has a date AND a time. So I put a dot for each email: the date on the horizontal axis, the time on the vertical axis (0-24h).

I am looking for a replacement for CouchDB because I am missing advanced query features. I considered MongoDB but now tend towards PostgreSQL. I want REST and JSON so I could natively call from a JavaScript client and I would also like to zero out the middleware part (that's the reason why I used CouchDB).
I mean I need a small middleware that can provide ultra fast REST/JSON web services on top of PostgreSQL. I remember your talk about restish. Do you think there is something better than restish available now? What made you use MongoDB instead? Cheers, Mark

Mark -- I use MongoDB as a convenient data store for the log data that I'm capturing. It's much easier to use in this scenario than a traditional RDBMS. I am not too worried about fault tolerance, high availability etc. either, since it's just log data. I think the issue of what frontend to use with it is orthogonal. Grig
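Following up on Update #1 and Mike's comment above, here is a minimal sketch of the utcnow()-based approach; the pytz usage and timezone name are assumptions, just one way to localize for display. The d, maillogs and MINUTES_AGO names are reused from the snippets in the post:

import datetime
import pytz  # assumption: any timezone library would do

# store UTC timestamps
d['insert_time'] = datetime.datetime.utcnow()

# query with UTC as well
minutes_ago = datetime.datetime.utcnow() - datetime.timedelta(minutes=MINUTES_AGO)
rows = maillogs.find({'insert_time': {"$gte": minutes_ago}})

# convert to local time only when displaying
local_tz = pytz.timezone('US/Pacific')
for row in rows:
    local_dt = pytz.utc.localize(row['insert_time']).astimezone(local_tz)
    print local_dt.strftime('%H:%M')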
http://agiletesting.blogspot.com/2010/07/tracking-and-visualizing-mail-logs-with.html
CC-MAIN-2019-35
refinedweb
1,281
64.91
on exporting 3D element geometry to a WebGL viewer. If this is of interest to you, you absolutely must check out two subsequent The 3D Web Coder posts on the desktop driven WebGL viewer and converting it to a node.js server. WebGL Developers Meetup in NYC April 21 Get ready for our WebGL Developers Meetup in NYC on Tuesday April 21. Come and join Jim Quanci @jimquanci from Autodesk to learn how to easily add 3D viewing functionality to your web and mobile apps using View and Data API. AEC Hackathon in Dallas May 1-3 As an AEC developer, you will certainly not want to miss the AEC Hackathon 2.2 in Dallas. We will be on site to show the View and Data API and of course help all hackers on our AEC products. Angelhack in Dubai May 7-9 Of special interest to me personally, of course: I will be going to the Angelhack in Dubai on May 7-9. Since the weekend is on Friday and Saturday there, we have scheduled an Angel workshop meetup session on Thursday evening, where, once again, I will be presenting on the View and Data API and anything else you want to learn. Then you will be fully primed and up to speed for the subsequent hackathon! I am looking forward to meeting you there! Angelhack in Athens June 5-7 Same thing again... of special interest to me personally: I will be going to the Angelhack in Athens on June 5-7. Again, we have scheduled an Angel workshop meetup session on Friday evening to get us all well prepared and up to speed for the subsequent hackathon. I am looking forward to meeting you there! Wall Area Calculation Handling Multiple Openings in Multiple Walls in Multiple Rooms I recently presented a promising solution to address this in the discussion on calculating gross and net wall areas, publishing the initial VB.NET solution by Phillip Miller of Kiwi Codes Solutions Ltd and an enhanced C# implementation in the SpatialElementGeometryCalculator GitHub repository incorporating the suggestion by Vilo to use FindInserts to retrieve all openings in all wall types. Håkon Clausen responded to that in this comment:I found two issues with this solution after playing around with it for some hours. - In contrast to the "Calculating Gross and Net Wall Areas" solution, after getting the wall inserts with FindInserts it does not check that the inserts actually belongs to the room and thus removes inserts to other rooms if the wall is part of multiple rooms. This will of course lead to wrong result. Checking the instance room/toroom/fromroom parameter like the previous solution rectifies this. - If a wall has multiple faces or subfaces, the opening area is subtracted multiple times. Take e.g. the hall from the 2013 basic sample (or any room with a wall with multiple doors to other rooms. This will make calwallAreaMinusOpenings report a negative wall area. Creating a gross sum for each wall, then looping through the walls found and calculate the opening area once seems to be an approach. I added that to the SpatialElementGeometryCalculator GitHub repository as issue #1, improvements for multiple rooms and multiple subfaces, and suggested to Håkon to fork the repo, add his changes to that, and create a pull request for me to merge them back in. He did so and created pull request #2, rework based on comment on blog. 
For your convenience, here is the updated implementation:

public Result Execute(
  ExternalCommandData commandData,
  ref string message,
  ElementSet elements )
{
  var app = commandData.Application;
  var doc = app.ActiveUIDocument.Document;

  Result rc;

  string s = string.Empty;

  try
  {
    var roomCol = new FilteredElementCollector( doc )
      .OfClass( typeof( SpatialElement ) );

    foreach( var e in roomCol )
    {
      var room = e as Room;

      if( room == null ) continue;
      if( room.Location == null ) continue;

      var sebOptions = new SpatialElementBoundaryOptions
      {
        SpatialElementBoundaryLocation
          = SpatialElementBoundaryLocation.Finish
      };

      var calc = new Autodesk.Revit.DB
        .SpatialElementGeometryCalculator( doc, sebOptions );

      var results = calc
        .CalculateSpatialElementGeometry( room );

      // To keep track of each wall and
      // its total area in the room

      var walls = new Dictionary<string, double>();

      foreach( Face face in results.GetGeometry().Faces )
      {
        foreach( var subface in results
          .GetBoundaryFaceInfo( face ) )
        {
          if( subface.SubfaceType != SubfaceType.Side )
          {
            continue;
          }

          var wall = doc.GetElement( subface
            .SpatialBoundaryElement.HostElementId )
              as HostObject;

          if( wall == null )
          {
            continue;
          }

          var grossArea = subface.GetSubface().Area;

          if( !walls.ContainsKey( wall.UniqueId ) )
          {
            walls.Add( wall.UniqueId, grossArea );
          }
          else
          {
            walls[wall.UniqueId] += grossArea;
          }
        }
      }

      foreach( var id in walls.Keys )
      {
        var wall = (HostObject) doc.GetElement( id );

        var openings = CalculateWallOpeningArea( wall, room );

        s += string.Format(
          "Room: {2} Wall: {0} Area: {1} m2\r\n",
          wall.get_Parameter(
            BuiltInParameter.ALL_MODEL_MARK ).AsString(),
          SqFootToSquareM( walls[id] - openings ),
          room.get_Parameter(
            BuiltInParameter.ROOM_NUMBER ).AsString() );
      }
    }

    TaskDialog.Show( "Room Boundaries", s );

    rc = Result.Succeeded;
  }
  catch( Exception ex )
  {
    TaskDialog.Show( "Room Boundaries",
      ex.Message + "\r\n" + ex.StackTrace );

    rc = Result.Failed;
  }
  return rc;
}

/// <summary>
/// Convert square feet to square meters
/// with two decimal places precision.
/// </summary>
private static double SqFootToSquareM( double sqFoot )
{
  return Math.Round( sqFoot * 0.092903, 2 );
}

/// <summary>
/// Calculate wall area minus openings. Temporarily
/// delete all openings in a transaction that is
/// rolled back.
/// </summary>
private static double CalculateWallOpeningArea(
  HostObject wall, Room room )
{
  var doc = wall.Document;

  var wallAreaNet = wall.get_Parameter(
    BuiltInParameter.HOST_AREA_COMPUTED ).AsDouble();

  var t = new Transaction( doc );
  t.Start( "Temp" );

  foreach( var id in wall.FindInserts(
    true, true, true, true ) )
  {
    var insert = doc.GetElement( id );

    if( insert is FamilyInstance
      && IsInRoom( room, (FamilyInstance) insert ) )
    {
      doc.Delete( id );
    }
  }

  doc.Regenerate();

  var wallAreaGross = wall.get_Parameter(
    BuiltInParameter.HOST_AREA_COMPUTED ).AsDouble();

  t.RollBack();

  return wallAreaGross - wallAreaNet;
}

/// <summary>
/// Predicate to determine whether the given
/// family instance belongs to the given room.
/// </summary>
static bool IsInRoom( Room room, FamilyInstance f )
{
  ElementId rid = room.Id;

  return ( ( f.Room != null && f.Room.Id == rid )
    || ( f.ToRoom != null && f.ToRoom.Id == rid )
    || ( f.FromRoom != null && f.FromRoom.Id == rid ) );
}

Many thanks to Håkon for suggesting, implementing and sharing this with us all! The complete source code, Visual Solution files and add-in manifests for both the original, now outdated, VB.NET and the enhanced C# implementations are hosted by the SpatialElementGeometryCalculator GitHub repository.
The improved version by Håkon presented here is release 2015.0.0.3. I look forward to hearing how you can make use of this. Further enhancements are welcome! Enjoy.
http://thebuildingcoder.typepad.com/blog/2015/04/gross-and-net-wall-area-calculation-enhancement-and-events.html
CC-MAIN-2018-17
refinedweb
1,031
50.12
sauna.reload 0.5.5

Instant code reloading for Plone using a fork loop

sauna.reload: so that you can finish your Plone development today and relax in sauna after calling it a day

- Introduction
- Installation
- Usage: start Plone in reload enabled manner
- Debugging with sauna.reload
- Background
- Events
- Limitations
- Troubleshooting
- Source
- Credits
- Changelog
  - 0.5.5 (2016-02-24)
  - 0.5.4 (2016-02-24)
  - 0.5.3 (2014-10-19)
  - 0.5.2 (2014-10-18)
  - 0.5.1 (2013-02-20)
  - 0.5.0 (2012-12-28)
  - 0.4.3 (2012-04-25)
  - 0.4.2 (2012-04-24)
  - 0.4.0 (2012-04-22)
  - 0.3.3 (2011-10-17)
  - 0.3.2 (2011-08-21)
  - 0.3.1 – 2011-08-19
  - 0.3.0 (2011-08-12)
  - 0.2.1 (2011-08-03)
  - 0.2.0 (2011-08-04)
  - 0.1.1 (2011-08-03)
  - 0.1.0 (2011-08-03)

Introduction

sauna.reload is a developer tool which restarts Plone and reloads your changed source code every time you save a file. The restart is optimized for speed and happens much faster than a normal start-up process.

- Edit your code
- Save
- Go to a browser and hit Refresh ➔ your latest changes are active

It greatly simplifies your Plone development workflow and gives back the agility of Python. It works with any code. sauna.reload works on OSX and Linux with Plone 4.0 and 4.1. In theory it works on Windows, but no one has looked into that yet.

“I don't want to use sauna.reload as I can knit a row while I restart …”

“no more do I start a 5 minute cigarette every time Plone restarts for 30 seconds… ok wait, this kind of joy leads to poetry, I'm gonna stop here.”

Installation

Here are brief installation instructions.

Prerequisites

In order to take advantage of sauna.reload:

- You know how to develop your own Plone add-ons and the basics of the buildout folder structure
- You know UNIX command-line basics
- You know how to run buildout
- You are running the Linux or OSX operating system

No knowledge of warming up a sauna is needed in order to use this product.

Installing buildout configuration

The recommended installation configuration is to use a ZEO server + 1 client for development. This method is valid for Plone 4.1 and higher. You can use sauna.reload without a separate ZEO database server, using the instance command, but in this case you'll risk database corruption on restarts. Below is the recommended approach to enable sauna.reload for your development environment. However, since this product is only for development use, the data loss should not be a big deal.

Add sauna.reload to your buildout.cfg file:

[buildout]
eggs += sauna.reload

# This section is either client1 / instance depending
# on your buildout
[instance]
zope-conf-additional = %import sauna.reload

OSX notes

If you are using vim (or macvim) on OSX, you must disable vim's writebackups to allow WatchDog to see your modifications (otherwise vim will technically create a new file on each save and WatchDog doesn't report the modification back to sauna.reload). So, add the following to the end of your .vimrc:

set noswapfile
set nobackup
set nowritebackup

Similar issues have been reported with some other OSX editors. Tips and fixes for these are welcome.

Ubuntu / Debian / Linux notes

You might need to raise your open files ulimit (both the hard and the soft limit) if you are operating on a large set of files. 104000 is a known good value. If your ulimit is too low you'll get a very misleading OSError: No space left on device.
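As a quick sketch of what raising the limits can look like (the username and exact file location are assumptions; adjust for your distribution):

# one-off, for the current shell only
ulimit -n 104000

# persistent, via /etc/security/limits.conf (log in again afterwards)
yourusername  soft  nofile  104000
yourusername  hard  nofile  104000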
Usage: start Plone in reload enabled manner

To start Plone with reload functionality you need to give the special environment variable RELOAD_PATH to your client1 command:

# Start database server
bin/zeoserver start

# Start Plone with reload enabled
RELOAD_PATH=src bin/client1 fg

Or if you want to optimize load speed you can directly specify only some of your development products:

RELOAD_PATH=src/my.product:src/my.another.product bin/client1 fg

Warning: If other products depend on your product, e.g. CMFPlone dependencies, sauna.reload does not kick in early enough and the reload does not work.

When reload is active you should see something like this in your console when Zope starts up:

2011-08-10 13:28:59 INFO sauna.reload Starting file monitor on /Users/moo/code/x/plone4/src
2011-08-10 13:29:02 INFO sauna.reload We saved at least 29.8229699135 seconds from boot up time
2011-08-10 13:29:02 INFO sauna.reload Overview available at:
2011-08-10 13:29:02 INFO sauna.reload Fork loop starting on process 14607
2011-08-10 13:29:02 INFO sauna.reload Booted up new new child in 0.104816913605 seconds. Pid 14608

… and when you save some file in the src folder:

2011-08-10 13:29:41 INFO SignalHandler Caught signal SIGINT
2011-08-10 13:29:41 INFO Z2 Shutting down
2011-08-10 13:29:42 INFO SignalHandler Caught signal SIGCHLD
2011-08-10 13:29:42 INFO sauna.reload Booted up new new child in 0.123936891556 seconds. Pid 14609

CTRL+C should terminate Zope normally. There might still be some kinks and error messages with shutdown.

Note: Your reloadable eggs must be included using the z3c.autoinclude mechanism. Only eggs loaded through z3c.autoinclude can be reloaded. Make sure you don't use the buildout.cfg zcml = directive for your eggs or sauna.reload silently ignores changes.

Manual reload

There is also a view on the Zope2 root from which it is possible to manually reload code:

Debugging with sauna.reload

Regular import pdb; pdb.set_trace() will work just fine with sauna.reload, and using ipdb as a drop-in for pdb will work fine as well. When reloads happen while in either pdb or ipdb, the debugger will get killed. To avoid losing your terminal echo because of a reload unexpectedly killing your debugger, you may add the following to your ~/.pdbrc:

import termios, sys
term_fd = sys.stdin.fileno()
term_echo = termios.tcgetattr(term_fd)
term_echo[3] = term_echo[3] | termios.ECHO
term_result = termios.tcsetattr(term_fd, termios.TCSADRAIN, term_echo)

As ipdb extends pdb, this configuration file will also work to restore the terminal echo. sauna.reload also should work nicely with PdbTextMateSupport and PdbSublimeTextSupport. Unfortunately, we haven't seen it working with vimpdb yet.

Background

sauna.reload is an attempt to recreate plone.reload without the issues it has, like being unable to reload new grokked views or portlet code. This project was started on Plone Sauna Sprint 2011; hence the name, sauna.reload. sauna.reload does its reloading using a fork loop. So actually it does not reload the code, but restarts a small part of Zope2. That's why it can reload stuff plone.reload cannot. It does the following on Zope2 startup:

1. Defers loading of your development packages by hooking into the PEP 302 loader and changing their z3c.autoinclude target module (and monkeypatching fiveconfigure/metaconfigure for legacy packages).
2. Starts a watcher thread which monitors changes in your development py-files.
3. Stops the loading of Zope2 in the zope.processlifetime.IProcessStarting event by stepping into an infinite loop; just before this, it tries to load all non-developed dependencies of your development packages (resolved by z3c.autoinclude).
4. It forks a new child and lets it pass the loop.
5. Loads all your development packages by invoking z3c.autoinclude (and fiveconfigure/metaconfigure for legacy packages). This is fast!
6. And now, every time the watcher thread detects a change in development files, it will signal the child to shut down, and the child will signal the parent to fork a new child when it is just about to close itself.
7. Just before dying, the child saves Data.fs.index to help the new child to see the changes in the ZODB (by loading the saved index).
8. GOTO 4.

Internally sauna.reload uses the WatchDog Python component for monitoring file-system change events. See also Ruby guys on fork trick.

Events

Note: The following concerns you only if your code needs to react specially to reloads (clear caches, etc.) sauna.reload emits a couple of events during reloading.

- sauna.reload.events.INewChildForked - Emitted immediately after the new process is forked. No development packages have been installed yet. Useful if you want to do something before your code gets loaded. Note that you cannot listen to this event in a package that is marked for reloading, as it is not yet installed when this is fired.
- sauna.reload.events.INewChildIsReady - Emitted when all the development packages have been installed to the new forked child. Useful for notifications etc.

Limitations

sauna.reload supports only Plone >= 4.0 for FileStorage and Plone >= 4.1 for ZEO ClientStorage.

Does not handle dependencies

sauna.reload has a major pitfall. Because it depends on deferring the loading of the packages to be watched and reloaded, every package depending on those packages should also be defined to be reloaded (in RELOAD_PATH). And sauna.reload doesn't resolve those dependencies automatically!

Forces loading all dependencies

Another potential troublemaker is that sauna.reload performs an implicit <includeDependencies package="." /> for every package in RELOAD_PATH (to preload dependencies for those packages to speed up the reload). This may break some packages.

Does not work with core packages

We are sorry that sauna.reload may not work for everyone. For example, reloading of core Plone packages could be tricky, if not impossible, because many of them are explicitly included by the configure.zcml of CMFPlone and are not using z3c.autoinclude at all. You would have to remove the dependency from CMFPlone for development to make it work…

Product installation order is altered

Also, because the product installation order is altered (by all the above), you may find some issues if your product does something funky on installation or at import time. Please report any other issues at the GitHub issue tracker.

Troubleshooting

Report all issues on GitHub.

My code does not reload properly

You'll see the reload process going on in the terminal, but your code is still not loaded. You should see the following warnings with zcml paths from your products:

2011-08-13 09:38:12 ERROR sauna.reload.child Cannot reload src/sauna.reload/sauna/reload/configure.zcml.

Make sure your code is hooked into Plone through z3c.autoinclude and NOT using an explicit zcml = directive in buildout.cfg.
- Retrofit your eggs with autoinclude support if needed
- Remove zcml = lines for your eggs in buildout.cfg
- Rerun buildout (remember bin/buildout -c development.cfg)
- Restart Plone with sauna.reload enabled

I want to exclude my meta.zcml from reload

It's possible to manually exclude configuration files from reloading by forcing them to be loaded before the fork loop in a custom site.zcml. Be aware that when the site-zcml option is used, zope2instance ignores the zcml and zcml-additional options. Define a custom site.zcml in your buildout.cfg with:

[instance]
recipe = plone.recipe.zope2instance
...
site-zcml =
    <configure xmlns="http://namespaces.zope.org/zope"
               xmlns:meta="http://namespaces.zope.org/meta"
               xmlns:five="http://namespaces.zope.org/five">
    <include package="Products.Five" />
    <meta:redefinePermission from="zope2.Public" to="zope.Public" />
    <five:loadProducts file="meta.zcml" />
    <!-- Add include for your package's meta.zcml here: -->
    <include package="my.product" file="meta.zcml" />
    <five:loadProducts />
    <five:loadProductsOverrides />
    <securityPolicy component="Products.Five.security.FiveSecurityPolicy" />
    </configure>

I want to exclude ALL meta.zcml from reload

Sure. See the tip above and use the snippet below instead:

[instance]
recipe = plone.recipe.zope2instance
...
site-zcml =
    <configure xmlns="http://namespaces.zope.org/zope"
               xmlns:meta="http://namespaces.zope.org/meta"
               xmlns:five="http://namespaces.zope.org/five">
    <include package="Products.Five" />
    <meta:redefinePermission from="zope2.Public" to="zope.Public" />
    <five:loadProducts file="meta.zcml" />
    <!-- Add autoinclude-directive for deferred meta.zcml here: -->
    <includePlugins package="sauna.reload" file="meta.zcml" />
    <five:loadProducts />
    <five:loadProductsOverrides />
    <securityPolicy component="Products.Five.security.FiveSecurityPolicy" />
    </configure>

sauna.reload is not active - nothing printed on console

Check that your buildout.cfg includes the zope-conf-additional line. If using a separate development.cfg, make sure you run your buildout using it:

bin/buildout -c development.cfg
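As a footnote to the Events section above, a minimal, hypothetical sketch of registering a handler for INewChildIsReady with zope.component (the handler name and its body are made up for illustration):

from zope.component import adapter, provideHandler
from sauna.reload.events import INewChildIsReady

@adapter(INewChildIsReady)
def on_new_child_ready(event):
    # e.g. warm up or invalidate module-level caches after the
    # freshly forked child has loaded your development packages
    print "sauna.reload: new child is ready"

provideHandler(on_new_child_ready)

The same registration could of course be done in ZCML with a subscriber directive instead.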
[datakurre] 0.4.0 (2012-04-22) - Added log-messages about successfully reloaded configuration files. Closes [datakurre] - Added support for reloading a ZEO-client on Plone >= 4.1. Fixes [datakurre] - Added support to p.a.themingplugins when p.a.theming is installed. Fixes [datakurre] - Fixed an issue of zope2.Public not mapped to zope.Public when confiuring products during reload. Fixes [datakurre] - Fixed to depend on watchdog >= 0.6.0 to support OSX out-of-the-box. [datakurre] 0.3.3 (2011-10-17) - Fixed an issue in initializing more than one reloaded Archetype products. 0.3.2 (2011-08-21) - Fixed to work better with PdbSublimeText/TextMateSupport. - README-updates (mainly for exluding zcmls from reload). 0.3.1 – 2011-08-19 - Support for Five-initialized products (e.g. Archetypes) on Plone 4.1. - Licensed under ZPL. 0.3.0 (2011-08-12) - Support Plone 4.1 - Heavily edited readme [miohtama] - Show nice error log message if product installation failed to defer ie. is not reloadable. 0.2.1 (2011-08-03) - Add INewChildIsReady event and move INewChildForked event - Remove overly used prints and use logging - Can now reload __init__.py from package root 0.2.0 (2011-08-04) - Support for Five-initialized products (e.g. Archetypes). - Browser View for manual reloading - Emit INewChildForked events - Many bug fixes 0.1.1 (2011-08-03) - Added missing egg description. - Added forgotten reload for localizations. - Prefixed commands in README with $. 0.1.0 (2011-08-03) - First experimental release. - Author: Asko Soukka - Keywords: plone reload zope2 restart developer - License: ZPL - Categories - Package Index Owner: datakurre, epeli - DOAP record: sauna.reload-0.5.5.xml
https://pypi.python.org/pypi/sauna.reload
CC-MAIN-2017-22
refinedweb
2,549
61.63
rand, rand_r, srand - pseudo-random number generator

#include <stdlib.h>

int rand(void);
[TSF] int rand_r(unsigned *seed);
void srand(unsigned seed);

The rand() function shall compute a sequence of pseudo-random integers in the range [0, {RAND_MAX}] [XSI] with a period of at least 2^32.

[CX] The rand() function need not be reentrant. A function that is not required to be reentrant is not required to be thread-safe.

[TSF] The rand_r() function shall compute a sequence of pseudo-random integers in the range [0, {RAND_MAX}]. (The value of the {RAND_MAX} macro shall be at least 32767.)

If rand_r() is called with the same initial value for the object pointed to by seed and that object is not modified between successive returns and calls to rand_r(), the same sequence shall be generated.

The srand() function uses the argument as a seed for a new sequence of pseudo-random numbers to be returned by subsequent calls to rand(). If srand() is then called with the same seed value, the sequence of pseudo-random numbers shall be repeated. If rand() is called before any calls to srand() are made, the same sequence shall be generated as when srand() is first called with a seed value of 1. The implementation shall behave as if no function defined in this volume of IEEE Std 1003.1-2001 calls rand() or srand().

The rand() function shall return the next pseudo-random number in the sequence. [TSF] The rand_r() function shall return a pseudo-random integer. The srand() function shall not return a value.

No errors are defined.

Generating a Pseudo-Random Number Sequence

The following example demonstrates how to generate a sequence of pseudo-random numbers.

#include <stdio.h>
#include <stdlib.h>
...
long count, i;
char *keystr;
int elementlen, len;
char c;
...
/* Initial random number generator. */
srand(1);

/* Create keys using only lowercase characters */
len = 0;
for (i=0; i<count; i++) {
    while (len < elementlen) {
        c = (char) (rand() % 128);
        if (islower(c))
            keystr[len++] = c;
    }

    keystr[len] = '\0';
    printf("%s Element%0*ld\n", keystr, elementlen, i);
    len = 0;
}

Generating the Same Sequence on Different Machines

The following code defines a pair of functions that could be incorporated into applications wishing to ensure that the same sequence of numbers is generated across different machines.

static unsigned long next = 1;

int myrand(void)  /* RAND_MAX assumed to be 32767. */
{
    next = next * 1103515245 + 12345;
    return((unsigned)(next/65536) % 32768);
}

void mysrand(unsigned seed)
{
    next = seed;
}

The drand48() function provides a much more elaborate random number generator. The limitations on the amount of state that can be carried between one function call and another mean the rand_r() function can never be implemented in a way which satisfies all of the requirements on a pseudo-random number generator. Therefore this function should be avoided whenever non-trivial requirements (including safety) have to be fulfilled. The ISO C standard rand() and srand() functions allow per-process pseudo-random streams shared by all threads.
Those two functions need not change, but there has to be mutual-exclusion that prevents interference between two threads concurrently accessing the random number generator. With regard to rand(), there are two different behaviors that may be wanted in a multi-threaded program: - A single per-process sequence of pseudo-random numbers that is shared by all threads that call rand() - A different sequence of pseudo-random numbers for each thread that calls rand() This is provided by the modified thread-safe function based on whether the seed value is global to the entire process or local to each thread. This does not address the known deficiencies of the rand() function implementations, which have been approached by maintaining more state. In effect, this specifies new thread-safe forms of a deficient function. None. drand48(), the Base Definitions volume of IEEE Std 1003.1-2001, <stdlib.h> First released in Issue 1. Derived from Issue 1 of the SVID. The rand_r() function is included for alignment with the POSIX Threads Extension. A note indicating that the rand() function need not be reentrant is added to the DESCRIPTION. Extensions beyond the ISO C standard are marked. The rand_r() function is marked as part of the Thread-Safe Functions option.
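To make the per-thread behavior described above concrete, here is a small sketch (not part of the standard's examples) that gives each thread its own seed object for rand_r(), so each thread gets an independent, reproducible sequence:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Each thread owns its own seed object, so rand_r() gives each
   thread an independent, reproducible sequence. */
static void *worker(void *arg)
{
    unsigned *seed = arg;
    int i;
    for (i = 0; i < 3; i++)
        printf("seed object %p: %d\n", (void *)seed, rand_r(seed));
    return NULL;
}

int main(void)
{
    unsigned seed1 = 1, seed2 = 2;
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, &seed1);
    pthread_create(&t2, NULL, worker, &seed2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Because each thread passes its own seed object, the sequences do not interfere; sharing a single seed across threads would require external synchronization.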
http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
crawl-001
refinedweb
760
63.29
Convert List To TensorFlow Tensor

Convert a Python list into a TensorFlow Tensor using the TensorFlow convert_to_tensor functionality

Transcript:

This video will show you how to convert a Python list into a TensorFlow tensor using the tf.convert_to_tensor functionality.

First, we import TensorFlow as tf.

import tensorflow as tf

Then we print out the version of TensorFlow we are using.

print(tf.__version__)

We are using TensorFlow 1.8.0.

Next, let's start out by defining a Python list that's composed of interior lists and we assign this list to the Python variable initial_python_list.

initial_python_list = [
 [
  [1,2,3,4],
  [5,6,7,8]
 ],
 [
  [9,10,11,12],
  [13,14,15,16]
 ]
]

Let's print this initial_python_list variable to see what it looks like.

print(initial_python_list)

We can see that it is one long set of nested lists. Let's also do a type check of the variable to make sure it is a Python list.

type(initial_python_list)

We can see that the class is list, so bingo, it worked!

What we want to do now is to convert this Python list to a TensorFlow tensor. To do this, we'll use the tf.convert_to_tensor operation.

tensor_from_list = tf.convert_to_tensor(initial_python_list)

So tf.convert_to_tensor, and we pass in our Python list, and the result of this operation will be assigned to the Python variable tensor_from_list. So we didn't get an error, so let's see what happens when we print the tensor from the Python list variable.

print(tensor_from_list)

We see that it's a tensor. We see that it's a constant. We see that the shape is 2x2x4. So let's check it out. So one, two, so that's the first dimension. Then two rows. That's the second dimension. And one, two, three, four columns. That's the third dimension. And the data type is int32. Here, I used all integers, I didn't use any floating point numbers, so that makes sense.

One interesting thing about this conversion is that when we print it like we just did, we can't actually see the data. So even though our initial Python list has been converted into a tensor, we can't actually see the data. This is because we're still in the phase of building the TensorFlow graph rather than actually evaluating the graph.

So now that we've created our TensorFlow tensors, it's time to run the computational graph.

sess = tf.Session()

We launch the graph in a session. And then we initialize all the global variables in the graph.

sess.run(tf.global_variables_initializer())

So now that our variable has been initialized, let's print our tensor_from_list Python variable again, only this time in a TensorFlow session.

print(sess.run(tensor_from_list))

So we do print, and then sess.run, and pass in our tensor_from_list. We can now see all the data that was in the original Python list.

Perfect! We were able to convert a Python list into a TensorFlow tensor using the tf.convert_to_tensor functionality.
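For convenience, here are the pieces from the transcript collected into a single runnable script (TensorFlow 1.x graph mode, as used in the video):

import tensorflow as tf

print(tf.__version__)  # the video uses 1.8.0

initial_python_list = [
    [[1, 2, 3, 4], [5, 6, 7, 8]],
    [[9, 10, 11, 12], [13, 14, 15, 16]],
]

# Build the graph: convert the nested Python list to a constant tensor
tensor_from_list = tf.convert_to_tensor(initial_python_list)
print(tensor_from_list)  # shape (2, 2, 4), dtype int32; values not shown yet

# Run the graph to actually see the data
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(tensor_from_list))

The tf.Session block is what actually evaluates the graph; without it you only see the tensor's shape and dtype, as the transcript points out.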
https://aiworkbox.com/lessons/convert-list-to-tensorflow-tensor
CC-MAIN-2020-40
refinedweb
506
67.15
I'm officially taking a break from thinking. And I'm doing it by taking in some contract work. NDAs are signed, and contracts exchanged, so while I can't really tell you specifics, it's a WordPress site. Nothing so nourishing as my usual fare; this is the proverbial greasy, bacon and cheese pizza for the soul. The situation has blown me away, by the way. Did you know that there's a staggering amount of money out there for enterprising young hacker/graphic-designer dual classes willing to tweak around with CSS and PHP? It's bizarre because there's nothing inherently difficult about this. It's a fairly well documented, intuitively-named, neatly-packaged pile of security-errors-waiting-to-happen and all you really need to do is poke at a couple of places to make the colors show up right and the logo line up with the menu. It feels strange. I know I struggled with this same shit back at school, but it doesn't feel tough any more. It feels a bit like getting past Cinnabar Island with your pack of level 45 motherfuckers and finding yourself back in idyllic Pallet Town tearing through now-helpless level ~3 Pidgeys. That analogy may brand me as an irredeemable nerd, so feel free to substitute a more mainstream level-grinding RPG if you like. My point is, I used to be helpless in this situation, and I am now arranging reality to suit my whim. The odd part given my recent thoughts, the really odd part, is that it pays at all. Let alone well. This is a simple task that anyone with sufficient time, interest and an internet connection (broadband optional) can learn how to do. It's the ultimate expression of freedom brought about by the GPL style of licensing and application design I talked about earlier. Why the ever-loving fuck aren't there more non-moron contractors taking advantage of the situation? Why aren't these jobs being shipped overseas like so many people seem to think all IT should be? If I had to pick one type of development that a language barrier wouldn't cock up, it would be CMS tweakery. Oddly, it's the one type of IT work that doesn't seem to be going anywhere, at least for the short term, because every place I've worked at so far and every client I've contracted with is having some local WordPress/Joomla/Drupal work done[1]. Back to the Free thing though, since that's what I was thinking about prior to my break. How does PHP+WordPress support those freedoms? First off, it actually is GPL2[2] but second off, it hits all of my added requirements, seemingly by accident. 0. components must be hot-swappable 1. required code must be terse (but not past that threshold that takes it to line-noise levels) 2. it must be simple and consistent enough that it doesn't take all of your brainpower over several hours to get into it and make changes (or, you must be able to more or less ignore the rest of the system while making changes to a specific piece) 3. An error can't bring the whole thing crashing down. It needs to toss you an exception gracefully, let you try some stuff to fix the problem dynamically and then continue on its merry way without a restart.-me, last time Components are hot-swappable thanks to how it interacts with Apache[3]. WordPress is terse yet readable, it's consistent enough that I can get into the codebase easily and I can ignore pieces outside the specific widget/CSS component/theme I'm dealing with at whatever granularity. Finally, an error nukes a page, but doesn't crash the entire site (unless it's made in a template, but that's to be expected). 
Take that, every language I have at least a vague interest in. I read somewhere that the winner of a solution race frequently isn't the best, rather the worst that's still Good Enough™. This ... may sadly be that. Much as I wish a better language had a stronger web presence, PHP seems to be it outside of "actual" programmers[4]. So yeah. I'd prefer the dominating position to be held by a functional, consistent, macro-enabled (or lazy), assertion-capable (or strong-type-inferencing), namespace-enabled language without a deprecated list that includes half its functions, but PHP has floated to the top. And I'm kind of happy about that. Between its pervasiveness[5], GPLv2 licensing of key applications and inherently open code, PHP may be doing more to promote Freedom in the Stallman sense than perhaps any other server-side language. Of course I should probably ding it for also being behind Facebook, but maybe it's time to put a new logo up in the title bar... Footnotes 1 - [back] - I've also seen exactly one SharePoint, which I'm told I should be thankful that I didn't have to maintain 2 - [back] - so it supports the four freedoms 3 - [back] - or whatever server you use 4 - [back] - who are a minority as evidenced by the numbers; WordPress, Joomla and Drupal between them have truly intimidating market share 5 - [back] - in terms of deployment and number of "native speakers"
http://langnostic.blogspot.com/2011/06/ugh.html
CC-MAIN-2013-20
refinedweb
901
67.69
Is there an automatic way to optimize inclusion of header files in C++, so that compilation time is improved? By "automatic" I mean a tool or program. Is it possible to find which header files are obsolete (e.g. exposed functionality is not used)?

Edit: Having each header included only once is one important thing, but is there a way to even change the contents of files so that frequently used "functionality" is on specific includes and less frequently used functionality is on other includes? Am I asking too much? Unfortunately, we are talking about an existing code base with thousands of files. Could it be a refactoring tool what I am actually asking for?

Thank you.

I think what you really want is "include what you use" rather than a minimal set of headers. IWYU means forward declare as much as possible, and include headers that directly declare the symbols you use. You cannot mindlessly convert a file to be IWYU clean as it may no longer compile. When that occurs, you need to find the missing header and add it. However, if every file is IWYU clean your compiles will be faster overall even if you have to add headers occasionally. Not to mention your headers will be more meaningful/self-documenting.

As my previous answer points out, it is technically possible to include even fewer headers than necessary for IWYU, but it's generally a waste of time. Now if only there was a tool to do most of the IWYU refactoring grunt work for you :)

I had considered creating/using a tool like this once. The idea is to use binary search and repeated compilation to find the minimal set of includes. Upon further investigation it didn't seem that useful. Some issues:

Changing the included header files can change the behavior, and still allow the file to compile. One example in particular: if you define your own std::swap in a separate header file, you could remove that header and your code would still compile using the default std::swap implementation. However, the default std::swap may be inefficient, may cause a runtime error, or worse, may produce subtly wrong logic.

Sometimes a header file inclusion works as documentation. For instance, to use std::for_each, often including <vector> is sufficient to get it to compile. The code is more meaningful with the extra #include <algorithm>.

The minimal compilation set may not be portable between compilers or compiler versions. Using the std::for_each example again, there is no guarantee that std::for_each will be provided by <vector>.

The minimal set of includes may not affect compile time significantly anyway. Visual Studio and gcc support #pragma once, which makes repeated includes essentially non-existent performance-wise. And at least gcc's preprocessor has been optimized to process include guards very fast (as fast as #pragma once).
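To make the IWYU advice concrete, a small hypothetical example: the header forward declares the type it only uses through a pointer, includes what it uses by value, and the .cpp pulls in the full definition it actually needs.

// widget.h -- include what you use, forward declare the rest
#pragma once            // or a classic include guard
#include <string>       // used directly: std::string member

class Renderer;         // forward declaration suffices for a pointer

class Widget {
public:
    explicit Widget(Renderer* renderer);
    std::string name() const;
private:
    Renderer*   renderer_;
    std::string name_;
};

// widget.cpp
#include "widget.h"
#include "renderer.h"   // full definition needed to call Renderer methods

Widget::Widget(Renderer* renderer) : renderer_(renderer) {}
std::string Widget::name() const { return name_; }

Clients of widget.h no longer recompile when renderer.h changes, which is where most of the build-time win usually comes from.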
https://codedump.io/share/f7agdDixQgJ4/1/header-inclusion-optimization
CC-MAIN-2017-26
refinedweb
479
55.74
Are you looking for the best JavaScript IDE for web development? Then you are at the right place, as we are discussing the ten best editors for web development here. We are hopeful that you'll be able to choose the right IDE for JavaScript programming. From this post, you'll learn what features a good JavaScript IDE should have. We'll also assess whether each IDE meets present web development requirements and how it can scale to future needs. The right combination of features can dramatically reduce coding effort and increase productivity. Apart from the features, it is even more important to know how hard it is to adopt a tool and become proficient in using it. Ideally, the learning curve should not be steep, so that you can pay attention to the business logic rather than decoding the intricacies of a tool. Next, a candidate for the best JavaScript IDE is one which is easy to customize and extend. Since there are numerous IDEs for JavaScript development, it wasn't easy to keep the list down to no more than 10 IDEs, and the shortlisting process took a lot of filtering. We considered the following parameters for preparing the list.

What are the qualities of a good IDE to make the top 10?

- IntelliSense, which includes – code suggestions, auto code completion, parameter info, and list functions.
- Smart navigation across namespace/project for – Go to definitions, Go to types and Go to members.
- Default code generation.
- Code refactoring and coverage.
- Auto imports on the addition of related code.
- Inline function definition view.
- A unified view of files (sources/imports), errors, warnings, and unit tests.
- Inbuilt unit test controller.
- Integrated debugger.
- Quick error/exception source inspection.
- Extensible source code control.

However, one can't ignore the statistics, as they too indicate the popularity of an IDE. So we also analyzed data like the number of downloads per IDE, votes/polls available online, and search rankings of IDE websites.

1. WebStorm – The Best JavaScript IDE [Paid for Full-featured Version]
2. Atom – The Perfect IDE for the Web and JavaScript
3. Visual Studio Code – The Frontrunner IDE for JavaScript
4. Sublime Text 3 – A Very Powerful Text Editor Turned IDE for JavaScript
5. Brackets – The Open Source Code Editor for Web Development
6. CodeAnyWhere – Cloud IDE for JavaScript and Web Development
7. Cloud9 – Best IDE for JavaScript in the Cloud
8. NetBeans – A Prominent IDE for Web Development in JavaScript
9. IntelliJ IDEA – Multi-purpose IDE to Increase Productivity
10. Eclipse – The Free JavaScript IDE for Web Developers

10 Best JavaScript IDE for Web Development

1. WebStorm – The Best JavaScript IDE [Paid for Full-featured Version]
You can play it within Chrome and debug your code, add breakpoints, and evaluate expressions. With Karma test runner and Mocha, you can begin unit testing simultaneously with development. Another cool feature is Webstorm secret service i.e. Spy-js. When you don’t have logs to trace, don’t wish to debug either, profiling is also not an option. Then, it’s Spy-js which comes to the rescue. It triggers a node.js server to run as a proxy which intercepts all browser traffic and allows editing of JS file on the fly. Miscellaneous Features. The feature trail is not over yet. WebStorm gives you a single place to run Gulp, Grunt, and NPM tasks within the IDE. Also, use JSHint, ESLint or JSLint to identify issues in real time. You even have the option to create projects from inbuilt Express or Web development project templates. So WebStorm has a never ending list of amazing features. And all of them works together to make it as the best JavaScript IDE for web development. Whether you are a beginner or an experienced JavaScript developer, it’s worth choosing WebStorm for managing small to large projects. 2. Atom – The Perfect IDE for the Web and JavaScript Atom is one of the best JavaScript IDE to supercharge your JavaScript development. It is a modern code editor, flexible and customizable as per development needs. It’s a cross-platform tool which is easy to install on Windows, Linux, and Mac OS X. The latest version has got significant speed improvements and is still free to beat up its commercial competitors. Atom is a desktop-based app developed in HTML, JavaScript, CSS using Node.js server. At its core, Atom uses Electron – a framework to build cross-platform desktop apps using JavaScript. It boasts some of the outstanding features that you will vouch for once you get them tested. To start with is the cross-platform editing which gives you the consistent user experience across all supported platforms. The built-in Package Manager follows next. It simplifies the search of available plugins and installs them on priority. You can even create a custom package of your own. Intelligent autocompletion is something which is inherent into Atom. It makes you write code faster and keep you at the same pace you can think of it. Atom has a user-friendly file system browser to help you in opening a single file, a project or many projects at the same time. Next, you can decompose Atom’s UI into sections, compare and edit files on the fly. Inline find/replace text is another operation you can perform as you write code. Atom also allows altering the way it looks, you can choose from a handful of UI and Syntax themes. It is also easy to hack theme by changing its HTML/CSS. You can further stretch Atom’s ability by adding packages like Minimap to display a crisp view of your code, Auto-close HTML tags, and Linter for in-editor code validation. 3. Visual Studio Code – The Frontrunner IDE for JavaScript Visual Studio Code is a lean, fast, cross-platform and free tool created by Microsoft for web development. It’s one of the rare product from MS that works on Linux and Mac OS too. Also, like Atom, Visual Studio Code is using Electron as its founding platform. It has many salient features that place it among the list of best JavaScript IDE. VStudio Code is a one-stop editor which developers can use for web development in more than 30+ programming languages. The list starts with C#, ASP, VB, JavaScript, CSS/HTML, TypeScript, Ruby, PHP, Less, Saas and goes on. 
One of the highlighted features of VS Code is IntelliSense. It helps the programmer by listing code suggestions, hints, and parameter descriptions. You get this feature out of the box for JavaScript, TypeScript, PHP, Sass, and Less; for other languages you may need to install an extension. Any modern editor is incomplete if it lacks a robust debugger, and VS Code has a first-class debugger for JavaScript and Node.js applications. You can start the app in debug mode or attach the debugger at runtime, and you get almost every debug option you might expect from a full-fledged Visual Studio: breakpoints, the call stack, watch variables, pausing, and stepping into code. Miscellaneous Features. VS Code has some other useful functions, like Peek, which lets you expand the definition of any function in an inline popup by pressing Alt+F12. You can use this feature in JavaScript, TypeScript, and a few other programming languages. Another important feature is the Task Runner built into VS Code. It lets you create and configure tasks that use Gulp, Grunt, or MSBuild, and from the command palette you can click a task to execute it. Two more noteworthy features are the CLI and the built-in Git support. With the CLI, you can open the current project folder or any particular directory. The Git integration lets you run commands like commit, pull, push, rebase, and publish, and you can also monitor the changes made to a file.
4. Sublime Text 3 – A Very Powerful Text Editor Turned IDE for JavaScript
Sublime Text 3, also known as SBT3, is a great tool for any developer to have in their toolbox. It is an advanced text editor, highly customizable, cross-platform, and close to challenging any JavaScript IDE at the top. SBT3 is a clean, ubiquitous, and fast code editor. Not only does it come with useful features out of the box, it also has an extensible plugin architecture. A few of SBT3's built-in features are Go to Definition, Go to Symbol, enhanced pane management, and an almost nil load time. With Package Control in SBT3, it is very easy to download and install any additional plugin you need. Another useful add-on is Emmet LiveStyle, which applies what you type instantly, in real time, freeing you from reloading or saving a file. There is also Babel, which provides syntax highlighting for your ES6 (ECMAScript 6) and React JSX code; after adding this plugin, don't forget to set it as the default syntax for .es6/.js6/.js files. Other plugins you should use to extend SBT3 are SublimeLinter to enforce proper styling, JSFormat to auto-format JS and JSON, DocBlockr to simplify commenting, and SideBar Enhancements to edit files from the sidebar. And you probably won't want to miss the largest package, AngularJS. It brings features like code completion, directive completion, a search panel for instant keyword lookup, and Go to Docs for AngularJS directives. Also, get your hands dirty with TypeScript, a superset of JavaScript that allows static typing, classes, and interfaces.
5. Brackets – The Open Source Code Editor for Web Development
Brackets is a modern front-end development tool for web developers. It is the work of Adobe Systems, which released it as open source under an MIT license. Its developers conceived it as a lightweight and fast IDE focused on web design, which is why they built it with HTML, CSS, and JavaScript. It is suitable for coding in many programming languages but optimized for JavaScript, HTML/CSS, Sass, Less, and CoffeeScript.
Like any other IDE, Brackets provides some key features built in, with more available through third-party extensions. The coolest built-in feature is Live Preview, which opens a new Chrome window and renders your changes immediately as you type. It is like the Web Inspector in Chrome, but in reality a self-contained editor. Another feature is Quick Edit, which depends on the context in question: pressing the Ctrl+E shortcut activates an inline editor to modify CSS. You can also view a small swatch of a color by hovering over it, and using the same key (Ctrl+E) you can edit it with the color picker. JSLint is an inbuilt feature and verifies your JavaScript file as you save it. Brackets inherently supports code completion, and it works faster than in most other IDEs. Like other code editors, you can expand Brackets' abilities by installing extensions, and it has a clean, clutter-free interface to search for and add any extension of your choice. You can go for a few like code folding, snippets, and smart highlighting. To debug JavaScript, you should install Theseus, which works with both Node.js and Chrome. A few more useful extensions are Markdown Preview, Autoprefixer, and Language Switcher.
6. CodeAnyWhere – Cloud IDE for JavaScript and Web Development
CodeAnyWhere is a lightweight and powerful web IDE with an integrated FTP client. It lets you use most of the major web languages, such as JavaScript, PHP, HTML, CSS, and XML. This IDE can handle any of your coding tasks and make development more productive and fun. It integrates with many types of files and has all the features you would expect from a desktop editor.
- Supports syntax for 120+ programming languages
- Auto code completion in JS, PHP, HTML, CSS
- Lint tool to verify JS and CSS
- Multiple cursors
- Zen coding support
- Code beautifier
- Multi-browser support
- Drag/drop files and folders from a desktop and edit them
- Open and save extremely large files
CodeAnyWhere supports various clients such as FTP, SFTP, FTPS, SSH, Dropbox, Amazon S3, Google Drive, GitHub, and Bitbucket, so it is easy for you to code and deploy without launching third-party programs. This cloud IDE has a sharing feature, so you can collaborate with a colleague, involve a group to review your code, or let your friends discover your latest code. CodeAnyWhere will fulfill your development needs on the go, anytime, anywhere, and on any platform.
7. Cloud9 – Best IDE for JavaScript in the Cloud
Cloud9 IDE is a fully featured, web-based, and open-source integrated development environment for JavaScript developers. It lets you write, run, and debug code at any time, from anywhere. It primarily focuses on providing a platform for writing and executing Node.js and JavaScript code, although developers can also target Python, Ruby, and Apache+PHP based applications. It is an excellent tool for UI designers and developers alike. The IDE took shape after Ajax.org's ACE and Mozilla's Skywriter projects merged. It makes use of Ajax technology for a fluid user interface and allows refactoring of the Cloud9 UI. It hosts a fast, high-performance text editor for JavaScript, HTML, and CSS. Cloud9 IDE includes syntax highlighting for JavaScript, Python, C/C#, Perl, Ruby, Scala, and a few more languages. A number of other built-in features include multiple cursors, auto code completion, theme selection, search/replace in files, and keyboard shortcuts.
It also offers some unique features, like the ability to share work in real time with a person or a group, and the option of running the IDE on your local server and development environment while it automatically keeps your online workspace in sync with the local copy. If there is anything it doesn't support out of the box, you can import it from outside: its plugin system is simple yet powerful, so just use it to bring in the desired functionality. It also has good debugging support; you can do things like runtime validation, add and remove breakpoints, and perform code analysis and inspection. Cloud9 enables end-to-end web development using both JavaScript and Node.js. It is an essential tool if you are serious about creating Node.js-enabled web apps. Needless to say, Cloud9 is the best JavaScript IDE in the cloud IDE space.
8. NetBeans – A Prominent IDE for Web Development in JavaScript
NetBeans is a well-known name in the Java world for web development. Now it has brought in significant capabilities to support JavaScript and Node.js. In addition to JavaScript, it has full support for HTML5 and CSS3 for building world-class web apps, and it includes a project wizard for creating Node.js and JavaScript projects. It brings all the basic IDE features to the table, like code folding, navigation to types, bracket matching, go to declaration, code formatting, and JSON support. NetBeans allows you to mark occurrences of literals and edit them in place. It also supports code completion and type analysis. Code completion works for all core JavaScript classes, such as the String class, and the types are visible during code completion: in the suggestion list, in code hints, and in parameter help. For running and debugging JavaScript and Node.js applications, NetBeans has made several changes in its interface, and all these features are very well supported now. It also integrates the Mocha JavaScript test framework and the Selenium browser framework for both unit and UI testing. NetBeans is indeed a great offering promoted by Oracle. To give it a try, click the download link below.
9. IntelliJ IDEA – Multi-purpose IDE to Increase Productivity
IntelliJ IDEA is a premium development tool from JetBrains. It primarily focuses on maximizing developer productivity: it has a powerful integrated static code analyzer and makes use of its ergonomic design to deliver ease of use. IntelliJ has integrated version control which can use Git, CVS, or SVN clients, and it supports a number of languages, including JavaScript, PHP, HTML, and CSS. It offers features like advanced code completion and can suggest hints for class names, methods, fields, and keywords according to the context. IntelliJ provides intelligent code assistance for many languages, like JavaScript, CSS, HTML, and SQL. This IDE is smart enough to anticipate your moves and automate repetitive programming tasks so that you can stay focused on the project. IntelliJ also improves your experience by automatically adding tools that are relevant to the context.
10. Eclipse – The Free JavaScript IDE for Web Developers
Eclipse is a proven tool for creating end-to-end web applications. It has been around for many years, with Java as its primary development language; however, it has a JSDT (JavaScript Development Tools) package to enable work on JavaScript projects. This plugin adds a new JS project type and perspective to the Eclipse IDE, along with some other features.
The JSDT package adds much the same set of features as found in the JDT (Java Development Tools). Some of the key features are auto-completion, syntax highlighting, code refactoring, templating, and debugging. Also note that the JSDT has seen many changes in Eclipse Neon, which added new features like Bower, NPM, a JSON editor, an ECMAScript 2015 (ES6) parser, Node.js support, and JS build tools (Gulp/Grunt).
Summary – 10 Best JavaScript IDEs for Web Development.
Picking the right IDE for productivity and experience should be a top priority for any programmer. With so many options out there, we have tried to review the ten best JavaScript IDEs. You can use any of them to work on complex projects, and you can also report your pain points so that the list keeps improving. So tell us: which IDE did you pick, or which one are you already using? Write to us via comments or email. And if you liked the article, please share it.
Best, TechBeamers.
What about Chrome DevTools with workspaces? None of these IDEs lets you edit functions on the fly; Chrome DevTools can do that.
http://www.techbeamers.com/best-javascript-ide-web-development/
coq-of-ocaml Documentation on Aim coq-of-ocaml aims to enable formal verification of OCaml programs 🦄. The more you prove, the happier you are. By transforming OCaml code into similar Coq programs, it is possible to prove arbitrarily complex properties using the existing power of Coq. The sweet spot of coq-of-ocaml is purely functional and monadic programs. Side-effects outside of a monad, like references, and advanced features like object-oriented programming, may never be supported. By sticking to the supported subset of OCaml, you should be able to import millions of lines of code to Coq and write proofs at large. Running coq-of-ocaml after each code change, you can make sure that your proofs are still valid. The generated Coq code is designed to be stable, with no generated variable names for example. We recommend organizing your proof files as you would organize your unit-test files. The guiding idea of coq-of-ocaml is TypeScript. Instead of bringing types to an untyped language, we are bringing proofs to an already typed language. The approach stays the same: finding the right sweet spot, using heuristics when needed, guiding the user with error messages. We use coq-of-ocaml at Tezos, a crypto-currency implemented in OCaml, in the hope to have near-zero bugs thanks to formal proofs. Tezos is currently one of the most advanced crypto-currencies, with smart contracts, proof-of-stake, encrypted transactions, and protocol upgrades. It aims to compete with Ethereum. Formal verification is claimed to be important for crypto-currencies as there are no central authorities to forbid bug exploits and a lot of money at stake. A Coq translation of the core components of Tezos is available in the project coq-tezos-of-ocaml. Protecting the money. There are still some open problems with coq-of-ocaml, like the axiom-free compilation of GADTs (an ongoing project). If you are willing to work on a particular project, please contact us by opening an issue in this repository. Example Start with the file main.ml 🐫: type 'a tree = | Leaf of 'a | Node of 'a tree * 'a tree let rec sum tree = match tree with | Leaf n -> n | Node (tree1, tree2) -> sum tree1 + sum tree2 Run: coq-of-ocaml main.ml Get a file Main.v 🦄: Require Import CoqOfOCaml.CoqOfOCaml. Require Import CoqOfOCaml.Settings. Inductive tree (a : Set) : Set := | Leaf : a -> tree a | Node : tree a -> tree a -> tree a. Arguments Leaf {_}. Arguments Node {_}. Fixpoint sum (tree : tree int) : int := match tree with | Leaf n => n | Node tree1 tree2 => Z.add (sum tree1) (sum tree2) end. You can now write proofs by induction over the sum function using Coq. To see how you can write proofs, you can simply look at the Coq documentation. Learning to write proofs is like learning a new programming paradigm. It can take time, but be worthwhile! Here is an example of proof: (** Definition of a tree with only positive integers *) Inductive positive : tree int -> Prop := | Positive_leaf : forall n, n > 0 -> positive (Leaf n) | Positive_node : forall tree1 tree2, positive tree1 -> positive tree2 -> positive (Node tree1 tree2). Require Import Coq.micromega.Lia. Lemma positive_plus n m : n > 0 -> m > 0 -> n + m > 0. lia. Qed. (** Proof that if a tree is positive, then its sum is positive too *) Fixpoint positive_sum (tree : tree int) (H : positive tree) : sum tree > 0. destruct tree; simpl; inversion H; trivial. apply positive_plus; now apply positive_sum. Qed. 
Install Using the OCaml package manager opam, run: opam install coq-of-ocaml Usage The basic command is: coq-of-ocaml file.ml You can start to experiment with the test files in tests/ or look at our online examples. coq-of-ocaml compiles the .ml or .mli files using Merlin to understand the dependencies of a project. One first needs to have a compiled project with a working configuration of Merlin. This is automatically the case if you use dune as a build system. Documentation You can read the documentation on the website of the project at. Supported the core of OCaml (functions, let bindings, pattern-matching,...) ✔️ type definitions (records, inductive types, synonyms, mutual types) ✔️ monadic programs ✔️ modules as namespaces ✔️ modules as polymorphic records (signatures, functors, first-class modules) ✔️ multiple-file projects (thanks to Merlin) ✔️ .mland .mlifiles ✔️ existential types (we use impredicative sets to avoid a universe explosion) ✔️ partial support of GADTs 🌊 partial support of polymorphic variants 🌊 partial support of extensible types 🌊 ignores side-effects outside of a monad ❌ no object-oriented programming ❌ Even in case of errors, we try to generate some Coq code along with an error message. The generated Coq code should be readable and with a size similar to the OCaml source. The generated code does not necessarily compile after a first try. This can be due to various errors, such as name collisions. Do not hesitate to fix these errors by updating the OCaml source accordingly. If you want more assistance, please contact us by opening an issue in this repository. Contribute If you want to contribute to the project, you can submit a pull-requests. Build with opam To install the current development version: opam pin add Build manually Clone the Git submodule for Merlin: git submodule init git submodule update Then read the coq-of-ocaml.opam file at the root of the project to know the dependencies to install and get the list of commands to build the project. License MIT (open-source software) sha256=9dfde155240061a70b803715609246d360605d887035070b160190e21f672dbd sha512=259a034fdff6cc0ca4af935eb07ec4dac7916a199452bec2bccbf5b6e26f6dfbcbd33ba53dfe63d0fc64a9c04763129e76ff6d04ebd53bb24e378b96457651ec
https://ocaml.org/p/coq-of-ocaml/2.5.2+4.13
With the recent news that Nest’s Revolv was going to shut down its services, I was happy to have chosen open-source for my home automation stack. I’ve been seeing posts crop up on Twitter and Reddit about wanting an “offline” solution (one that works even without the Internet, but still functions within a LAN), and decided to share my experiences with setting up a brand new Raspberry Pi 3 with the open-source Home Assistant platform. As a warning, I must say that I AM still new to this, so I apologize in advance for any errors, but I figured I would share my information as it may help a few people who run across the same issues I had. Hopefully this information doesn’t become too outdated too soon. An open-source home automation platform ensures that your data is safe, secure, and always available when you need it. You control the system, so you are not reliant on any third-party for services or data (which also means you have no one to blame but yourself if something goes wrong!). I picked Home Assistant partially to start learning more Python (I’m mostly a JVM guy), but I’ve heard great things about other frameworks such as OpenHAB. I chose the Pi 3 for its enhanced processor and built-in Wifi/Bluetooth capability. If you prefer slightly less manual configuration (especially for something like Z-Wave), then you may want to check out the Docker image for Home Assistant as well. Before starting setup, there are a few things you may need: – A Raspberry Pi 3 running Raspbian Jessie. The Raspberry Pi site has a great installation guide. – A micro SD card for the Pi. – A power charger/adapter for the Pi (I used an old cell phone charger). – Basic knowledge of Linux Bash commands. – Basic knowledge of YAML. – A mouse, keyboard, and HDMI display + cable (temporarily) – (Optional) Knowledge of Python if you plan on adding features. Also useful for debugging. If you don’t have all of these, I’d recommend spending the ~$50 to purchase the materials, and a few weekends to brush up on the above. To start off, I’m going to assume that you can boot into your Raspberry Pi and you have Raspbian Jessie running (Home Assistant recommends this version as it has Python 3.4 already installed). You should hopefully have a mouse, keyboard, and display to interface with it. The very first thing you’ll want to do is configure it for your locale and timezone. Next you can should change your password. Finally, we’ll connect to Wifi (if you’re not wired) and update the OS from the terminal. After all of these you may need to restart. Here is a cheatsheet: Ideally we would like to control the Pi from our main machine(s), rather than having to log into every time. If you don’t mind a dedicated “command station” for the Pi, then you can skip this part. First we’ll install the TightVNC server: To set up a password and make sure it works, you can run: Next up, we’ll set it up to run automatically on boot (otherwise we have to manually login to the Pi after each reboot): I’ve left a few lines in there that are commented out that seemed to cause issues (or were unnecessary) for me, but seemed like it may help in some configurations. As a note, you will probably want to configure the line that starts with “ExecStart” to match the geometry (resolution), depth (color depth), etc. that you may need. My main machine is 1920×1080, so I opted for a slightly smaller display for the VNC client. The “%i” parameter in this case is equal to the “:1” (the display option) that we will pass to the service. 
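For reference, a unit along the lines described above can look roughly like the sketch below; the file path /etc/systemd/system/vncserver@.service, the pi user, and the geometry and depth values are all assumptions to adjust for your own setup:

[Unit]
Description=TightVNC server for display %i
After=network.target

[Service]
Type=forking
User=pi
# Kill any stale server for this display, then start a fresh one.
ExecStartPre=-/usr/bin/vncserver -kill :%i
ExecStart=/usr/bin/vncserver :%i -geometry 1280x720 -depth 24
ExecStop=/usr/bin/vncserver -kill :%i

[Install]
WantedBy=multi-user.target

With something like this saved, "sudo systemctl daemon-reload" followed by "sudo systemctl enable vncserver@1.service" makes display :1 come up on boot, which is what the next step wires up.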
So our next steps are to allow that file to be executed on start: If you want to have a shared clipboard (so you can cut/copy/paste between your host machine and your Pi), then you need to install “autocutsel”: Then add this line to your “.vnc/xstartup”: The last step is to download the TightVNC client on your main machine, enter the IP and password, and connect! If you still have any issues, the Raspberry Pi site has a decent resource (although I could not seem to get their autostart instructions working on my Pi). Make sure to uncheck “Disable clipboard transfer” on your VNC client’s configuration to enable clipboard sharing. One thing we’ll be doing a lot with Home Assistant (at least initially), is editing the configuration file. Even with TightVNC, sometimes it’s easier to open up the files on your local machine. As a result, we will set up file sharing for the Pi so other machines on the network can edit the files. First we will want to install Samba: Then we’ll need to configure it by uncommenting/adding a couple of lines for Windows support, adding a block of text of our own, and adding a couple lines if we need symlink support: Symlink support is useful if you plan on having a single “Share” or “Public” folder, and linking (“ln -s src dest”) folders that you want to share into it. Lastly we will set a password: Then you can install/configure a client to access the files. If you have a Windows machine and configured the share for it, then your Pi’s files should show up under the “Network” setting. Now we can change the Pi’s files whenever we want! Ok, hopefully the previous steps weren’t (too) bad, and we’re now ready to dive into the fun stuff… installing our home automation framework! As usual, our first step is to install and run Home Assistant: Running the “hass –open-ui” command will generate a lot of files that will be useful (and also display an intro message), so be sure to do that even if you plan on automatically starting Home Assistant on boot. At this point, feel free to open up your hass page (on port 8123 of your Pi’s IP) and play around with it. Next we’ll set it to boot on start very similarly to how we did for TightVNC: Note that you may need to change the “ExecStart” to point at your hass install. Mine was in a different location than what Home Assistant had listed on their site. Finally, if you changed your username, then be sure that is what follows the “@” in the file name. And again we will need to configure it to autostart: You can call “sudo systemctl stop home-assistant@pi.service” to stop the service, or with “start” to boot it up again. “restart” is useful if you made changes and need to restart the server. Finally, you can run “sudo journalctl -f -u home-assistant@pi” to view a log of your Home Assistant service. I like to keep this in its own terminal for easy debugging through the VNC. If you have any issues, the Home Assistant site actually has some very good instructions for general setup and autostarting. At this point, you should have everything you need running! No devices/sensors are yet configured, but you can look over the “Components” page on Home Assistant to see what suits your needs. I’m personally a fan of Z-Wave as it does not require any third-party services. I’m excited about home automation and willing to share what knowledge I have. I am including part of a copy of my configuration (working as of Home Assistant 0.17.2). 
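What follows is only a hedged sketch of what such a configuration can look like, not the author's actual file: the entity names, the SMTP-based notifier, and the USB path are placeholders, and the schema shown is the one used by more recent Home Assistant releases, so details may differ from 0.17.x.

homeassistant:
  name: Home
  time_zone: America/Chicago

zwave:
  usb_path: /dev/ttyACM0        # Gen5 Z-Stick; the device path may differ

notify:
  - platform: smtp              # e.g. an email-to-SMS gateway
    name: sms
    server: smtp.example.com
    sender: pi@example.com
    recipient: 5551234567@txt.example.com

automation:
  - alias: "Washer finished"
    trigger:
      - platform: numeric_state
        entity_id: sensor.washer_power   # the energy switch's power reading
        below: 5
        for: "00:03:00"                  # draw stays low for a few minutes
    action:
      - service: notify.sms
        data:
          message: "The washer is done!"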
The only notable feature thus far is that I have it configured to send me a text when the washer/dryer is done, thanks to Google Voice, a Gen5 Z-Wave Z-Stick, and a couple of Z-Wave Energy Switches. It may not be perfect (and I should probably move it into a couple of scripts), but again, I'm eager to share and contribute to the open-source home automation community.
Igor Shults

One thought on "Setting Up the Raspberry Pi 3 for Home Assistant"

I am having a problem with running: sudo pip3 install homeassistant

    Traceback (most recent call last):
      File "/usr/bin/pip3", line 9, in load_entry_point('pip==1.5.6', 'console_scripts', 'pip3')() ['__name__'])
      File "/usr/lib/python3/dist-packages/pip/__init__.py", line 74, in from pip.vcs import git, mercurial, subversion, bazaar # noqa
      File "/usr/lib/python3/dist-packages/pip/vcs/mercurial.py", line 9, in from pip.download import path_to_url
      File "/usr/lib/python3/dist-packages/pip/download.py", line 25, in from requests.compat import IncompleteRead
    ImportError: cannot import name 'IncompleteRead'

Any help appreciated. Thanks

Looks like it may be an issue with your pip version. Maybe try: `sudo easy_install --upgrade pip` and then re-run the install command?

I hadn't thought about notifications for the dryer/washer, thanks!

great work, thank you very much
https://objectpartners.com/2016/04/12/setting-up-the-raspberry-pi-3-for-home-assistant/
Contents - View Code Without Source - Debug Code Without Source - View Code Signature - Manage Bookmarks - Extend Reflector - Filter Code - Filter Navigation Tree - Navigate to Code from Code - Navigate to Code from Documentation - Search Code - Copy Code - More… Part 2: Meat and Potatoes. Figure 1 .NET Reflector desktop layout. The Assembly Browser doubles as the navigation panel; other panels (those with white labels) are auto-populated based on its selection. The panels with black text are either independent of the assembly browser’s selection or not auto-populated. View Code Without Source Reflector Desktop: Load an assembly or executable, then drill down in the assembly list to the item of interest and select it. The source code for that item appears in the source code panel. The source code is, itself, instrumented with live links for code both within the same assembly and outside the assembly. Selecting any such item takes you to that item’s source code. VS Extension: Once you have loaded an assembly as a project reference, you have access to its source code in different ways. (1) In the .NET Reflector object browser (not the VS object browser!) drill down in the assembly list to the item of interest and select it. Press F12 (if you have enabled that shortcut key on the .NET Reflector menu) or double-click the item to materialize the source code in a new tab, just like any other source code file. (2) Select an item in your source code and press F12 (or select GoTo Definition on its context menu). Debug Code Without Source VS Extension: With Reflector’s Visual Studio plugin, you can examine, step into, and set breakpoints on code for which you do not have source files! This particular feature arguably provides Reflector’s most important benefit. Decompiled source code becomes available to you just as if the generated source code files were part of your project. Step into foreign code from breakpoints in your own code, or open up the foreign code from .NET Reflector’s object browser and set breakpoints directly in the foreign code. In the first part of this series I discussed both automatic and manual techniques for decompiling assemblies. Once that is done, stepping into third-party code and setting breakpoints is as straightforward as it is in your own code. For another perspective on this, I encourage you to review Reflector’s own walkthrough that provides an illustrated step-by-step guide. In many cases, debugging foreign code will be as simple as the walkthrough indicates. But there will sometimes be obstacles. According to Bart Read’s .NET Reflector doesn’t decompile some methods correctly, Reflector might fail to decompile a particular method or generate code that does not quite match the original for a number of reasons: - The code has been obfuscated. - The compiled IL could be decompiled to more than one source code construct with equivalent functionality. - The IL may look very different to the code produced by VB and C# compilers. This is particularly true for declarative languages.. - You haven’t selected the correct .NET Framework optimization level. - The method is an unmanaged method called using P/Invoke. One other source of difficulty lies with optimized assemblies, that you would typically get with commercial products. According to Clive Tong’s Debugging the debugging experience, optimized code can present a challenge. 
He provides a detailed description of his investigation into this issue (which I won’t repeat here), culminating in a simple recipe to strip away the optimization: - Set the environment variable COMPLUS_ZAPDISABLE to a value of 1 in the target process before the CLR starts. - Add an INI file next to the relevant assembly to ensure that optimization is turned down. The MSDN article Making an Image Easier to Debug provides further detail about this last step and provides this simple example: Assume you want to debug the assembly called MyApp.exe. Create an INI file-just an old-style (pre-.NET) configuration file format-and associate it with the assembly by giving it the same base file name and an ini extension (thus, MyApp.ini) and placing it in the same folder as MyApp.exe. The INI file should contain a section for .NET Framework Debugging Control with values for GenerateTrackingInfo and AllowOptimize, to wit: For more on this, Adam Driscoll provides a detailed post putting this recipe into practice. Looking forward to the imminent release of Reflector version 8, it gets even simpler to access decompiled code. If, for example, you are debugging an executable without source, Reflector generates decompiled code automatically when you hit an exception. Also, you will be able to click in the call stack and use it to navigate to the decompilation of any of the stack frames. Setting a breakpoint generates a PDB in the background. View Code Signature Reflector Desktop: Hover over the item of interest in the code panel; the code signature appears in a tooltip (Figure 2, top). Alternately, if you drill down to that object in the assembly list, the signature panel just below it will provide the same information (Figure 2, bottom). Of course, you have the signature visible in the code panel as well, but that can often be scrolled out of view. Figure 2 Hovering over an item in the decompiled code window (top) reveals the item’s signature. Alternately, drill down to that item and the signature panel (bottom) displays this information as well. VS Extension: Hover over the item of interest in a source code window; the code signature appears in a tooltip (note that this is standard Visual Studio behavior, not Reflector-specific). Manage Bookmarks Reflector Desktop: .NET Reflector has a bookmark manager similar to those you find in a web browser. - Create a bookmark either with the context menu on an item in the Assembly Browser (menu >> Bookmark) or with the keyboard (Ctrl+K). - View bookmarks by opening the bookmark manager from the menu (View >> Bookmarks), from the toolbar ( ), or from the keyboard (F2). - Go to a bookmark by selecting an item in the bookmark manager and then pressing return or just by double-clicking an item in the bookmark manager. Alternatively, Reflector also lets you create bookmarks to store and organize in your favorite browser or other bookmark manager. Create and load the clipboard with such an exportable bookmark either with the menu (Edit >> Copy Code Identifier) or with the keyboard (Ctrl+Alt+C) for the currently selected item in the assembly browser. You can then paste that URI into a bookmark manager of your choice. Upon activating such a bookmark, it launches .NET Reflector and automatically drills down to the target item. VS Extension: Use Visual Studio’s own bookmarking facilities; these are most easily accessed from Visual Studio’s Text Editor toolbar (Figure 3). Figure 3 Bookmark controls on Visual Studio’s Text Editor toolbar. 
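Returning for a moment to the debugger optimization recipe above: the body of that INI file is short. A sketch, using the MyApp example name from the recipe:

; MyApp.ini, placed next to MyApp.exe and sharing its base name
[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

Pair it with the COMPLUS_ZAPDISABLE=1 environment variable from the same recipe, and the JIT stops optimizing the image, which makes the decompiled source line up much better with what you step through.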
Extend Reflector Reflector Desktop: Reflector provides an easy way to extend its capabilities with Add-Ins. These are easy to wire up. Open the add-in manager (Tools >> Add-Ins) then select the add ( ) button to open a standard open file dialog. (Note: At the time of writing this dialog always opens in Reflector’s program directory and you immediately see a subdirectory named Addins. Ignore it. It’s best not to put your add-ins there. It’s for Red Gate-supplied add-ins. You can keep your add-ins anywhere you like. Just not there:-)). Select an add-in that you have downloaded (or built!) and select Open. The add-in is installed in Reflector and displayed on the add-in list-no restart is required. To remove an add-in, simply press the subtract ( ) button in the add-in manager. I recommend you collect all your add-ins in one directory so you can then register or unregister them with the add-in manager without having to hunt through different folders. To find add-ins, start with the add-in manager itself; it provides a button to take you directly to Red Gate’s recommended add-in gallery in your browser. The other major source of add-ins is on Codeplex. To create your own add-ins, start with An Introduction to Building .NET Reflector Add-ins to begin learning about .NET Reflector’s extensive add-in framework. Towards the end of that overview, you will find a further reading list to help you on your way. Filter Code Reflector Desktop: By default, you see all the code associated with the currently selected item, public or otherwise, when you decompile. But if you go to the visibility option (Tools >> Options >> Browser >> Visibility) you can change this All Items default to Public Items Only if desired. The typical scenario of examining a foreign library begins with entering through a call in your own code. So, selecting Public Items Only can reduce the clutter of what you are examining, at least initially. The second facility for managing clutter comes automatically with Reflector, no settings necessary. If you have selected a class (or type) in the assembly browser, the source panel displays the signatures of its properties and methods along with its fields and nested types.At the very bottom, though, is a link labeled Expand Methods that lets you view the entire class in the same scrolling pane. source panel summarizes the types (classes) in the namespace, and provides a live link labeled Expand Types at the bottom for similar convenience. VS Extension: Within Visual Studio you view files in their entirety. While there is no analogous filtering to that in the Reflector Desktop, Visual Studio provides its own outlining controls, allowing you to expand or contract any explicit code regions (demarcated with #region and #endregion) or implicit code regions (methods, types, namespaces, comment sections, etc.). Version 8 of Reflector will include searching and filtering in the Visual Studio extension. Filter Navigation Tree Reflector Desktop: By default, you see just the items that are local to the selected assembly when you drill down through the navigation tree in the assembly browser. But quite commonly a class will have inherited items as well. When you select a class in the assembly browser, you can then toggle the display of inherited members by context menu (menu >> Toggle Inherited Members) or keyboard (Ctrl+I). You can also change the default to display inherited members via the options dialog (Tools >> Options >> Browser >> Show inherited members). 
VS Extension: In Visual Studio you only have the items local to the assembly, i.e. inherited items are not displayed. Navigate to Code from Code Reflector Desktop: When you view decompiled code, almost every token is a hyperlink to other code! Figure 4 shows a very typical example. There are 29 non-punctuation tokens (if I counted correctly ) in the figure; the ones in green-17 of them-are active links, either to variables, properties, or methods. Some of these are in my code, some in the .NET framework code. You can click on any of them to take you immediately to that underlying code. If you hold down Ctrl while clicking, the new item will conveniently open in a separate tab so you do not lose your original context. Figure 4 Decompiled code is profligately populated with hyperlinks for easy access to underlying types. VS Extension: From any open code file, place your active cursor within the bounds of any code token. You do not get the convenient uniform highlighting of active links as in the Reflector Desktop because Visual Studio does syntax highlighting with a different focus. Nonetheless, .NET Reflector should provide all the same navigable links within Visual Studio. One big difference is that clicking the item does not take you to it-that would be rather inconvenient in a text editor, after all! There are several possible keystrokes to navigate to the underlying source; open the context menu on an item to see some or all of these depending on your context and whether you have Resharper installed (Figure 5). Figure 5 Choices supplied by Visual Studio, Reflector, and Resharper. Your best bet is usually F12. You will note that two of these have keyboard shortcuts making them the most convenient to use. Though the Go to Decompiled Definition command is Reflector’s own command (as you can tell from the icon in the margin of the context menu), just pressing F12 usually works best. Be aware, though, that if you have Resharper it could occasionally get in the way here as well. Navigate to Code from Documentation Reflector Desktop: Besides navigating to other code elements from code you can also navigate from documentation if you satisfy these conditions: - You have the documentation file (library.XML) for the third-party library (library.DLL). - You installed the documentation file in the same directory as the DLL. - Reflector’s documentation display option is set to Formatted (which is the default). - The documentation includes well-formed references to code items. If you satisfy the first three conditions you will automatically see a documentation panel when you view source code. Both source and documentation panels are actually subpanels of the same main panel. If the fourth condition is satisfied when you scroll through the documentation you will see links that appear just as they would in a web browser. Click on these to navigate directly to that code. Just like navigating from code to code, if you hold down Ctrl while clicking a link in documentation, the new item will conveniently open in a separate tab so you do not lose your original context. VS Extension: Visual Studio displays only native documentation comments, not the resultant formatted documentation, so this capability is not applicable. Search Code Reflector Desktop: Open the search panel by menu (Tools >> Search), toolbar ( ), or keyboard (F3). 
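To make the fourth condition above concrete, here is a small, entirely hypothetical C# class whose documentation comments carry the kind of well-formed cref references that show up as links in Reflector's formatted documentation pane (build it with the compiler's /doc option so the XML file lands next to the DLL):

namespace Demo
{
    /// <summary>The result of a <see cref="Parser.Parse(System.String)"/> call.</summary>
    public class ParseResult
    {
        /// <summary>The raw text that was parsed.</summary>
        public string Text { get; set; }
    }

    /// <summary>A trivial parser, present only to illustrate cref links.</summary>
    public class Parser
    {
        /// <summary>
        /// Parses <paramref name="input"/> and wraps it in a <see cref="ParseResult"/>.
        /// See also <see cref="ParseResult.Text"/>.
        /// </summary>
        public ParseResult Parse(string input)
        {
            return new ParseResult { Text = input };
        }
    }
}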
Reflector’s powerful search facility allows you to search by type (class), member, or string/constant value; Figure 6 shows the result of searching for the same value with each of these filters. Simply select the desired filter at the upper right and the results change immediately. The button at the extreme right is a toggle for getting an exact match, and may be used with each of the three filters. Notice that the search is not just checking the name of the item-for example, searching for “Button” by type includes “CheckBox” in the results. Nor is it limited to checking just the displayed columns in the search panel. Rather, the search encompasses each item in its entirety: if you search for types, the search phrase appearing within any portion of the code comprising the type yields a positive match! Figure 6 Reflector’s search panel lets you filter by type, by member, or by string/constant. Finding that something exists is only half the battle, of course. Once you run a search, you can immediately jump to that item in source code by double-clicking on it. If you hold down Ctrl while double-clicking, the target item will conveniently open in a separate tab so you do not lose your original context. This search examines both plain assemblies as well as those loaded into Reflector in a zipped package. While Reflector’s basic search is useful, with an add-in you can jazz it up a bit. The Assembly Visualizer add-in collection from Denis Markelov provides several graphical perspectives of your assemblies but its assembly browser is its central focus. Once you install the add-in-see the discussion under Extending Reflector-you will have several new context menu entries to access different views provided by Assembly Visualizer. When you select an assembly node, the Browse Assembly command materializes, which takes you to Assembly Visualizer’s assembly browser. Note that Assembly Visualizer uses the same name for this window as Reflector’s own assembly browser because they essentially do the same thing at their core: select an item in either assembly browser and Reflector displays that item immediately in the decompiled code panel. But if you compare Reflector’s assembly browser from Figure 1 to Assembly Visualizer’s assembly browser in Figure 7, their appearances are quite different. Figure 7 Assembly Visualizer’s search panel provides sorting and filtering options down the left side; the right side accumulates each assembly you initiated via the context menu command. Reflector displays the assemblies hierarchically; Assembly Visualizer displays a flat list. But this flat list allows Assembly Visualizer’s search capability to fit right in-just type a phrase in the search box at the top and it immediately filters the list as the figure illustrates. Assembly Visualizer brings several other productivity enhancements as enumerated in the table; they more than make up for the drawback of being in a separate window. At the time of writing, Reflector version 8 is in development with beta versions already available. One of the key new features is a new search capability in Reflector’s own assembly browser! The search-related features are highlighted in the table and marked with “V8”. Table 1 Reflector’s assembly browser vs. Assembly Visualizer’s assembly browser. Figure 8 provides a glimpse of Reflector 8’s new search facility. One of the most interesting features is that it prunes the list-just like Assembly Visualizer-but does so while maintaining a hierarchical organization! 
I find that very useful to maintain a better sense of where I am in the code. Leaving perhaps the most critical advantage to last, Reflector’s enhanced search is available both in the desktop edition (assembly browser) and in the VS Extension (object browser)! Thus if you want to look up some framework class, you no longer have to walk through the browser to find it-just type in a full or partial word in Reflector’s object browser, double-click the item, and view the source! Figure 8 Reflector’s new search facility in version 8, directly filtering the navigation tree in the assembly browser. Copy Code Reflector Desktop: On the face of it seems silly to include the ability to copy as a feature-but that depends what you expect from a copy operation, and different people have different expectations! So Reflector tries to provide flexibility so that you can copy to get raw code or copy to do documentation. The first point to realize is that you do not need to select-then-copy as traditionally done with a copy operation. Simply right-click in the code panel and select Copy As from the context menu. In the samples below I have added the horizontal rules top and bottom to more easily delineate the fragments. Choosing Text yields raw code: Choosing Rich Text yields a code sample useful for documentation: Choosing HTML gives you another variation: Note that you get the code for the entire item-if it is a class, for example, you get all the methods expanded whether you have manually expanded them or not (via the Expand Methods link at the bottom of the code window). More… This concludes an examination of the first group of Reflector features. Refer to the companion wallchart to see the full list, and continue with the details in parts 3 and 4.
https://www.simple-talk.com/dotnet/.net-tools/.net-reflector-through-the-looking-glass-the-meat-and-potatoes/
Cocktail. Cocktail Sort and Bi-Directional Sorting Bubble sort typically passes through the unsorted list unidirectionally from left-to-right, bubbling the next highest item to the end of the list. Any turtles in the list move 1 position to the left as they slowly migrate to their rightful position at the beginning of the list. Cocktail sort eliminates turtles by sorting bidirectionally. The first pass in cocktail sort moves from left-to-right like bubble sort and moves the next highest item to the end of the list. The second pass moves in the opposite direction, from right-to-left, and moves the next lowest item (turtle) to the front of the list. By moving the turtles quickly to the front of the list on every other pass, cocktail sort hopes to improve the performance of bubble sort by reducing the overall number of passes. Let's illustrate this using the same unsorted list as comb sort: 6 0 9 3 7 5 4 1 8 2 On the first pass, cocktail sort will move from left-to-right and move the largest value, 9, to the end of the list like bubble sort. 6 0 9 3 7 5 4 1 8 2 -> 0 6 3 7 5 4 1 8 2 9 On the second pass, cocktail sort will move from right-to-left and move the smallest value, 1, to the front of the list. 0 6 3 7 5 4 1 8 2 9 -> 0 1 6 3 7 5 4 2 8 9 This bidirectional (cocktail shaker) movement will continue until no swaps occur in either direction. At this point the list is sorted. Cocktail Sort in Python Below is an implementation of cocktail sort in Python. Notice the bidirectional movement of the algorithm as it moves large values to the end of the list and small values to the beginning of the list in a repeated fashion until no swap occurs in either direction. def cocktail_sort(lst): """ Performs in-place Cocktail Sort. :param lst: lst of integers. :return: None >>> lst = [6, 0, 9, 3, 7, 5, 4, 1, 8, 2] >>> cocktail_sort(lst) >>> assert(lst == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) """ if lst is None or len(lst) < 2: return length = len(lst) j = 0 swap_occurred = True while swap_occurred: swap_occurred = False # Move Left-to-Right for i in range(length - 1 - j): if lst[i] > lst[i+1]: lst[i], lst[i+1] = lst[i+1], lst[i] swap_occurred = True if not swap_occurred: return swap_occurred = False # Move Right-to-Left for i in range(length - 1 - j, 0, -1): if lst[i] < lst[i-1]: lst[i], lst[i-1] = lst[i-1], lst[i] swap_occurred = True j += 1 Conclusion By understanding cocktail sort you better understand bubble sort. From a computer science and algorithm perspective you can better understand how and why computer scientists have tried to improve the bubble sort algorithm. In reality, one wouldn't use cocktail sort for performance reasons, but cocktail sort, comb sort, and bubble sort have uses in the computer science classroom to help students think algorithmically.
https://www.koderdojo.com/blog/cocktail-sort
Free Java Books Free Java Books Sams Teach Yourself Java 2 in 24 Hours As the author of computer books, I spend a lot..., and the accompanying Java Pet Store sample application, are part of the successful Java Free JSP download Books Free JSP download Books Java Servlets and JSP free download books... optimization. The Professional JSP Books The JDC Java Programming Books Java Programming Books  ... As the author of computer books, I spend a lot of time loitering in the computer..., Enterprise Edition. This book, and the accompanying Java Pet Store sample Str J2ME Books ; Free J2ME Books J2ME programming camp... a list of all the J2ME related books I know about. The computer industry is fast..., and sample code, this book provides the detailed roadmap needed to design Servlets Books ; Books : Java Servlet & JSP Cookbook... of lines of code, the Java Servlet and JSP Cookbook yields tips and techniques... leading free servlet/JSP engines- Apache Tomcat, the JSWDK, and the Java Web Server java/jsp code to download a video java/jsp code to download a video how can i download a video using jsp/servlet JSF Books . Books of Core java Server Faces... to Java developers working in J2SE with a JSP/Servlet engine like Tomcat, as well... to designate Java code that is invoked when forms are submitted Programming Books JSP Programming Books  ... server: Microsoft ASP, PHP3, Java servlets, and JavaServer Pages? (JSP[1... JSP is a key component of the Java 2 Platform Enterprise Edition (J2EE Sample\Practice project on JSP and Web services Sample\Practice project on JSP and Web services I wanted to implement\Practice a project using web services. where can I get these details Please visit the following link: Tomcat Books and Code Download Tomcat is an open source web server that processes JavaServer... a lot of trouble of making code iterations especially for pedagogical reasons. I do... the Java technology. It is a JSP and servlet container that can be attached to other JSP PDF books ; The free servlet and JSP Books Slides and exercises from Marty Hall's world...JSP PDF books Collection is jsp books in the pdf format EJB Books of useful sample code, this title is one of the best available guides to working... code as well as the XML descriptors needed to deploy each sample. (With EJB..., and then proceeds to show the actual code that does the job. His treatment of the Java Button - JSP-Servlet Download Button HI friends, This is my maiden question at this site. I am doing online banking project, in that i have "mini statement" link. And i want the Bank customers to DOWNLOAD the mini statement. Can any one XML Books ; Free Java XML Books... focuses on using Java to do XML development. Up until now many of the XML books I... Java XML Books   Project in jsp Project in jsp Hi, I'm doing MCA n have to do a project 'Attendance Consolidation' in JSP.I know basic java, but new to jsp. Is there any JSP source code available for reference...? pls help me MySQL Books Balling, finally arrived from Amazon. There?s very few tech books I can read from.... I usually use 'In A Nutshell' books the way one might use a dictionary or other... MySQL Books List of many project jsp project hi i am sabarish i am doing mini project in EJP..... front end is jsp and back end is SQL SERVER 2012 my project title is friendstouch where its looks like a social networking i need code for how to send mail to user... 
me code for that...its very urgent about project code - Java Beginners about project code Respected Sir/Mam, I need to develop an SMS... or Sample code which does this work? 7. Also suggest me which technology has... or programs. User interface: Web based. Technology: JSP ,servlets User interface  ... vision of a software project is dealing with one's coworkers and customers... colleagues for whom she is partially responsible. In this essay I code to upload and download a file - Java Beginners java code to upload and download a file Is their any java code to upload a file and download a file from databse, My requirement is how can i... and Download visit to : http Ajax Books Java+XML books under his belt and the Freemans are responsible for the excellent... Ajax Books AJAX - Asynchronous JavaScript and XML - some books and resource links These books : <... the project i build the path by adding external jars of ojdbc14.jar . But when i do... Indigo IDE and before deploying the project i build the path by adding external jars Java server Faces Books ; Store-Java Buy two books direct from O'Reilly and get the third free by using code OPC10 in our shopping cart. This deal includes books... Java server Faces Books   JSP code - JSP-Servlet JSP code Hi! Can somebody provide a line by line explanation of the following code. The code is used to upload and download an image. <... have successfully upload the file by the name of: Download /*Code Ada Books conventions, code will look like this and keywords will look like this. I will include... Ada Books  ... to Java and execution speeds similar to and sometimes exceeding C. gnat Need E-Books - JSP-Servlet Need E-Books Please i need a E-Book tutorial on PHP please kindly send this to me thanks How do i start to create Download Manager in Java - JSP-Servlet How do i start to create Download Manager in Java Can you tell me from where do i start to develop Download manager in java Sample program of JSP Sample program of JSP <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" " File Upload And download JSP Urgent - JSP-Servlet Ragavendran, i am sending code of file download in jsp. Plz implement this code...File Upload And download JSP Urgent Respected Sir/Madam, I... Download in JSP.. In the Admin window, There must be "Upload" provision where admin Accessing database from JSP ; This will create a table "books_details" in database "books" JSP Code The following code contains html for user interface & the JSP backend...;Java I/O Tim Ritchey   Sample program of JSP Input program in JSP <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" " jsp code jsp code i want health management system project code using jsp.its urgent java sample code - Java Beginners java sample code hai...... i need a full sample program fo inheritance......... Hi Friend, Please visit the following link: Thanks Download free java Download free java Hi, How to get Java for free? It is possible to download and install java on windows 64bit system? Thanks code - JSP-Servlet :// code How to write a java code for sending sms from internet. 
Hi friend, public class SMSClientDemo implements Runnable upload and download a file - Java Beginners upload and download a file how to upload a file into project folder in eclipse and how to download the same using jsp Hi Friend, Try the following code: 1)page.jsp: Display file upload form to the user Sample Code For ArrayLlist - Java Beginners Sample Code For ArrayLlist Hi Friend, I want sample code for adding single records and multiple records using ArrayList.Give sample code for Arraylist with realtime application. Hi I project - JSP-Servlet project i have to do a project in jsp... plz suggest me some good topics. its an mini project.. also mention some good projects in core java... reply urgently Hi friend, Please visit for more online examination system project in jsp online examination system project in jsp I am doing project in online examination system in java server pages. plz send me the simple source code in jsp for online examination system project guidance - JSP-Servlet project guidance i have to make a project on resume management... form, can anyone guide me through the project ? Hi maverick Here is the some free project available on our site u can visit and find the solution php download free php download free From where i download php free Upload and download file - JSP-Servlet Upload and download file What is JSP code to upload and download............ Now to download the word document file, try the following code...(); response.getOutputStream().flush(); %> Thanks I want to download a file...Hibernate sample code Hi Can any body tell me how to persist inner Java Certification Books ; Java Certification Books The book I followed... Java Certification Books  ... to several excellent resources for Java programmer certification and I have used most jsp jsp <p>hai sir, i have to send the data which i have in hrid table to data base and at the same time to the next form . i will give u the code.</p> <p>first form(project Manager.jsp) <%@ page library management system jsp project library management system jsp project i need a project of library management system using jsp/java bean XML Books XML Books We're pleased to provide free sample chapters... XML Books Processing XML with Java Welcome to Processing project query project query I am doing project in java using eclipse..My project is a web related one.In this how to set sms alert using Jsp code. pls help me Project source code Project source code i want source code for contact finder just like facebook provides or linkedin website on der home page just similar to dat...i m working on jsp project ..its a web based project which stores information need a Sample code - Development process need a Sample code i want run the python script in java code by using Runtime class and output should be formed like XML format.please help me. Advanced Thanks, Krishna File Download in jsp File Download in jsp file upload code is working can u plz provide me file download Struts Book - Popular Struts Books , the Jakarta-Tomcat JSP/servlet container, and much more. Struts Books... is a "hands-on" book filled with sample applications and code snippets you can reuse... Struts addresses this issue by combining Java Servlets, Java Server Pages (JSP Download file - JSP-Servlet Servlet download file from server I am looking for a Servlet download file example JSP Project JSP Project Register.html <html> <body > <form...; Process.jsp <%@ page language="java" %> <%@ page import="java.util.*"%> <%! 
%> <jsp:useBean id="formHandler" class="test.FormBean" scope project project how to code into jsp of forgot password Perl Programming Books -marked for this project, I will be able to contribute time to the project again.... When putting together the "Related Articles" code a couple of months ago, I ran... Perl Programming Books   project project sir i want a java major project in railway reservation plz help me and give a project source code with entire validation thank you Free PHP Books Free PHP Books  ...-strength enhancements in any project-no matter how large or complex. Their unique...-or migrating PHP 4 code-here are high-powered solutions you won't find anywhere else Sample java program Sample java program I want a sample program: to produce summary information on sales report. The program will input Data of Salesman Id, Item code, and number of cuestomer. Sales id is 5digit long, and items code range from
http://www.roseindia.net/tutorialhelp/comment/89816
CC-MAIN-2014-41
refinedweb
1,916
71.75
clock_gettime()

Get the current time of a clock

Synopsis:

#include <time.h>

int clock_gettime( clockid_t clock_id, struct timespec * tp );

Since:

BlackBerry 10.0.0

Arguments:

- clock_id - The ID of the clock whose time you want to get. This can be one of the standard clocks (for example, CLOCK_REALTIME or CLOCK_MONOTONIC), or:
  - A clock ID returned by clock_getcpuclockid() or pthread_getcpuclockid(), representing the amount of time the process or thread has spent actually running.
- tp - A pointer to a timespec structure where clock_gettime() can store the time. This function sets the members as follows:
  - tv_sec — the number of seconds since 1970.
  - tv_nsec — the number of nanoseconds expired in the current second. This value increases by some multiple of nanoseconds, based on the system clock's resolution.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The clock_gettime() function gets the current time of the clock specified by clock_id, and puts it into the buffer pointed to by tp.

Returns:

- 0 - Success.
- -1 - An error occurred (errno is set).

Errors:

- EFAULT - A fault occurred trying to access the buffers provided.
- EINVAL - Invalid clock_id.
- ESRCH - The process associated with this request doesn't exist.
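clock_gettime() is the standard POSIX call for reading these clocks, so a quick way to try them out is from Python, whose time module wraps the same function on POSIX systems. This sketch is not part of the original page:

import time

# CLOCK_MONOTONIC is the usual choice for measuring intervals, since it
# is not affected by changes to the wall-clock time.
start = time.clock_gettime(time.CLOCK_MONOTONIC)
time.sleep(0.25)
elapsed = time.clock_gettime(time.CLOCK_MONOTONIC) - start
print("elapsed: %.3f s" % elapsed)

# CLOCK_REALTIME corresponds to the tv_sec/tv_nsec pair: seconds (and
# nanoseconds) since 1970.
print("realtime:", time.clock_gettime(time.CLOCK_REALTIME))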
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/c/clock_gettime.html
CC-MAIN-2015-18
refinedweb
207
60.82
DDE Tutorial: exploring DDE

Introduction

DDE is a tool to make it easy to publish Debian information. In this tutorial we have a little tour to see how DDE works. If you just want to know a quick&dirty way to get at the data, just go to a DDE site (DDE has a list) and follow the instructions.

Prerequisites

You should install some Python packages:

# Note: you can use python-syck if you do not have python-yaml
apt-get install python-json python-yaml python-paste python-debianbts python-werkzeug python-cherrypy

... this should help get rid of messages like: "WARNING:static:Cannot find a way to decode the data in ..."

Downloading the code

git clone git://git.debian.org/debtags/dde.git
cd dde

Explore the information tree

./testdde --help
./testdde ls -l /
./testdde ls -l /apt
./testdde get /apt/packages/apt
./testdde get -t json /apt/packages/apt (requires python-json)
./testdde get -t yaml /apt/packages/apt (requires python-yaml or python-syck)

DDE exports data as a read-only tree that can be queried using paths. Given a tree path, you can get its value, list its child nodes or get some documentation about what it contains. Tree paths are provided by plugins loaded at program startup. It is quite easy to write new plugins that add new branches to the tree, exporting new information.

When you get the value of a path, you can choose the format of the result. Currently supported are json, yaml, csv and pickle, but more can be added.

Publish the information on the web

Start DDE as a HTTP server (requires python-paste or python-cherrypy):

./testdde --server

You can now point your browser at it. The DDE server shows an introduction about DDE, and allows you to navigate the tree. Once you find the information that you want, you can download it in a format of your choice:

append ?t=json to a URL to download the data in JSON format
append ?t=json&callback=name for JSONP
append ?t=csv for tabular data in CSV format
append ?t=pickle for Python pickled objects.

From the DDE main page you also have links to further information about all the supported data formats.

The internal web server is not the only way to publish information on the web: DDE is implemented as a WSGI application, and can be deployed as CGI, FastCGI, SCGI, mod_wsgi and pretty much any other way you can think of.

Explore JavaScript integration

Load the file ./complete.html into a browser while the DDE server is running and type the first few letters of a package name: after a short moment a popup should appear suggesting possible completions. Have a look at the source code to see how simple it can be to use DDE to add autocompletion to web forms.

Write software that gets data from DDE

This is how you get some data from DDE in python:

import json, urllib2

def dde_get(url):
    return json.read(urllib2.urlopen(url+"?t=json").read())

print dde_get("")

The same can be done with Yaml, or Pickle, or any other format. Other programming languages have it just as easy: you are welcome to add here examples in other languages.

If you decode Pickle data, make the decoder safe by using cPickle and deactivating find_global.
This is an example code that decodes a long stream of data from DDE:

import urllib2
import cPickle as pickle

def dde_get_stream(url):
    unpickler = pickle.Unpickler(urllib2.urlopen(url+"?t=pickle"))
    # Disallow arbitrary code execution
    unpickler.find_global = None
    while True:
        try:
            yield unpickler.load()
        except EOFError:
            break

for package in dde_get_stream(""):
    print repr(package)

Example in Perl:

use strict;
use warnings;

use JSON;
use LWP::Simple;
use Data::Dumper;

sub dde_get ($) {
    my ($url) = @_;
    my $json = get( $url . '?t=json' );
    my $perl = from_json( $json, { utf8 => 1 } );
    return $perl;
}

my $result = dde_get( '');
print Dumper $result;

See also:
https://wiki.debian.org/DDE/Tutorial
CC-MAIN-2016-40
refinedweb
650
63.7
NAME
log10, log10f, log10l - base-10 logarithmic function

SYNOPSIS
#include <math.h>

double log10(double x);
float log10f(float x);
long double log10l(long double x);

Link with -lm.

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

log10f(), log10l():
    || /* Since glibc 2.19: */ _DEFAULT_SOURCE
    || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE

DESCRIPTION
These functions return the base 10 logarithm of x.

RETURN VALUE
On success, these functions return the base 10 logarithm of x. For special cases, including where x is 0, 1, negative, infinity, or NaN, see log(3).

ERRORS
See math_error(7) for information on how to determine whether an error has occurred when calling these functions. For a discussion of the errors that can occur for these functions, see log(3).

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

SEE ALSO
clog10(3), exp10(3), log(3), log2(3), sqrt(3)

COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
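Although the manual page gives no example, the function is easy to exercise from Python, whose math module wraps the same C routine; this small sketch is not part of the page itself:

import math

for x in (1.0, 10.0, 1000.0, 2.5):
    print(x, math.log10(x))

# math.log10() raises ValueError for x <= 0, whereas the C functions
# report such special cases as described in log(3).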
https://manpages.debian.org/unstable/manpages-dev/log10l.3.en.html
CC-MAIN-2022-05
refinedweb
164
67.45
Phone Numbers Instead of URLs? 158 December writes "This story says Australian company Nascomms claims to be the first in the world to go online with numeric addressing [CT:TCP/IP uses numbers too, just not ones with area code ;)], in which telephone numbers are used in replace of the ubiquitous dot-com address. Interesting idea, but in the business case, I could much more easily guess then figuring out their phone number." Not the first? (Score:1) The 555-xxxx of ip's? (Score:1) Perhaps now movies will have to make sure they don't show a real IP address or hostname, like the 555 numbers on all the tv shows. Re:The 555-xxxx of ip's? (Score:1) All the joe wannabes would go home saying "Hey honey...look, I can ping that site from that movie." Re:*cough*IP address*cough* (Score:1) I think I'm going to go kill myself now. No, wait. I'll kill their VCs, then kill the patent office folks, *then* kill myself. If I'm going out, I might as well smite some evil along the way. *cough*IP address*cough* (Score:1) blah (Score:1) Big Brother locates evil citizen's SMTP & Server server. Big Brother points gun to server admins. Server admins give Big Brother evil citizen's e-mails. correct link + more (Score:1) Pah (Score:1) The problem with this, as with myriad other 'solutions', is that it assumes that anything is better than IP4 and DNS. 'Bollocks' I say. If people wanted numeric addresses IP4/6 is perfectly suitable; it's as easy to remember and IP address as it is a phone number. However, people don't want numbers; they want something they can remember. And if this is aimed at eliminating cybersquatting, what's going to happen when someone gets the phone number 7-11-7-11? How big a fight over 69-69-69 are we going to see among porn sites? To sum up: half-arsed doesn't even begin to describe this idea. Re:What a great idea! (Score:1) Names can be used to differentiate between (vrtual) webservers, while IP numbers can't. Re:What a great idea! (Score:1) Re:What a great idea! (Score:1) Growing incredulously... (Score:1) Silly Aussie. It's really too bad somebody with a great name like "Siobhan" said something so bizarre and stupid. Incredulous: Skeptical; disbelieving: incredulous of stories about flying saucers. Expressive of disbelief: an incredulous stare. So the figure is going to grow in such a manner that it can't believe itself. Wow. English is fun. Re:The useability of phone numbers (Score:1) It's not hard to use a cell phone, even someone else's, but it is hardly standard. names and numbers (Score:1) I agree that the proposition is completely backwards: we should be replacing phone numbers with urls, and not the ohter way around... how about l " "phone://voice.company.com/department/person.te ok. har har. Apparently they were thinking about portable phones and w@p services. Their point was that it is easier to tap numbers on a phone than words. which is true. but i think phones will evolve a bit in the next few microseconds to make such an idea unnecessary. IMHO, if you have screen realestate big enough to comfortably browse for information, there is a way to fit some kind of intelligent input system that would make it easy to type, at least an URL. T9 software is already pretty neat, and things will get better. if you are interested in typing efficiently in small spaces: T9 [t9.com] FITALY [twsolutions.com] BAT [worklink.net] FAQ [tifaq.org] so, i don't think alternative URL systems are necessary. rethinking cellphone input is, however. adrien cater boring.ch [boring.ch] Re:What a great idea! 
(Score:1) DNS not good enough anymore? (Score:1) There are some standard answers to this that we've heard a lot recently: 1) Get rid of top-level domains altogether Sure... But this won't make the battle for domains go away, right? Rather make it tougher. 2) Make use of higher-level domains more extensively Great idea, but we'll never convince corporations. If they come up with a great product/service/whatever they will want the domain-name for that as well as for their company, and a dozen more... 3) Make the top-level domains completely free Like alternative number one, right? Only shift the fight one step to the rigth. 4) And so on... What I would like to know is if someone is thinking of alternative ways to resolve names to addresses. No, I don't mean the alternative ("rouge") DNS:es, but completely new ways! Decentralized preferrably. Built around mechanisms similar to Freenet perhaps? Anyone? Oh great (Score:1) The whole idea is so ludicrously crap it's actually quite funny! It's really just another dereference on top of DNS, so where we now have DNS->IP, one of these addresses is NUMBER->DNS->IP. Now how pointless is that? Re:Phone numbers vs IP numbers (Score:1) Re:The 555-xxxx of ip's? (Score:1) Re:You're overlooking the obvious ... (Score:1) While many companies may have similar names, but dissimilar URL's, finding them online can be hard. If you have a brochure or manual with a service phone number (or any number, really), you just punch that in on the address line and viola! OK, and if you had the brochure/manual handy, what is stopping you from punching in the web address from it? This seems like a Dilbert cartoon! (Score:1) -- SecretAsianMan (54.5% Slashdot pure) Thats the whole point of using names (Score:1) Nitpicking for phreakers (Score:1) Re:maybe but 800-flowers.com... (Score:1) Some mobile phones allow you to do this with numbers in their internal phonebook. I had a friend who's office's voicemail system worked this way too. You would say "Victor Thompson" and it would figure out whose mailbox to use. He told me that some of his coworkers (with strangely spelled names) you had to guess a few times before you pronounced it the way it through. One guy nobody ever figured out how to say it right. (You could always fall back on some kind of numeric addressing scheme (like spell the name using the keypad, I'd guess)) Toyota's phone number. (Score:1) Guess they never heard of mnemonics (Score:1) There is a reason assembly language programmers prefer to use mnemonics instead of memorizing hex values (while memorizing the 6502 or 80386 opcodes is the hallmark of a true comp sci geek One will notice the phone numbers are broken down into small numbers that we can "chunk" quite nicely: 3 digits, 3 digits, 4 digits Most sane people don't memorize the first X digits of pi, we compress it down into one symbol. Same for any other "constant." Notice how we use url's to do the same thing. One domain name Re:Man, these ^%$# numbers are hard to remember! (Score:1) Better use IP addresses as phone numbers instead (Score:1) Seriously, using numbers instead of names is backwards and has been made outdated when DNS was invented long ago. urls as phone numbers (Score:1) I personally find this a pain, as it is easier for me to dial a number than the letters. OT: A little advice here... (Score:1) WTF? 
(Score:1) From the aricle: Web users just need to plug in the ISD and regional code, followed by the phone number of a company with a registered numeric address, and its Web page will be brought up. Oh, is that all? So I just have to figure out where the server is located that I'm connecting too, then I just have to find the phone number. Go to Qwestdex.com. Oh wait a sec, It was said before, but it needs to be restated. There are these numerical thingies called IP addresses. They're 32 bits long and every server on the internet has a unique one. Hmm... sounds like what they're proposing has already been done. It's a lot easier to find than to try and remember 64.208.32.100. I think they're trying to re-invent the wheel, but it's looking more like a Firestone "You'll die up there son, just like I did!" - Abe Simpson What? Are you Insane? (Score:1) Other Way Around (Score:1) Re:The 555-xxxx of ip's? (Score:1) --Ty Back to numbers?!? (Score:1) First we had phone numbers. Then we invented Vanity numbers because 1800-NUMBERS is just better to remember than 1800-686-2377. Then the WWW allowed us to use convienient names such as, and to make it even easier things like internet keywords came up because some URLs still were hard to remember. So for what reason should we ever step back to using numbers again? Any rational explanation? Hmm well, I'll surely enjoy people trying to remember my IPv6 address in a couple of years - come visit me at 12FB:8A6F:73E4:55E4:DEAD:C0DE ... maybe I'll make it sooo much easier to remember my site address by giving them ... sounds good... The idea got discussed on Slashdot... (Score:1) M Phone numbers for phones! Browsing w/ phones (Score:1) Patent? (Score:1) I would hope the patent offices would refuse this patent, based on the prior work of many online phone books where you can enter a phone number and get info on businesses, including their web page. This is far from a new idea. Re:What a great idea! (Score:1) Phone numbers are a step back from URLs (Score:1) The lack of semantic information in phone numbers has very real implications. For example, it's easy to distinguish between and because of the semantic information in each URL, but if the Sex Shoppe's phone number is 555-1234 and the phone number of Holy Angels Church is 555-1235, there's nothing in the phone number to tell you that the number you are about to dial to book your wedding, will, because of a single-digit transcription error, result in a rather interesting conversation with somebody whose expertise is likely to be far more useful during the honeymoon than during the ceremony. Prior art (Score:1) Works for me! It would put a quick stop to this "new area code" nonsense, too, given 26^10 possible "numbers" in the identifier space. /. just had a review of his Stranger In a Strange Land, it seems valid to bring it up here.) (Since Re:What a great idea! (Score:2) -- "Don't trolls get tired?" better url (Score:2) A better idea... (Score:2) maybe but 800-flowers.com... (Score:2) I have a theory that even phone numbers will go to the wayside in the 5-10 year range. Seriously, why should I have to remember Joe Blow's phone number? A good phone service would be a voice recognition which lets me say in effect that I want to talk to Joe Blow in Detroit. The system then would do the dirty work of resolving this into a phone number. 
Just as we think it is antiquated how our grandparents had to ask an operator to connect the wires to make a call, or use a party line, I believe remembering phone numbers will a thing of the past in the near future. I'm rambling just a little, but the point is that the future involves letting machines do what they do well... dealing with number. Let us humans do what we do well, interconnnecting concepts. Names have better hinges to concepts than numbers for most of us. The unusability of phones (Score:2) The book "The design of everyday things" talks a bit against this. Do you know how to transfer a call to another phone? Do you know how to do it if you are not in your company system? What are # and * for? The basic stuff can seem easy but it may get very hard. __ you can use letter for telephone number (Score:2) If you want to find good "name" for your telephone number, try [phonespell.org] it's pretty funny and works well. -- Region-specific addressing (Score:2) First of all, by the time I'd loaded the page, I'd already mentally composed the obligatory joke about how maybe this could be supplemented by a global directory system that would associate names with the numbers, so you wouldn't have to remember them; it could even be built into the applications, so you would just type the name and have it automatically look up the number... but I see it's been done too many times already. Your first point (Local businesses that just want to use their website to advertise a storefront rather than be an e-business) is interesting, though, in that this problem has been one of my biggest gripes all along. Most of us seem to agree that DNS is suffering under the "pollution" of the "com" TLD, which is even spilling over into "org" and "net". I think a large part of the problem is the fact that these global domains flatten everything into a single namespace, which is ironic since DNS' original solution to the namespace problem was to make it hierarchical. If a local business just wants to advertise in its area, why should it have to have a unique name in a flat global namespace? It then has to hope that its name has not been "taken" by a big company, a more-ambitious startup, another local business somewhere else, a porn site, etc. This leads to the crowdedness which has given us the "aridiculousnumberofwordsconcatenatedtogether.com Even among the startups, there are lots that are geographic-region-specific -- they even specifically advertise the fact that they concentrate their service in that area (supposedly making it better than the others whose efforts are spread so much thinner). And yet, they use the same global namespace, making their names even messier by mushing the region in. I've lived here all my life, but I'm not arrogant enough to think that the San Francisco Bay Area is the only area in the world near a bay who residents call it "the Bay Area", and yet we have "bayarea.com". I've always thought we should use the existing country and region domains for things that are region-specific. Then, "bayarea.com" could be "bayarea.ca.us", "bajobs.com" could be "jobs.sfba.ca.us", and a little mom-and-pop store in Berkeley could be "mom-and-pop.sfba.ca.us" or "mom-and-pop.berkeley.ca.us" without having to worry about collisions with "mom-and-pop.nyc.ny.us", etc. The argument is, of course, that people have been conditioned to think that "website" is synonymous with "something dot com", and would be afraid of anything with more than one dot. 
I don't know that that's true, though: people can understand phone numbers with area codes, postal addresses with ZIP codes, etc., and I think most would automatically recognize their region code as being analogous, so if local sites advertised with it, it would begin to seem natural. For some reason, though, they are not popular. What makes this phone-number thing interesting is that it puts a novel spin on the idea, which could poularize it, even though it's not really any better. David Gould What about area code splits? (Score:2) Probably not first (Score:2) Oh! Oh! I know, we could have name servers! (Score:2) Who thought this up? Was he pithed? uh (Score:2) I don't want a lot, I just want it all! Flame away, I have a hose! Re:WTF? (Score:2) On reading this I thought this was a damn silly idea, but at least with these numbers you stand a chance of dealing with a local company. However, its simpler just to enter in your browser and look search for the local shop. Re:Numbers? (Score:2) Yes, this is do-able, but it's not elegant, and it's not simple. I, for one, HATE that horrible tri-tone thing that you get when you misdial a number, particularly because I'm usually using a headset on my phones. If phone numbers were such a great system, why do we need a phone book? A computer provides a much richer user interface that a 12-key telephone. Why not use it? And people who are uncomfortable with computers aren't suddenly going to warm up to them because they can type in a telephone number. URLs aren't perfect, but they're a damn sight better than phone numbers. Any user who can't operate Yahoo or Google is unlikely to want to use the computer for much of anything anyhow. Nascomms is not the first in the world. (Score:2) Man, these ^%$# numbers are hard to remember! (Score:2) Maybe I could write a program to associate a name with the number.. You're overlooking the obvious ... (Score:2) It makes sense. No more close approximations of the company name, no more .com/.org/.net confusion, no more wondering if it's hyphenated or not. And since the phone number had to be unique by nature, you get the right place every time. Hell, even if it were only a re-direct to the regular URL, that would be something. Re:The useability of phone numbers (Score:2) Compare this to using a computer. We all hate being disconnected on the phone, but that almost never happens when you compare it to using a computer. I can talk on the phone, walk around, and do all sorts of other tasks at the same time without the phone's performance being affected. (My attention span, on the other hand...) The look and feel of phones may differ stylistically, but the procedure is always the same: pick up phone, dial numbers. Compare that with the ever-changing UI standards of computer OS's, and the navigation controls on web pages that often puzzle newbies. "Press 1 for function x, 2 for function y,..." may sound annoying, but (i) we all know how to do this and (ii) frequent users don't even need to listen to the prompts any more. Many Internet sites are realizing the ubiquity and relative reliability of the phone system. I can get my Yahoo! Mail by calling 1-800-MY-YAHOO. I can get weather forecasts from MIT by calling 1-888-573-TALK. Weather forecasts and a lot of other functions are available through TellMe (1-800-555-TELL). They're realizing that while the telephone isn't perfect, there is still a lot of functionality that it can carry out. Re:Region-specific addressing (Score:2) DNS is as hierarchical as people want to make it. 
Most of the problem is local businesses (especially in the USA) insisting of having second level The difference with phone numbers is that if a company wanted something like a +800 number they'd have to pay extra for it (and pay for their calls) even more than a national freephone numbers. Phones come as standard with geographic numbers appropriate to the place they are in, anything else is a chargable extra. Re:DNS not good enough anymore? (Score:2) Great idea, but we'll never convince corporations. If they come up with a great product/service/whatever they will want the domain-name for that as well as for their company, and a dozen more... The problem here is the registrars who don't know how to say "no" or "that's your nth domain so the price will be X*2^n" or even "proof that is a recognised trading name of your company please". Re:What about both? (Score:2) You'd first need to convince the ISC to update bind to include the '+' character in the valid character set for DNS. Also how do you cope with organisations which print their phone numbers in all sorts of strange ways. If you could rely upon the full number to be printed without spaces then it just might be workable. I met this guy... (Score:2) Totally mad. Quite a pleasant guy to share a beer with, but... What would 1800Flowers.com do? (Score:2) Then, they were (and are) 1800Flowers.com. Would they be 1-800-Flowers again? And would my numpad get phone-like letters applied to the keys? And would it be switched? But I digress. Great (Score:2) Ugh. alpha-numerics (Score:2) Re:This has been done before (Score:2) They use weird names for their nameservers that are like the ones you mentioned: Re:Numbers? (Score:2) The goal is to advance technology... not to regress to a bad system. Unfortunately, this is the goal of the Geek, not the goal of business. The goal of business is to make money. This is commonly forgotten by geeks, and hence people [fuckedcompany.com] point and laugh at the non-bisiness business model most web companies use. Lets consider the technological make-up of the world today: 1. We have the 3rd world. Yes, these people are untapped "web" resources, but the reality is that a TRS-80 is considered high-tech for some of them. Whole towns don't have power, running water, and/or phones. Do you think that these people really care about reverse lookup DNS tables? These people are off of the eBusiness radar. 2. On the other extreme We have the Uber-geeks. These people are all about making everybodies lives easier - as long as they hold the secret knowledge as to how everything works. Why pay for a phone call, when you can email them? Why email them when you can ICQ them? why ICQ them when you can use voice over ip for free? 3. Then there are people like my inlaws... Who have internet access and a slough of questions, but don't care to listen. They waited patiently for 3 months until I was around over thanksgiving to remove a stuck CD becuase they didn't feel comfortable with a paperclip. Anyway, they can enter in a URL from the TV screen, but when toyota doesn't say on their advertisement - they don't think to type it in. Some day they may figure it out - but I figure I'll have a few more trips out there before then... 4. People who aren't don't know anything about computers at all. There are actually a few people in business that still don't use a computer - and not all of them are auto mechanics. A lot of them are older, and very set in their ways. A phone number is a familiar item. 
They can punch it in and they know what they can expect to hear - someone from that company on the other end of the line. They can type it in on a computer, and amazingly it would take them to the website. Not only have you adapted current technology now to a familiar frame, but you have actively encouraged someone else to see your business model. This are the largest untapped but available customer base for online companies PTFMA. In addition, a telephone crossreference fixes many problems with domain squatters, two companies with similar names/different prodcuts, and provides most of america with an existing directory structure to find the company they are looking for (the Yellow Pages). Lastly, I personally prefer to shop locally when I can't get a better deal elsewhere. I could run through (617) business lines for the product I wanted. This would allow me to shop online - and have the convenience of doing so - but put the company close enough that if it broke, I could easily return it or exchange it. anyways... phone numbers aren't a bad system - just one you wouldn't think to use given the current direction of technology. I however, see where this could be useful - and hence, profitable. - Reminds me of Realnames (Score:2) Sounds similar to Bango (Score:2) (Score:2) Standard trademark laws would be in place. Coca-Cola probably has rights to be the only Coca-Cola, but Smith Consulting would need city information to help you whittle it down. Then, like a memory-dial phone, you would bookmark your most commonly visited sites and forget the number. It doesn't have to be a phone number. It just has to be a unique number, like, oh say, your IP address. The naming system sounds good until you try to pick a unique name and let your business rely on people spelling it right, or working out your messy attempt at a unique name. If you were Smith Consulting what would you use? smith-cons.com? What did you get the first time you tried to find Via, id, or Diamond? These are national companies. Now try to find a local carpentry service. Also, it can be embarrasing to make a messy URL when they are suppose to be so obvious. This has been done before (Score:2) Re:IETF have already done this (Score:2) Correct Link (Score:2) Useful... (Score:2) On the other hand, for those using Altavista, or Lycos, or what have you, or who don't know how to properly refine a query, could have more difficulty. This could be a real boon for those people, as now you can simply look them up in the yellow pages. Now whether or not we WANT those people to be able to use the Internet more easily is a question that goes beyond the scope of this post... Hmmm (Score:2) Devil's Advocate (Score:2) I hardly think it merits a patent, but it does offer advantages over "" and looks cleaner than "". My mom is not a Karma whore! Re:What a great idea! (Score:2) An IPv4 address is a sequence of 4 numbers (bytes), grouped into sets of one. Mike "I would kill everyone in this room for a drop of sweet beer." Numbers used in proposal for private .nl domains (Score:2) I'm still waiting for the first person to send me such a 'cool new domain name'. Most Dutch people who wanted a domain simply got a .org/.net instead or asked a friend at some company to register it for them. Re:better url (Score:2) Not really. Thd DNS system have been badly abused by the Internet rush. Those names were not supposed to be seen by humans, but were handles to specific resources. The real DNS was supposed to be something like Google, or Real Names. 
Their idea is not as dumb as I originally thought. Phone numbers are already a way to uniquely indentify a service. Those numbers are ubiquitous. With more and more cell phones getting internet access, the thing may make a bit of sense. You try to reach your friend, but don't find him. You can directly go to its website, or whatever. Or someone calls you. With caller-id, you can have more informations. Or you want to a professional. You get its number from a phonebook, and you can get to its web-site to check its hourly rates, or whatever. Sure, it is dumb because if you were on a computer, you have probably got its number via the web, and if he had a webpage, it would probably been indicated next to the number. All in all, it is not 100% stupid. Just 90%... Cheers, --fred Replacing phone numbers by URLs (Score:2) Bango.net (Score:2) Bango.net [bango.net] They were selling them off at some stupid rates for small numbers - and for some reason they've dished out really attractive numbers to local companies - e.g. 12345 is a regional newspaper. Great! (Score:2) Wow, amazing capital expense (Score:2) Your IP address can reveal your location! (Score:2) So remember this when you're browsing. The websites can calculate your physical position right down to a 2-mile radius. That's more than close enough for an ICBM! Big Brother is watching you, and he doesn't like what he sees... Considering the number of dead links... (Score:2) Phone numbers vs IP numbers (Score:2) Casting one function for another? (Score:2) Score: -1, Redundant. a number-based scheme might be good (Score:2) Separating the two concerns would give you a system where you have a two step resolution: human readable to location independent numbers, followed by location independent numbers to IP addresses. Such a split gives you a lot more flexibility on mapping the human readable names because you can change the mapping without having all the web pages that point to the affected hosts go bad. Of course, these location independent numbers should not be phone numbers, since phone numbers do, in fact, change. Numbers? (Score:3) The goal is to advance technology... not to regress to a bad system. The useability of phone numbers (Score:3) Corrected link: p hone_numbers_replace_urls__1.html [yahoo.com] I seem to recall an article involving the relative difficulty of getting to a web site as compared to dialing a telephone. At the time, "web tone" was a hot buzzword. Many companies were using it to describe what they saw as the ideal user experience for the web - it should work as easily as a telephone. Except that when you think about it, telephones are pretty damn hard to work. Buy a cheap US$20 phone in a department store. Plug it in. To dial, you have to lift the receiver, wait for the dial tone, then punch in this obscure sequence of ten (in the US, anyway) digits. If you don't know what they are, you have to look them up in a book, or call another number to ask someone. If you misdial, you run the risk of bothering some shmuck in his living room. Etc. The point of the article being, phones aren't as easy to use as everyone seems to give them credit for. We've just been using them since we were kids. Come to think of it - no kid I know who's been using the web for any period of time thinks it needs to be that much easier to use. And of course, this neglects an obvious question: what happens if you have to change your phone number? 
TPC.INT did it in 1993 (Score:3) The system, called tpc.int (which was only about the fifth or sixth Shortly after it was launched using the awkward backwards phone number with every digit separated by a dot syntax, someone (and his name escapes me for the moment) hacked up a special DNS zone to eliminate the extraneous dots and reverse the number. This system is still in use today at tpc.int [tpc.int], where you can already address tpc.int servers by phone number the same way you have for over seven years. If you've got some spare cycles and a lightly used phone line lying around, and unmetered local access, you should consider setting up a tpc.int server for your area. It's fun, and you'll learn a lot about MIME, mail processing, and neat DNS tricks in the process... New browser dialog boxes. (Score:3) --- Dammit... (Score:3) crap. ahh crap this is a net number... ---- It -Should- be the other way around... (Score:3) -- Correct link (Score:3) I can see it now... (screen goes wavy) (Score:5) Wife: Hon? It's still busy... Husband: *snicker* Keeeeep tryin'. Call again. Wife: (dialing aloud) 1-2-7, 0, 0, 1... damn! Busy! Husband: Dear, I gave you the number earlier. You're the one who wanted to eat at Chez Expensif. Why didn't you make reservations? Wife: I've been dialing ALL DAY!!! IETF have already done this (Score:5) What a great idea! (Score:5) Seriously, wouldn't the easiest way to accomplish this be to just turn off DNS --Ty Incredulous? (Score:5) "We expect that figure to grow incredulously over the next few months," Nacomms general manager Siobhan Dooley told ZDNet. I, for one, am certainly incredulous about the growth prospects.
https://slashdot.org/story/00/11/29/1411242/phone-numbers-instead-of-urls
CC-MAIN-2018-09
refinedweb
5,459
73.88
What is the best approach for making a VB 6 application communicate with my VC# .NET application? Can I set the focus to a specific input control in Page_Load without using JavaScript? Developing an online staff orientation site How can I get the MAC address of a client with ASP? Creating a generic search facility Questions regarding XML for a decision making tool Creating applications to access data via mobile phones Is it possible to schedule SQLs written for a paradox database? Is there a nifty way to get the country/origin of users accessing my Web site? How to enable a line-by-line code walkthrough Creating VB scripts that can delete files on the hard drive Problems using VB .NET with an Oracle database Enabling the storage of an audio file in a XML file How to display the cmd window when running C# programs Managing multiple languages How to set the ADODC connection to an Access datagrid What's the best approach for enumerating network resources? Changing a C++ Makefile project to a Utility project Using VS.NET to develop with ACT database No extensibility folders Changing a datagrid column to read-only=false based on variable How can I display an active image from webcam using Visual C++ code? Resizing controls on a form How can I configure Visual Studio.NET 2002 to use the .NET Framework 1.1? Problem displaying Web controls Extracting data from Word 6 to populate database Can I open and use the .NET command prompt in C#? If so, how? What does the PostBack method do? Managing C# concurrency problems: C# .NET database applications How can I make a connection to a Paradox password protected database? Sharing VS 2002 files with a friend with 2003 Mysterious error when updating a DataTable-based datagrid Accessing QB files with VB.NET Making VB.NET code executable I'm in the 'debugger user' group, but my program won't debug. Why? Converting VB6 adodb.recordset to VB.NET Running VS.NET 2002/2003 projects side by side in Crystal VS.NET compatibility issues with Small Business Server 2003 Random errors in e-mails from an ASP.NET application What is the best/simplest decision structure for checkboxes? The easiest way to send e-mails from VS.NET Sharing VB .NET objects without serialization methods Drag and drop in an ASP.NET treeview Error message in global.asax application Determining whether a user downloaded a file I have an Access Database that I want to display in a Listview. How do I run a DOS command in .NET and place the results in a text file with specified path? How can I add my own shell namespace extensions to the Windows Explorer? Accessing a control on the server side How would I dynamically load HTML controls from the SQL Server stored procedures? Is there any way to open the called child form in the same MDI form where the calling form resides?? Does it make sense to upgrade to Visual Studio 2003 .NET if I am still developing in COM? How can I make my Crystal Report deployable with Visual Basic .Net? How do I (or can I) use Visual Studio.NET to develop with my Access Database? How do I call a dll file from Visual Studio that is not a COM controller or ActiveX? Calling a .NET application from another .NET application Implementing a ComboBox control in ASP.NET What databases are supported in SQLXML? Deploying an ASP.NET application that needs special folder Is there an equivalent to applets in C#? 
Need an ASP.NET page with a text box and submit button to pull records Hiding columns with database ID data Opening the command prompt window programmatically Problem with Paradox Connection String How do I talk through the serial port using VB.NET? Printing code 39 on A4 paper with VB.NET How do I write multiple text lines to a rich text box? Where are the Samples.vs macros? Page display error for Dynamic Help links Entering a blank date in the DateTimePicker control Can my application run on a computer without VS.NET installed? When I try to install VS.NET, I get 'internal error 2753.' I'm looking for an application to send faxes from VB.NET. Creating a standalone executable without installing other programs? How can VS.NET recognize time between clicks? How can an ASP.NET Web app collect info and send it to another site via SSL using an HTTPS POST? Can a server wait for n connection by sockets and threads? Why GAC and adding a third-arty DLL to GAC? More
https://searchwindevelopment.techtarget.com/sitemap/Answer?indexNo=2
CC-MAIN-2019-39
refinedweb
778
68.26
NAME
getsockname - get socket name

SYNOPSIS
#include <sys/socket.h>

int getsockname(int s, struct sockaddr *name, socklen_t *namelen);

DESCRIPTION
Getsockname returns the current name for the specified socket. The namelen parameter should be initialized to indicate the amount of space pointed to by name. On return it contains the actual size of the name returned (in bytes).

RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

ERRORS

CONFORMING TO
SVr4, 4.4BSD (the getsockname function call appeared in 4.2BSD). SVr4 documents additional ENOMEM and ENOSR error codes.

SEE ALSO
bind(2), socket(2)
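To see what the call returns in practice, Python's socket objects expose the same operation as a method; a small sketch (the printed port number is only an example):

import socket

# Bind to an ephemeral port, then ask which local address and port the
# kernel actually assigned; this is the same data the C call writes
# through the name/namelen arguments.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
print(s.getsockname())   # e.g. ('127.0.0.1', 40321)
s.close()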
http://linux.about.com/library/cmd/blcmdl2_getsockname.htm
crawl-003
refinedweb
118
52.87
Is it possible to make a Connections in a QmlTimer component ?

Hi,
I'm trying to connect a signal from an Item to a Timer using Connections. My code fails when I try to run it with the following error:

PathToMyProject/main.qml:21 Cannot assign to non-existent default property

If I replace the Timer component by an Item component everything works fine. Here is my code:

import QtQuick 2.12
import QtQuick.Controls 2.5
import QtQuick.Window 2.2
import QtQml 2.12

ApplicationWindow{
    id: root
    width: 500
    height: 500
    visible: true

    Item {
        id: itemId
        signal mySignal()
    }

    Timer { // Replacing the Timer component by an Item component does not trigger any error
        id: timerId
        property int nbr: 13

        Connections // This is the line 21
        {
            target: itemId
            onMySignal: timerId.nbr = 42;
        }
    }

    Column {
        Button { text: "emit signal"; onClicked: itemId.mySignal(); }
        Button { text: "display nbr"; onClicked: console.log(timerId.nbr); }
    }
}

It's like it is not possible to assign a Connections to a Timer component. Can someone explain to me if I'm right, and why?
Thanks a lot ^^

hi @Moisi said in Is it possible to make a Connections in a QmlTimer component ?:

Cannot assign to non-existent default property

Don't define the Connections inside the Timer.

Timer {
    id: timerId
    property int nbr: 13
}

Connections {
    target: itemId
    onMySignal: timerId.nbr = 42;
}

In this context you don't even need Connections:

Item {
    id: itemId
    signal mySignal()
    onMySignal : timerId.nbr = 42;
}

Hi,
Thank you for your answer. Yes, this is a solution. ^^
But I'm still curious about the reason why I can not define the Connections inside the Timer.

Oh, yes ! You're right ! Thank you ! ^^
https://forum.qt.io/topic/117812/is-it-possible-to-make-a-connections-in-a-qmltimer-component/3
CC-MAIN-2022-40
refinedweb
276
67.25
Java.io.PrintStream.print() Method

Description

The java.io.PrintStream.print() method prints a long integer. The string produced by String.valueOf(long) is translated into bytes according to the platform's default character encoding, and these bytes are written in exactly the manner of the write(int) method.

Declaration

Following is the declaration for the java.io.PrintStream.print() method:

public void print(long l)

Parameters

l -- The long to be printed

Return Value

This method does not return a value.

Exception

NA

Example

The following example shows the usage of the java.io.PrintStream.print() method.

package com.tutorialspoint;

import java.io.*;

public class PrintStreamDemo {

   public static void main(String[] args) {
      long x = 50000000000l;

      try {
         // create printstream object
         PrintStream ps = new PrintStream(System.out);

         // print long
         ps.print(x);

         // flush the stream
         ps.flush();
      } catch (Exception ex) {
         ex.printStackTrace();
      }
   }
}

Let us compile and run the above program; this will produce the following result:

50000000000
http://www.tutorialspoint.com/java/io/printstream_print_long.htm
CC-MAIN-2014-15
refinedweb
154
53.07
[Date Index] [Thread Index] [Author Index] Re: Re: looping - To: mathgroup at smc.vnet.net - Subject: [mg106861] Re: [mg106766] Re: [mg106704] looping - From: DrMajorBob <btreat1 at austin.rr.com> - Date: Mon, 25 Jan 2010 05:07:38 -0500 (EST) - References: <201001210955.EAA16523@smc.vnet.net> - Reply-to: drmajorbob at yahoo.com Huh! That's an amazing little secret trick you have, there! I wonder, though, if it could go away in a new version. Bobby On Sun, 24 Jan 2010 04:11:07 -0600, Leonid Shifrin <lshifr at gmail.com> wrote: > Bobby, > > this is one of the nice (IMO) features of local variables in Module. By > default, they have the attribute Temporary, which means that they are > destroyed once the execution exits Module. However, if some global > symbols > refer to them (like the functions you've mentioned), they are not > destroyed, > but kept in a symbol table. I use this trick all the time - this allows > for > example to share local variables (and functions) between several > functions, > which enables us to implement something similar to classes in OOP (If I > remember corerctly, this idea has been fully exploited by Roman Maeder in > his implementation of OOP in Mathematica - classes.m package). > > Have a look at my post in this thread, if you will > > > > there I implement the <pair> data type using this idea. In this thread: > > > > I used this trick to implement traversals of nested directory structure > with > possible directory skips which can be set at run-time based on a skip > function provided by the user. > > One of the many other ways to use this which I find useful is to > propagate > exceptions of a given type without explicitly making the exception tag > global. One particular such use I discussed in the thread: > > > > Generally, this is a good option when you want to make an essentially > global variable available to a given set of functions but not to others - > you make it like > > Module[{youvar}, > > f1[args]:=(body-referring-to-yourvar) > > f2[args]:=(body-referring-to-yourvar) > > ... > ]; > > This is a cheap way to introduce namespaces without making a full-blown > package. This allows us to use some of the nice OOP-like stuff (such as > private variables / methods) without the need to employ a general OOP > machinery (that is, when we just need nice encapsulation but not so much > OO-style polymorphism / inheritance). As long as the user never uses > variables with a dollar sign in their names (which can possibly collide > with > those generated by Module), this should be safe enough. > > One use of this is to make functions with "memory", similar to static > local > variables in C functions - some of the function's variables remember > their > values in between function calls. 
For example, the following function > will > produce the next Fibonacci number on demand, and yet it will be as fast > as > the iterative loop implementation for generation of consecutive Fibonacci > numbers (since Module is invoked only once, when the function is > defined): > > In[1]:= Module[{prev, prevprev, this}, > reset[] := (prev = 1; prevprev = 1); > reset[]; > nextFib[] := (this = prev + prevprev; prevprev = prev; prev = this)]; > > > In[2]:= > reset[]; > Table[nextFib[], {1000}]; // Timing > > Out[3]= {0.01, Null} > > In some cases, this can also improve performance, since some of the local > variables in a function can be made "semi-global" by this trick which may > eliminate the need of Module invocation in each function call, and the > associated overhead: > > In[4]:= > Clear[f, f1]; > f[x_, y_, z_] := Module[{xl = x, yl = y, zl = z}, (xl + yl + zl)^2]; > > In[6]:= > Module[{xl, yl, zl}, > f1[x_, y_, z_] := (xl = x; yl = y; zl = z; (xl + yl + zl)^2)]; > > > In[8]:= test = RandomInteger[100, {10000, 3}]; > > In[9]:= f @@@ test; // Timing > > Out[9]= {0.43, Null} > > In[10]:= f1 @@@ test; // Timing > > Out[10]= {0.15, Null} > > > I generally find this technique very useful, and also I think that it has > not been fully exploited (at least I did not fully exploit it yet). For > example, you can do nice things when you couple it with Dynamic > functionality, since Dynamic happens to work also on these > Module-generated > variables. > > It may however have some garbage-collection issues (I discussed this in > the > first of the threads I mentioned above), since once you abandon such > local > variables/functions, they will not be automatically garbage-collected by > Mathematica and can soak up memory (I have been bitten by this a few > times). > Of course, this can be dealt with as well, it's just not automatic. > > Regards, > Leonid > > > > > > > On Sat, Jan 23, 2010 at 5:18 PM, DrMajorBob <btreat1 at austin.rr.com> > wrote: > >> Thanks! I've just noticed something I don't understand, however. >> >> How can displayPanel[] be used outside the makeScorePanel function, >> when it >> uses studentInfo, text, buttons, and class -- all of which are local to >> the >> Module?? >> >> Bobby >> >> >> On Sat, 23 Jan 2010 14:57:25 -0600, Leonid Shifrin <lshifr at gmail.com> >> wrote: >> >>: >>> >>> >>>>>> help >>>>>> >>>>>> >>>> >>>> >> >> -- >> DrMajorBob at yahoo.com >> -- DrMajorBob at yahoo.com
http://forums.wolfram.com/mathgroup/archive/2010/Jan/msg00801.html
CC-MAIN-2020-24
refinedweb
833
56.59
How to: Add a Control to the Toolbox

The Silverlight Designer for Visual Studio makes it easy to use third-party controls in your Silverlight applications. This topic shows how to add a control in a third-party assembly to the Toolbox and use it in a Silverlight application.

To use a third-party Silverlight control in your application

1. Open the XAML file for your project's main window in the Silverlight Designer. For example, you might open MainPage.xaml or UserControl1.xaml.
2. In the Toolbox, select the tab where you want to add the control.
3. Right-click the Toolbox and select Choose Items from the shortcut menu. The Choose Toolbox Items dialog box opens. The following illustration shows the Choose Toolbox Items dialog box.
4. Click the Silverlight Components tab.
5. In the list, locate the control you want to use. If the control you want to use does not appear in the list, click the Browse button. In the Open dialog box, navigate to the assembly that contains the control you want to use. Select the assembly and click Open. Any controls contained in the assembly appear in the Choose Toolbox Items dialog box.
6. Add a check mark next to the control you want to add, and then click OK. The selected control appears at the bottom of the selected tab in the Toolbox.
7. From the Toolbox, drag the control onto the design surface. The selected control appears on the design surface. Also, the control's assembly is added as a project reference, and an XML namespace mapping is added to the XAML file.
https://msdn.microsoft.com/en-us/library/ff462023(v=vs.95)
CC-MAIN-2018-05
refinedweb
263
66.94
Check out the last few weeks worth of blog posts here. One on .NET and Google's Social Graph API. Another on .NET connecting to OpenID. Another on JSON, another on JMS. When I started this blog, there were people out there who knew, they were certain, that .NET apps could not connect to systems built with non .NET technology, or could not connect to systems built with non-Microsoft technology. This was accepted as fact. I spoke with customers daily who accepted this view as the truth. It was looney. I spoke with these people, explained that .NET could connect to the system they had running in their data centers right then, and they did not believe me. The only way I could convince them of what I was saying, was to build the prototypes and proof-of-concept systems. Even then, people were incredulous. When I connected systems together, people looked at me like I had done something mystical. I remember telling people that the XML libraries in .NET supported XML namespaces, and they flatly did not believe me. How did it get that way? I think mostly because of the dear Scott McNealy and his stagecraft. Remember him? Founder of Sun Microsystems and longtime industry blowhard. He blasted .NET, repeatedly. Every chance he got. There was a brief period when he claimed that the XML that was produced by .NET apps was "proprietary." And the tech press ate this stuff up. It sold magazines. The kind of claims he made - they were so kooky to me, but the press accepted it. This was a billionaire CEO, someone with a ton of charisma, and when he said .NET produced "proprietary XML", nobody bothered to ask "what they hell does that mean?" They just printed it. And then people read it, and believed it. That was reality. So, off to work I went - building prototypes and proving interop was real, and that XML was just XML. Scott is now out of the picture, I don't know what he is doing now, I'm sure it's something beautiful - do I recall that he wanted to just enjoy his kids? Well, he's earned it. But the fact is, he is not talking, and he is certainly not talking about .NET as a dead-end lock-in technology for Windows-only shops. With McNealy's mouth no longer a factor in the industry, all the little mini-me's also evaporated. All the people that made a living just repeating McNealy's bon mots disappeared, I guess they are now making honest livings somewhere. The demagoguery in general has waned, and now customers believe what they see, rather than what they hear or read. Now when I say that .NET does JSON, nobody stands up and expresses disbelief. It's years now that people have been building .NET systems that connect to everything under the sun. Interop happens. People know it works now. It's this kind of thing that makes me start to think that a blog dedicated to .NET-Interop, ...ah...maybe it has lived past its usefulness? I mean, come on now. Does anyone doubt interop these days? Still, though, I find the novel combinations of systems to be intriguing and thought-provoking, so I'm going to continue writing about interop, exploring different angles, building proofs of concept, and so on. Cheers!
http://blogs.msdn.com/b/dotnetinterop/archive/2008/03/27/a-look-back-at-net-interop-history.aspx
CC-MAIN-2015-18
refinedweb
567
77.13
Curses Programming with Python¶ Abstract This document describes how to use the curses extension module to control text-mode displays. What is curses?¶ The curses library supplies a terminal-independent screen-painting and keyboard-handling facility for text-based terminals; such terminals include VT100s, the Linux console, and the simulated terminal provided by various programs. Display terminals support various control codes to perform common operations such as moving the cursor, scrolling the screen, and erasing areas. Different terminals use widely differing codes, and often have their own minor quirks. In a world of graphical displays, one might ask “why bother”? It’s true that character-cell display terminals are an obsolete technology, but there are niches in which being able to do fancy things with them are still valuable. One niche is on small-footprint or embedded Unixes that don’t run an X server. Another is tools such as OS installers and kernel configurators that may have to run before any graphical support is available. The curses library provides fairly basic functionality, providing the programmer with an abstraction of a display containing multiple non-overlapping windows of text. The contents of a window can be changed in various ways—adding text, erasing it, changing its appearance—and the curses library will figure out what control codes need to be sent to the terminal to produce the right output. curses doesn’t provide many user-interface concepts such as buttons, checkboxes, or dialogs; if you need such features, consider a user interface library such as Urwid.. The Windows version of Python doesn’t include the curses module. A ported version called UniCurses is available. You could also try the Console module written by Fredrik Lundh, which doesn’t use the same API as curses but provides cursor-addressable text output and full support for mouse and keyboard input. The Python curses module¶(), and mvwaddstr() into a single addstr() method. You’ll see this covered in more detail later.. Starting and ending a curses application¶(True) Terminating a curses application is much easier than starting one. You’ll need to call: curses.nocbreak() stdscr.keypad(False) echoed to the screen when you type them, for example, which makes using the shell difficult. In Python you can avoid these complications and make debugging much easier by importing the curses.wrapper() function and using it like this: from curses import wrapper def main(stdscr): # Clear screen stdscr.clear() # This raises ZeroDivisionError when i == 10. for i in range(0, 11): v = i-10 stdscr.addstr(i, 0, '10 divided by {} is {}'.format(v, 10/v)) stdscr.refresh() stdscr.getkey() wrapper(main) The wrapper() function takes a callable object and does the initializations described above, also initializing colors if color support is present. wrapper() then runs your provided callable. Once the callable returns, wrapper() will restore the original state of the terminal. The callable is called inside a try... except that catches exceptions, restores the state of the terminal, and then re-raises the exception. Therefore your terminal won’t be left in a funny state on exception and you’ll be able to read the exception’s message and traceback. Windows and Pads¶ Windows are the basic abstraction in curses. A window object represents a rectangular area of the screen, and supports) Note that the coordinate system used in curses is unusual. 
Coordinates are always passed in the order y,x, and the top-left corner of a window is coordinate (0,0). This breaks the normal convention for handling coordinates, where the x coordinate comes first. This is an unfortunate difference from most other computer applications, but it's been part of curses since it was first written, and it's too late to change things now.

Your application can determine the size of the screen by using the curses.LINES and curses.COLS variables to obtain the y and x sizes. Legal coordinates will then extend from (0,0) to (curses.LINES - 1, curses.COLS - 1).

When you call a method to display or erase text, the effect doesn't immediately show up on the display. Instead you must call the refresh() method of window objects to update the screen. This is because curses was originally written with slow 300-baud terminal connections in mind; with these terminals, minimizing the time required to redraw the screen was very important. Instead curses accumulates changes to the screen and displays them in the most efficient manner when you call refresh(). For example, if your program displays some text in a window and then clears the window, there's no need to send the original text because it's never visible.

In practice, explicitly telling curses to redraw a window doesn't complicate programming much: just make sure the screen gets redrawn before pausing to wait for user input, by first calling stdscr.refresh() or the refresh() method of some other relevant window.

A pad is a special case of a window; it can be larger than the actual display screen, and only a portion of the pad is displayed at a time. Creating a pad requires the pad's height and width, while refreshing a pad requires giving the coordinates of the on-screen area where a subsection of the pad will be displayed.

pad = curses.newpad(100, 100)
# These loops fill the pad with letters; addch() is
# explained in the next section
for y in range(0, 99):
    for x in range(0, 99):
        pad.addch(y,x, ord('a') + (x*x+y*y) % 26)

# Displays a section of the pad in the middle of the screen.
# (0,0) : coordinate of upper-left corner of pad area to display.
# (5,5) : coordinate of upper-left corner of window area to be filled
#         with pad content.
# (20, 75) : coordinate of lower-right corner of window area to be
#          : filled with pad content.
pad.refresh(0,0, 5,5, 20,75)

If you have multiple windows and pads on screen, there is a more efficient way to update the screen and prevent annoying screen flicker as each part of the screen gets updated. refresh() actually does two things:

- It calls the noutrefresh() method of each window to update an underlying data structure representing the desired state of the screen.
- It calls the doupdate() function to change the physical screen to match the desired state recorded in the data structure.

Instead you can call noutrefresh() on a number of windows to update the data structure, and then call doupdate() to update the screen.

Displaying Text¶

From a C programmer's point of view, curses can look like a twisty maze of subtly different functions. For example, addstr() displays a string at the current cursor location in the stdscr window, mvaddstr() moves to a given y,x coordinate first before displaying the string, waddstr() lets you specify a window to use instead of stdscr, and mvwaddstr() allows specifying both a window and a coordinate. Fortunately the Python interface hides all these details. stdscr is a window object like any other, and methods such as addstr() accept multiple argument forms. Usually there are four different forms: addstr(str), addstr(str, attr), addstr(y, x, str), and addstr(y, x, str, attr).

Attributes allow displaying text in highlighted forms such as boldface, underline, reverse code, or in color. They'll be explained in more detail in the next subsection.

The addstr() method takes a Python string or bytestring as the value to be displayed. The contents of bytestrings are sent to the terminal as-is. Strings are encoded to bytes using the value of the window's encoding attribute; this defaults to the default system encoding as returned by locale.getpreferredencoding().
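To make those four argument forms concrete, here is a minimal sketch; the strings, coordinates, and attributes are arbitrary examples chosen for illustration.

import curses

def demo(stdscr):
    stdscr.clear()
    stdscr.addstr("at the current cursor position\n")          # addstr(str)
    stdscr.addstr("bold text\n", curses.A_BOLD)                # addstr(str, attr)
    stdscr.addstr(5, 10, "at row 5, column 10")                # addstr(y, x, str)
    stdscr.addstr(6, 10, "reverse video", curses.A_REVERSE)    # addstr(y, x, str, attr)
    stdscr.refresh()   # nothing shows up until the window is refreshed
    stdscr.getkey()    # wait for a keypress before wrapper() restores the terminal

curses.wrapper(demo)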
The addch() methods take a character, which can be either a string of length 1, a bytestring of length 1, or an integer. Constants are provided for extension characters; these constants are integers greater than 255. For example, ACS_PLMINUS is a +/- symbol, and ACS_ULCORNER is the upper left corner of a box (handy for drawing borders). You can also use the appropriate Unicode character.

Windows remember where the cursor was left after the last operation, so if you leave out the y,x coordinates, output appears wherever the last operation left off. Because some terminals always display a flashing cursor, which can end up blinking at apparently random locations, an application that doesn't need a blinking cursor at all can call curs_set(False) to make it invisible. For compatibility with older curses versions, there's a leaveok(bool) function that's a synonym for curs_set(). When bool is true, the curses library will attempt to suppress the flashing cursor, and you won't need to worry about leaving it in odd locations.

Attributes and Color¶

Characters can be displayed in different ways. Status lines in a text-based application are commonly shown in reverse video, for example, and a text viewer may need to highlight certain words. curses supports this by letting you attach an attribute to each character cell; attributes include constants such as A_BOLD, A_REVERSE, A_UNDERLINE, and A_BLINK, and they can be combined with colors. To use color, call curses.start_color() soon after initscr(), define a foreground/background combination with curses.init_pair(n, fg, bg), and then pass curses.color_pair(n) as an attribute when displaying text. Colors are numbered, and start_color() initializes 8 basic colors when it activates color mode; the curses module defines named constants for them, such as curses.COLOR_BLACK and curses.COLOR_RED.

User Input¶

The C curses library offers only very simple input mechanisms. Python's curses module adds a basic text-input widget. (Other libraries such as Urwid have more extensive collections of widgets.)

There are two methods for getting input from a window:

- getch() refreshes the screen and then waits for the user to hit a key, displaying the key if echo() has been called earlier. You can optionally specify a coordinate to which the cursor should be moved before pausing.
- getkey() does the same thing but converts the integer to a string. Individual characters are returned as 1-character strings, and special keys such as function keys return longer strings containing a key name such as KEY_UP or ^G.

It's possible to not wait for the user using the nodelay() window method. After nodelay(True), getch() and getkey() for the window become non-blocking. To signal that no input is ready, getch() returns curses.ERR (a value of -1) and getkey() raises an exception.

The main loop of your program may look something like this:

while True:
    c = stdscr.getch()
    if c == ord('p'):
        PrintDocument()
    elif c == ord('q'):
        break  # Exit the while loop
    elif c == curses.KEY_HOME:
        x = y = 0

The curses.ascii module supplies ASCII class membership functions that take either integer or 1-character string arguments; these may be useful in writing more readable tests for such loops. The curses.textpad module supplies a text box that supports an Emacs-like set of keybindings. Various methods of the Textbox class support editing with input validation and gathering the edit results either with or without trailing spaces. Here's an example:

import curses
from curses.textpad import Textbox, rectangle

def main(stdscr):
    stdscr.addstr(0, 0, "Enter IM message: (hit Ctrl-G to send)")

    editwin = curses.newwin(5,30, 2,1)
    rectangle(stdscr, 1,0, 1+5+1, 1+30+1)
    stdscr.refresh()

    box = Textbox(editwin)

    # Let the user edit until Ctrl-G is struck.
    box.edit()

    # Get resulting contents
    message = box.gather()

See the library documentation on curses.textpad for more details.

For More Information¶

This HOWTO doesn't cover some advanced topics, such as reading the contents of the screen or capturing mouse events from an xterm instance, but the Python library page for the curses module is now reasonably complete. You should browse it next. If you're in doubt about the detailed behavior of the curses functions, consult the manual pages for your curses implementation. Because the curses API is so large, some functions aren't supported in the Python interface. Often this isn't because they're difficult to implement, but because no one has needed them yet. Also, Python doesn't yet support the menu library associated with ncurses. Patches adding support for these would be welcome; see the Python Developer's Guide to learn more about submitting patches to Python.
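As a final illustrative sketch tying together the attributes, colors, and input handling described above (assuming a color-capable terminal; wrapper() already calls start_color() when color is supported):

import curses

def main(stdscr):
    # Pair 1: white text on a blue background, used here for a status line.
    curses.init_pair(1, curses.COLOR_WHITE, curses.COLOR_BLUE)
    curses.curs_set(False)   # hide the blinking cursor

    stdscr.addstr(0, 0, " STATUS: press q to quit ",
                  curses.color_pair(1) | curses.A_BOLD)
    stdscr.refresh()

    while True:
        c = stdscr.getch()
        if c == ord('q'):
            break

curses.wrapper(main)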
- Writing Programs with NCURSES: a lengthy tutorial for C programmers.
- The ncurses man page
- The ncurses FAQ
- "Use curses... don't swear": video of a PyCon 2013 talk on controlling terminals using curses or Urwid.
- "Console Applications with Urwid": video of a PyCon CA 2012 talk demonstrating some applications written using Urwid.
http://docs.activestate.com/activepython/3.6/python/howto/curses.html
CC-MAIN-2018-39
refinedweb
1,811
56.55
One of the most common phone calls that the support team gets for Windows PowerShell is "How do I use Task Scheduler to schedule Windows PowerShell scripts?". As an administrator, you need to have full control over when scripts run in your environment. Perhaps you need to run a script only during a one-off maintenance window, or maybe you want to schedule some routine maintenance on a server so that it runs at non-peak times. Although it was possible to use Task Scheduler to invoke scripts in Windows PowerShell 2.0, it was not trivial. What's more, you were responsible for writing code to store the detailed results of your script if you wanted to view them later.

In Windows PowerShell 2.0, we introduced background jobs, which let you run commands asynchronously in the background. This allows you to get the prompt back and continue running commands at the command line while the background job runs. In keeping with our sacred vow to respect your investment in learning Windows PowerShell by reusing concepts, we reused jobs in many places with Windows PowerShell 3.0. This blog post introduces just one example of this: job scheduling. This feature allows administrators to schedule background jobs for execution at a later time or according to a particular schedule with a set of cmdlets right out of the box. One of the most valuable features of scheduled jobs in Windows PowerShell 3.0 is that we'll even take care of storing the results and output of your job.

Where to find the Job Scheduling cmdlets

The Job Scheduling cmdlets are delivered in the PSScheduledJob module that is included in Windows PowerShell 3.0 Beta and in the Windows Management Framework 3.0 Beta. There are 16 cmdlets in the PSScheduledJob module that allow you to work with scheduled jobs, the triggers that cause them to run, and some more advanced configuration. To see the cmdlets, type:

PS > Get-Command -Module PSScheduledJob | Sort-Object Noun, Verb

Capability   Name                      ModuleName
----------   ----                      ----------
Cmdlet       Add-JobTrigger            psscheduledjob
Cmdlet       Disable-JobTrigger        psscheduledjob
Cmdlet       Enable-JobTrigger         psscheduledjob
Cmdlet       Get-JobTrigger            psscheduledjob
Cmdlet       New-JobTrigger            psscheduledjob
Cmdlet       Remove-JobTrigger         psscheduledjob
Cmdlet       Set-JobTrigger            psscheduledjob
Cmdlet       Disable-ScheduledJob      psscheduledjob
Cmdlet       Enable-ScheduledJob       psscheduledjob
Cmdlet       Get-ScheduledJob          psscheduledjob
Cmdlet       Register-ScheduledJob     psscheduledjob
Cmdlet       Set-ScheduledJob          psscheduledjob
Cmdlet       Unregister-ScheduledJob   psscheduledjob
Cmdlet       Get-ScheduledJobOption    psscheduledjob
Cmdlet       New-ScheduledJobOption    psscheduledjob
Cmdlet       Set-ScheduledJobOption    psscheduledjob

Basics of the Job Scheduling Cmdlets:

The JobTrigger cmdlets let you determine when scheduled jobs actually run. You can schedule jobs to execute one time (now or at a later date), daily, weekly, when certain users log on, or when the system first boots up.

The ScheduledJob cmdlets allow you to create and configure scheduled jobs. You use these cmdlets to perform actions like registering, unregistering, enabling and disabling scheduled jobs on the computer.

The ScheduledJobOption cmdlets allow you to specify advanced settings for scheduled jobs. With these cmdlets, you can configure many of the settings you're already familiar with in Task Scheduler, such as idle conditions that must be met before the job starts. The default values for each scheduled job should be sufficient in most cases, but the flexibility is there.
An example:

Imagine that you are incredibly passionate about configuring your computers for optimal energy efficiency. You've written a few lines to take the output of Powercfg.exe /energy and extract only the items that represent the most severe infractions. You want to run this analysis every night for a certain period of time so you can understand which issues appear most frequently. With just a few lines, we can schedule our script to run on the server every night as part of an "EnergyAnalysisJob".

PS > $trigger = New-JobTrigger -Daily -At 3am
PS > Register-ScheduledJob -Name EnergyAnalysisJob -Trigger $trigger -ScriptBlock {
    powercfg.exe -energy -xml -output C:\temp\energy.xml -duration 60 | Out-Null
    $EnergyReport = (get-content C:\temp\energy.xml)
    $namespace = @{ ns = "" }
    $xPath = "//ns:EnergyReport/ns:Troubleshooter/ns:AnalysisLog/ns:LogEntry[ns:Severity = 'Error']"
    $EnergyErrors = $EnergyReport | Select-Xml -XPath $xPath -Namespace $namespace
    $EnergyErrors.Node | select Name, Description
}

Id  Name              JobTriggers  Command  Enabled
--  ----              -----------  -------  -------
1   EnergyAnalys...   {1}          ...      True

If you've ever used the Start-Job cmdlet, the syntax of Register-ScheduledJob will look very familiar to you. You can view the jobs that you have scheduled at any time by using the Get-ScheduledJob cmdlet. Scheduled jobs are stored on a per-user basis, so Get-ScheduledJob will only show the scheduled jobs that are relevant to you.

Windows PowerShell 3.0 keeps track of the results and output of scheduled jobs for you. If you come back to the server that runs the scheduled job a couple of days later, you can use the Job cmdlets (the same ones that you use to manage background jobs) to view and get the results of the EnergyAnalysisJob scheduled job. The nice thing about scheduled job results is that they are available even in different sessions! Since you are not always around to receive results when a scheduled job runs, we store them on disk so you can receive them at any time.

Here's an important "gotcha" that you should know. To get the results of scheduled jobs, you must import the PSScheduledJob module into the current session (Import-Module PSScheduledJob). Otherwise, the Job cmdlets, such as Get-Job and Receive-Job, do not return results for scheduled jobs. This is a bit of an exception to the module auto-loading behavior in Windows PowerShell 3.0. The reason is that Get-Job is in the Microsoft.PowerShell.Core snap-in, so using it doesn't automatically import the PSScheduledJob module. Also, if Get-Job were to automatically import all of the modules that implement custom job types, over time, the output would be crowded with irrelevant job types and performance might be affected by importing dozens of modules that aren't needed.

PS > Import-Module PSScheduledJob

With the PSScheduledJob module imported, Get-Job shows the results of "instances" of our scheduled jobs.

PS > Get-Job

Id  Name              State      HasMoreData  Location   Command
--  ----              -----      -----------  --------   -------
2   EventCollect...   Completed  True         localhost  ...
4   EnergyAnalys...   Completed  True         localhost  ...
6   EnergyAnalys...   Completed  True         localhost  ...

Scheduled jobs have BeginTime and EndTime properties that tell when the job instance actually ran.

PS > Get-Job EnergyAnalysis | Select-Object Name,BeginTime,EndTime

Name               BeginTime             EndTime
----               ---------             -------
EnergyAnalysisJob  3/12/2012 3:00:01 AM  3/12/2012 3:01:12 AM
EnergyAnalysisJob  3/13/2012 3:00:01 AM  3/13/2012 3:01:14 AM

To get the results of a scheduled job instance, use the Receive-Job cmdlet.
PS > Receive-Job -Name EnergyAnalysisJob

# Results will vary based on your system's configuration, but you will see records like:

Name        : USB Device not Entering Selective Suspend
Description : This device did not enter the USB Selective Suspend state. Processor power management may be prevented when this USB device is not in the Selective Suspend state. Note that this issue will not prevent the system from sleeping.

By default, Windows PowerShell keeps the results of the last 32 instances of each scheduled job. After 32 results are stored for a particular scheduled job, the oldest ones are overwritten by subsequent executions. To change the number of results saved for each scheduled job, use the MaxResultCount parameter of the Set-ScheduledJob cmdlet.

PS > Get-ScheduledJob -Name EnergyAnalysisJob | Set-ScheduledJob -MaxResultCount 100

Lastly, if you want to start a scheduled job manually rather than waiting for its triggers, you can use the DefinitionName parameter of the Start-Job cmdlet.

PS > Start-Job -DefinitionName EnergyAnalysisJob

As you can see, working with scheduled jobs in Windows PowerShell 3.0 is a lot like working with regular background jobs, but with much more control. We hope you enjoy the flexibility and simplicity of this cool new feature.

Travis Jones [MSFT]
Program Manager – Windows PowerShell
Microsoft Corporation

Hello from 2015: schemas.microsoft.com/…/2007 no longer exists, so for people landing here from searching on using PowerShell to perform scheduled jobs, use something else in the ScriptBlock. An example would be: Get-Process | Select -first 10 | Export-CSV -noTypeInformation -path c:\temp\10Processes.csv

Good information.

Hi, thanks for sharing, this is great! To get BeginTime and EndTime I have to use different property names, such as PSBeginTime and PSEndTime:
>>> Get-Job EnergyAnalysis | Select-Object Name,PSBeginTime,PSEndTime

Very nice.

Hello Jim – Jobs created with these cmdlets will show up in the regular Windows Task Scheduler under: Task Scheduler Library\Microsoft\Windows\PowerShell\ScheduledJobs. Thanks!

Hello Anon – The results of scheduled jobs will always be saved, regardless of which modules are imported. Their results will only be returned from calls to Get-Job if the PSScheduledJob module is imported. I'm not sure if that changes your feedback any. Thanks.

So, are you creating a new Task Scheduler, or will jobs that are created with these cmdlets show up in the regular Windows Task Scheduler under Computer Management? Thanks!

I don't like the fact that if you don't import the PSScheduledJob module, the results of scheduled jobs will not be saved. Imagine how many support calls this little detail will generate? Please make it so that if the scheduled-job cmdlets are used to schedule a job, the module is imported automatically.

For help about Scheduled Jobs in Windows PowerShell 3.0, see technet.microsoft.com/…/hh847863.aspx.

Hi Josheinstein – For this release we're going to be limited to the handful of triggers offered from the Task Scheduler engine. If you want to file the PowerShell eventing suggestion on Connect (connect.microsoft.com/powershell), other folks from the community will be able to vote on it and we will certainly keep track of it. Thanks!

Thanks, going to have to give this a try. Any chance of being able to use PowerShell eventing as a job trigger?
https://blogs.msdn.microsoft.com/powershell/2012/03/19/scheduling-background-jobs-in-windows-powershell-3-0/
CC-MAIN-2019-09
refinedweb
1,645
52.49
22 October 2012 05:31 [Source: ICIS news]

(recasts paragraph 6 to reflect updated completion date)

SINGAPORE (ICIS)--Singapore's first liquefied natural gas (LNG) terminal is expected to start operations in the second quarter of 2013, Second Minister for Trade and Industry S Iswaran said.

"The demand for LNG is already stronger than initially expected," he said. Iswaran was speaking at the Singapore International Energy Week, which runs from 22-25 October.

The terminal's capacity will increase to 6m tonnes/year by the end of next year, when additional jetties and regasification facilities are completed, it said. The terminal's third tank is targeted for completion by the end of next year, according to EMA.

Meanwhile, "We don't have a specific timeline at the moment [to implement the futures market]… but clearly this is an initiative that we will like to move on as soon as possible," he said.

Demand response will allow consumers to participate actively in the market by curtailing their demand in response to high prices, according to Iswaran. "This can moderate price spikes, lower energy costs, and generate system-wide savings," he said.

"Large consumers in the industry space such as electronics and the petrochemicals and chemicals sector... for whom energy is a significant proportion of their business operating costs will stand to gain [from the demand response initiative]," Iswaran said.
http://www.icis.com/Articles/2012/10/22/9605813/singapores-first-lng-terminal-to-start-operations-in-q2-13.html
CC-MAIN-2015-06
refinedweb
206
51.38
Parallel Processing Large File in Python

For parallel processing, we divide our task into sub-units. It increases the number of jobs processed by the program and reduces overall processing time.

For example, suppose you are working with a large CSV file and you want to modify a single column. We will feed the data as an array to the function, and it will process multiple values in parallel based on the number of available workers. The number of workers is based on the number of cores in your processor.

Note: using parallel processing on a smaller dataset will not improve processing time.

In this blog, we will learn how to reduce processing time on large files using the multiprocessing, joblib, and tqdm Python packages. It is a simple tutorial that can apply to any file, database, image, video, and audio.

Note: we are using the Kaggle notebook for the experiments. The processing time can vary from machine to machine.

We will be using the US Accidents (2016 – 2021) dataset from Kaggle, which consists of 2.8 million records and 47 columns.

We will import multiprocessing, joblib, and tqdm for parallel processing, pandas for data ingestion, and re, nltk, and string for text processing.

# Parallel Computing
import multiprocessing as mp
from joblib import Parallel, delayed
from tqdm.notebook import tqdm

# Data Ingestion
import pandas as pd

# Text Processing
import re
from nltk.corpus import stopwords
import string

Before we jump right in, let's set n_workers by doubling cpu_count(). As you can see, we have 8 workers.

n_workers = 2 * mp.cpu_count()

print(f"{n_workers} workers are available")

>>> 8 workers are available

In the next step, we will ingest the large CSV file using the pandas read_csv function. Then we print out the shape of the dataframe, the names of the columns, and the processing time.

Note: Jupyter's magic function `%%time` can display CPU times and wall time at the end of the process.

%%time
file_name = "../input/us-accidents/US_Accidents_Dec21_updated.csv"
df = pd.read_csv(file_name)

print(f"Shape:{df.shape}\n\nColumn Names:\n{df.columns}\n")

Output

Shape:(2845342, 47)

Column Names:
Index(['ID', 'Severity', 'Start_Time', 'End_Time', 'Start_Lat', 'Start_Lng',
       'End_Lat', 'End_Lng', 'Distance(mi)', 'Description', 'Number', 'Street',
       'Side', 'City', 'County', 'State', 'Zipcode', 'Country', 'Timezone',
       'Airport_Code', 'Weather_Timestamp', 'Temperature(F)', 'Wind_Chill(F)',
       'Humidity(%)', 'Pressure(in)', 'Visibility(mi)', 'Wind_Direction',
       'Wind_Speed(mph)', 'Precipitation(in)', 'Weather_Condition', 'Amenity',
       'Bump', 'Crossing', 'Give_Way', 'Junction', 'No_Exit', 'Railway',
       'Roundabout', 'Station', 'Stop', 'Traffic_Calming', 'Traffic_Signal',
       'Turning_Loop', 'Sunrise_Sunset', 'Civil_Twilight', 'Nautical_Twilight',
       'Astronomical_Twilight'],
      dtype="object")

CPU times: user 33.9 s, sys: 3.93 s, total: 37.9 s
Wall time: 46.9 s

clean_text is a straightforward function for processing and cleaning the text. We will get the English stopwords using nltk.corpus and use them to filter out stop words from the text line. After that, we will remove special characters and extra spaces from the sentence. It will be the baseline function to determine processing time for serial, parallel, and batch processing.
def clean_text(text):
    # Remove stop words
    stops = stopwords.words("english")
    text = " ".join([word for word in text.split() if word not in stops])
    # Remove Special Characters
    text = text.translate(str.maketrans('', '', string.punctuation))
    # removing the extra spaces
    text = re.sub(' +', ' ', text)
    return text

For serial processing, we can use the pandas .apply() function, but if you want to see the progress bar, you need to activate tqdm for pandas and then use the .progress_apply() function.

We are going to process the 2.8 million records and save the result back to the "Description" column.

%%time
tqdm.pandas()

df['Description'] = df['Description'].progress_apply(clean_text)

Output

It took 9 minutes and 5 seconds for the high-end processor to serially process 2.8 million rows.

100% 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 2845342/2845342 [09:05<00:00, 5724.25it/s]
CPU times: user 8min 14s, sys: 53.6 s, total: 9min 7s
Wall time: 9min 5s

There are various ways to parallel process the file, and we are going to learn about all of them.

multiprocessing is a built-in Python package that is commonly used for parallel processing of large files.

We will create a multiprocessing Pool with 8 workers and use the map function to initiate the process. To display progress bars, we are using tqdm.

The map function takes two arguments. The first is the function, and the second is an iterable of arguments. Learn more by reading the documentation.

%%time
p = mp.Pool(n_workers)

df['Description'] = p.map(clean_text, tqdm(df['Description']))

Output

We have improved our processing time by almost 3X. The processing time dropped from 9 minutes 5 seconds to 3 minutes 51 seconds.

100% 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 2845342/2845342 [02:58<00:00, 135646.12it/s]
CPU times: user 5.68 s, sys: 1.56 s, total: 7.23 s
Wall time: 3min 51s

We will now learn about another Python package to perform parallel processing. In this section, we will use joblib's Parallel and delayed to replicate the map function.

- Parallel requires two arguments: n_jobs = 8 and backend = multiprocessing.
- Then, we will add clean_text to the delayed function.
- Create a loop to feed a single value at a time.

The process below is quite generic, and you can modify your function and array according to your needs. I have used it to process thousands of audio and video files without any issue.

Recommended: add exception handling using `try:` and `except:`

def text_parallel_clean(array):
    result = Parallel(n_jobs=n_workers, backend="multiprocessing")(
        delayed(clean_text)(text)
        for text in tqdm(array)
    )
    return result

Add the "Description" column to text_parallel_clean().

%%time
df['Description'] = text_parallel_clean(df['Description'])

Output

It took our function 13 seconds more than the multiprocessing Pool. Even then, Parallel is about 5 minutes faster than serial processing.

100% 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 2845342/2845342 [04:03<00:00, 10514.98it/s]
CPU times: user 44.2 s, sys: 2.92 s, total: 47.1 s
Wall time: 4min 4s

There is a better way to process large files: splitting them into batches and processing the batches in parallel.

Let's start by creating a batch function that will run the clean_text function on a single batch of values.

Batch Processing Function

def proc_batch(batch):
    return [
        clean_text(text)
        for text in batch
    ]

Splitting the File into Batches

The function below will split the file into multiple batches based on the number of workers. In our case, we get 8 batches.
def batch_file(array, n_workers):
    file_len = len(array)
    batch_size = round(file_len / n_workers)
    batches = [
        array[ix:ix+batch_size]
        for ix in tqdm(range(0, file_len, batch_size))
    ]
    return batches

batches = batch_file(df['Description'], n_workers)

>>> 100% 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 8/8 [00:00<00:00, 280.01it/s]

Running Parallel Batch Processing

Finally, we will use Parallel and delayed to process the batches.

Note: To get a single array of values, we have to run a list comprehension, as shown below.

%%time
batch_output = Parallel(n_jobs=n_workers, backend="multiprocessing")(
    delayed(proc_batch)(batch)
    for batch in tqdm(batches)
)

df['Description'] = [j for i in batch_output for j in i]
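As a stylistic alternative, the flattening in the last line can also be written with the standard library's itertools.chain; it produces the same "Description" column as the list comprehension above.

from itertools import chain

# Equivalent to: df['Description'] = [j for i in batch_output for j in i]
df['Description'] = list(chain.from_iterable(batch_output))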
https://cmtech.live/2022/08/04/parallel-processing-large-file-in-python/
CC-MAIN-2022-40
refinedweb
1,152
58.69
Agent - Protocol engine for the QMF agent.

#include <qmf/engine/Agent.h>

Member function descriptions:

- Allocate an object-id for an object that will be managed by the application.
- Get the next application event from the agent engine.
- This method is called periodically so the agent can supply a heartbeat.
- Respond to a method request.
- Remove and discard one event from the head of the event queue.
- Remove and discard one message from the head of the transmit queue.
- Indicate the completion of a query. This is not used for SYNC_START requests.
- Raise an event into the QMF network.
- Configure the directory path for storing persistent data.
- Configure the directory path for files transferred over QMF.
http://qpid.apache.org/apis/0.12/cpp/html/a00006.html
CC-MAIN-2013-48
refinedweb
114
70.6
Short IDE Delays

Introduction

This page describes the feature "short IDE delays". This is just a description of a technique for shortening the probing time used during certain IDE operations (I think primarily on startup).

It was noted on a test machine that IDE initialization takes a significant percentage of the total bootup time. Almost all of this time is spent busywaiting in the routine ide_delay_50ms(). It is trivial to modify the value of the delay used in this routine. Reducing the duration of the delay in the ide_delay_50ms() routine provides a substantial reduction in the overall bootup time for the kernel on a typical desktop system. It also has potential for use in embedded systems where PCI-based IDE drives are used.

In the patch shown here, the delay was modified from 50 milliseconds to 5 milliseconds. However, for particular hardware, it may be desirable to tune the delay to the lowest possible value.

This change may only be appropriate for embedded hardware. In a desktop environment, a variety of legacy hardware may be encountered which may need these relatively long delays.

Rationale

In testing on one desktop system, IDE delays accounted for about 70% of the total kernel bootup time. These delays may not be needed for proper operation of the hardware for a particular consumer electronics product.

Downloads

Patch

- Patch for 2.6.6: short-ide-delays.patch
- Patch for 2.6.8-rc2: ide-delay-2.6.8-rc2.patch

Following is what the patch looked like for 2.6.6. Recently, the ide_delay_50ms() routine was replaced with multiple independent calls to msleep(50), which makes more recent patches more difficult.

diff -ruN linux-2.6.6.orig/drivers/ide/ide.c linux-2.6.6-kfi/drivers/ide/ide.c
--- linux-2.6.6.orig/drivers/ide/ide.c  2004-05-09 19:32:26.000000000 -0700
+++ linux-2.6.6-kfi/drivers/ide/ide.c   2004-06-09 21:14:01.000000000 -0700
@@ -1401,10 +1401,10 @@
 void ide_delay_50ms (void)
 {
 #ifndef CONFIG_BLK_DEV_IDECS
-	mdelay(50);
+	mdelay(5);
 #else
 	__set_current_state(TASK_UNINTERRUPTIBLE);
-	schedule_timeout(1+HZ/20);
+	schedule_timeout(1+HZ/200);
 #endif /* CONFIG_BLK_DEV_IDECS */
 }

Utility programs

None

How To Use

Apply the patch, compile the kernel, and measure the kernel boot time.

Sample Results

As an experiment, this code (located in the file drivers/ide/ide.c) was modified to only delay 5 milliseconds instead of 50 milliseconds. The machine used was an HP XW4100 Linux workstation system, with the following characteristics:

- Pentium 4 HT processor, running at 3GHz
- 512 MB RAM
- Western Digital 40G hard drive on hda
- Generic CDROM drive on hdc

When a kernel with this change was run on the test machine, the total time for the ide_init() routine dropped from 3327 milliseconds to 339 milliseconds. The total time spent in all invocations of ide_delay_50ms() was reduced from 5471 milliseconds to 552 milliseconds. The overall bootup time was reduced accordingly, by about 5 seconds. The ide devices were successfully detected, and the devices operated without problem on the test machine. However, this configuration was not tested exhaustively.

Status

Todd Poynor posted a patch for this to LKML on July 30, 2004. See the discussion thread on LKML for details.

This patch was rejected by Alan Cox, Jeff Garzik and Mark Lord. See the thread mentioned above for details.
The reasons amounted to:

- the IDE driver is delicate and temperamental; changing it would break things
  - this ignored the fact that the submitted patch only changed things when a config option was enabled
  - the counterargument to this is that in this situation, the patch would not get much testing by mainline users
  - if users tried this out, they might have problems and they would then bug the IDE maintainers
    - Alan suggested having the option "taint" the kernel, to avoid bug reports
- newer hardware (SATA) won't have these problems and the probes should work faster (in other words, if you ignore this problem it will go away)
- the timeouts shouldn't be correlated to each other anyway. That is, these timeouts are used by different parts of the IDE code for different operations. The fact that they currently have the same timeout value is just happenstance, and shouldn't be locked in with a single value definition.

Alan Cox summarized things by saying:

If you want to speed this up then the two bits that the initial proposal and Jeff have sensibly come up with are
- Are we doing too many probes
- Should we switch to proper reset polling
For certain cases (PPC spin up) we actually have switched to doing drive spin up this way, I certainly have no objection to doing the rest of the boot optimisation by following the standards carefully.

Future Work

Here is a list of things that could be worked on for this feature:

- make a config option for the value of the delay
- try to verify that the change is safe to use
- check to see if these probes are used during runtime (not just bootup time)
- test as much hardware as possible
- ask about the safety of this on LKML
http://elinux.org/index.php?title=Short_IDE_Delays&diff=4938&oldid=1585
CC-MAIN-2017-04
refinedweb
861
58.52
Execute query when installing addon

Is it possible to execute some queries when somebody installs my module? And is it possible to execute other queries when uninstalling the same module?

Answer:

The init_xml that you found is OK. For an action at installation time, you can also use the "function" tag in any normal XML data file, thus calling a function from the specified model; see the example of the "function" tag in the answer linked here.

For an action at uninstall time (as well as at install time), the following workaround may work.

Python [8.0 API]:

from openerp import models, fields, api
import logging

_logger = logging.getLogger(__name__)

class handle_install_uninstall(models.Model):
    _name = "handle.install.uninstall"

    name = fields.Char('Name')

    @api.model
    def create(self, vals):
        _logger.info("Installing...")
        # installation time code...
        # you can use self.env.cr or self._cr here...
        return super(handle_install_uninstall, self).create(vals)

    @api.multi
    def unlink(self):
        _logger.info("Uninstalling...")
        # uninstall time code...
        # you can use self.env.cr or self._cr here...
        return super(handle_install_uninstall, self).unlink()

XML:

<openerp>
    <data>
        <record id="handle_install_uninstall_rec" model="handle.install.uninstall">
            <field name="name">Install - uninstall handler</field>
        </record>
    </data>
</openerp>

Add the above XML file to the __openerp__.py manifest as a normal 'data' XML file. A record of this model will then be created at installation time, so the "create" method is called and your code is executed. When you uninstall the module, the record created this way will be deleted at uninstall time, so if this deletion does not bypass the Odoo ORM, "unlink" will be called and your uninstallation-time code will be executed.

Please make sure that you do not create more than one record of this object in the same database, in order to have your code called only once. I never tried this, but normally it should work. There is probably a better way to call a function at uninstallation time, but I do not know of one.

All worked perfectly, when installing AND when uninstalling. Thank you very much.
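To make the "installation time code" placeholder above concrete, a raw query could be issued through the cursor roughly as follows; the table, column, and value here are made-up examples for illustration, not part of the original answer.

@api.model
def create(self, vals):
    _logger.info("Installing...")
    # Hypothetical example query; replace with whatever SQL you actually need.
    self.env.cr.execute(
        "UPDATE res_partner SET comment = %s WHERE comment IS NULL",
        ("Initialized by my module",),
    )
    return super(handle_install_uninstall, self).create(vals)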
https://www.odoo.com/forum/help-1/question/execute-query-when-installing-addon-86598
CC-MAIN-2017-34
refinedweb
382
51.95
getprfient, getprfinam, putprfinam - Manipulate file control database entry (Enhanced Security)

Security Library (libsecurity.a)

#include <sys/types.h>
#include <sys/security.h>
#include <prot.h>

struct pr_file *getprfient(void);

struct pr_file *getprfinam(char *name);

void setprfient(void);

void endprfient(void);

int putprfinam(char *name, struct pr_file *pr);

name: Specifies a file control database entry name.
pr: Specifies a file control database entry structure.

The getprfient() and getprfinam() functions each return a pointer to an object with the following structure containing the separated-out fields of a line in the file control database. Each line in the database contains a pr_file structure, declared in the prot.h header file as follows:

struct f_field {
    char    *fd_name;    /* Holds full pathname */
    ushort   fd_uid;     /* uid of owner */
    ushort   fd_gid;     /* gid of group */
    ushort   fd_mode;    /* permissions */
    char     fd_type[2]; /* file type (one of r,b,c,d,f,s) */
    acle_t  *fd_acl;     /* access control list for file */
    int      fd_acllen;  /* number of entries in fd_acl */
};

struct f_flag {
    unsigned short
        fg_name:1,  /* Is fd_name set? */
        fg_acl:1;   /* Is fd_acl set? */
};

struct pr_file {
    struct f_field ufld;
    struct f_flag  uflg;
};

The getprfient() function when first called returns a pointer to the first pr_file structure in the database; thereafter, it returns a pointer to the next pr_file structure in the database, so successive calls can be used to search the database. The getprfinam() function searches from the beginning of the database until a file name matching name is found, and returns a pointer to the particular structure in which it was found. If an end-of-file or an error is encountered on reading, these functions return a null pointer.

A call to the setprfient() function has the effect of rewinding the file control database to allow repeated searches. The endprfient() function can be called to close the file control database when processing is complete.

The putprfinam() function puts a new or replaced file control entry pr with key name into the database. If the uflg.fg_name field is 0 (zero), the requested entry is deleted from the file control database. The putprfinam() function locks the database for all update operations, and performs an endprfient() after the update or failed attempt.

The file control database stores a list of entries for security-relevant files. The fd_type field holds a one-character file type indicator: r (regular), b (block-special), c (character-special), d (directory), f (FIFO), or s (symbolic link). The fd_acl field references the internal representation of the file's access control list. The fd_acllen field is the number of entries on that list.

Programs using these functions must be compiled with -lsecurity.

The value returned by getprfinam() and getprfient() refers to a structure that is overwritten by calls to these functions. To retrieve an entry, modify it, and replace it in the database, you must copy the entry using structure assignment and supply the modified buffer to putprfinam().

The getprfient() and getprfinam() functions return null pointers on EOF or an error. The putprfinam() function returns a value of 0 (zero) if it cannot add or update the entry.

Files: Description file of directories, devices, control, and commands modified for security. General security databases file.
http://backdrift.org/man/tru64/man3/putprfinam.3.html
CC-MAIN-2017-09
refinedweb
514
53.21