| text (string, lengths 0 to 27.6k) | python (int64, 0 or 1) | DeepLearning or NLP (int64, 0 or 1) | Other (int64, 0 or 1) | Machine Learning (int64, 0 or 1) | Mathematics (int64, 0 or 1) | Trash (int64, 0 or 1) |
|---|---|---|---|---|---|---|
I read on wikipedia :
"In general, for a radix r's complement encoding, with r the base (radix) of the number system, an integer part of m digits and fractional part of n digits, then the r's complement of a number 0 ≤ N < r^(m−1)−r^(−n) is determined by the formula:
N** = (r^m − N) mod (r^m) "
I don't understand this: does the number of digits, i.e. m, depend on the radix r?
For ex : If I want to find 100's complement of 97 then is m=2 or m=1 ?
For m=2, I get the answer as 9903
For m=1, I get the answer as 03
So should I take m=2 or m=1 ?
| 0 | 0 | 0 | 0 | 1 | 0 |
I am using a 3rd-party "rotator" object, which is providing a smooth, random rotation along the surface of a sphere. The rotator is used to control a camera (see rotator.h/c in source code for xscreensaver).
The output of the rotator is latitude and longitude.
What I want is for the camera to stay above the "equator" - thus limited to a hemisphere.
I'd rather not modify the rotator itself. So I could take the latitude output and use the absolute value of it. However, smooth rotator movement across the equator would not produce smooth camera motion: it would bounce.
I suppose I could scale output latitude from the rotator from its current range to my target range: e.g. f(lat) = (lat+1)/2 would map the (0, 1) range to (0.5, 1). I.e. map the whole "globe" to the northern hemisphere. The movement would still be smooth. But what would be intended as the "south pole" by the rotator would become the "equator" for my camera. Wouldn't that result in strange motion? Maybe discontinuities? I'm not sure.
Is there another way to map a sphere (latitude and longitude) to a hemisphere, smoothly?
Update:
Thanks for your attention and responses. A couple of people have asked for clarification on "smooth". I mean not jerky: a small change in velocity of the rotator should be mapped to a small change in velocity of the camera. If I just took the absolute value of the latitude, a zero change in velocity of the rotator as it crossed the equator would translate to an abrupt sign-flip of the velocity of the camera (a.k.a. a bounce).
IIRC this is equivalent to requiring that the first derivative of the velocity be continuous. Continuous second derivative might be nice but I don't think it's essential.
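A minimal sketch (Python, and assuming the rotator's latitude is normalized to -1..1, which is an assumption based on the f(lat) = (lat+1)/2 idea above) of that linear remap; because it is affine, it scales the camera's velocity by a constant 0.5 and so cannot introduce a bounce:

def to_hemisphere(lat):
    # Linear remap (-1, 1) -> (0, 1): the whole globe mapped onto the
    # northern hemisphere; velocity is scaled by a constant, so no sign flip.
    return (lat + 1.0) / 2.0

As noted above, this does turn the rotator's south pole into the camera's equator, so whether the resulting motion looks natural is a separate question.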
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm making a line graph with PHP using imagecreate() and imageline(). I'm trying to figure out how to do the calculation to find the y position for each point on the graph.
Here are a few values that would go in the graph:
$values[jan] = .84215;
$values[feb] = 1.57294;
$values[mar] = 3.75429;
Here is an example of the line graph. The x-axis labels are positioned at the vertical middle of the x-axis lines. The gap between the x-axis lines is 25px.
How would you do the calculation to find the y-axis for the values in the array above?
[Example graph omitted: the y-axis labels run 5.00, 4.75, ..., 0.25, 0.00; the x-axis labels are Jan through Dec.]
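A sketch of the usual value-to-pixel mapping (written in Python for brevity; the arithmetic carries straight over to PHP). The names top_y, axis_max, step and pixels_per_step are assumptions derived from the 0.00 to 5.00 axis and the 25px spacing described above:

def value_to_y(value, top_y=0, axis_max=5.0, step=0.25, pixels_per_step=25):
    # Each 0.25 on the axis corresponds to 25 pixels; screen y grows downward,
    # so larger values map to smaller y coordinates.
    return top_y + (axis_max - value) / step * pixels_per_step

# e.g. 0.84215 -> top_y + (5.0 - 0.84215) / 0.25 * 25 = top_y + 415.785 px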
| 0 | 0 | 0 | 0 | 1 | 0 |
I have a somewhat large document and want to do stop-word elimination and stemming on the words of this document with Python. Does anyone know an off-the-shelf package for these?
If not, code which is fast enough for large documents is also welcome.
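One off-the-shelf option is NLTK; a minimal sketch of a stop-word removal plus stemming pipeline (assuming the punkt and stopwords corpora have been downloaded once via nltk.download):

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stop = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(text):
    # lowercase, tokenize, drop punctuation and stop words, stem the rest
    tokens = nltk.word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop]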
Thanks
| 0 | 1 | 0 | 0 | 0 | 0 |
This is a difficult one; I've been racking my brains but I just can't come up with a sensible name for these variables. Someone will probably instantly hit on one.
Take these example actions:
"throw bottle at wall"
"push John into door"
"attack Ogre with sword"
action thing at/with/on/in/to thing
I need a sensible name for the first "thing", and a sensible name for the second "thing" if I had to define a variable for each. So... like "Interactor" and "Interactee"... but a proper name.
I've been trying to explain this to a few people and I can't seem to get the concept across, so feel free to ask me to clarify.
| 0 | 1 | 0 | 0 | 0 | 0 |
I am preparing a task for a computer vision class, which involves training a simple classifier after extracting features from images. Since machine learning is not the main topic here, I don't want students to implement a learning algorithm from scratch. So, I have to recommend them some reference implementations. I believe the decision tree classifier is suitable for that.
The problem is that the variety of languages allowed for the class is quite large: C++, C#, Delphi. Also, I don't want students to spend a lot of time on technical issues like linking a library. WEKA is great for Java. We could also use OpenCV with all the wrappers, but it is quite big and clumsy, while I want something simple and sweet.
So, do you know any simple C++/C#/Delphi libraries for learning decision trees?
| 0 | 0 | 0 | 1 | 0 | 0 |
I have the following script:
#!/bin/sh
r=3
r=$((r+5))
echo r
However, I get this error:
Syntax error at line 3: $ unexpected.
I don't understand what I'm doing wrong. I'm following this online guide to the letter http://www.unixtutorial.org/2008/06/arithmetic-operations-in-unix-scripts/
| 0 | 0 | 0 | 0 | 1 | 0 |
Sometimes I see and have used the following variation for a fast divide in C++ with floating point numbers.
// orig loop
double y = 44100.0;
for(int i=0; i<10000; ++i) {
double z = x / y;
}
// alternative
double y = 44100;
double y_div = 1.0 / y;
for(int i=0; i<10000; ++i) {
double z = x * y_div;
}
But someone hinted recently that this might not be the most accurate way.
Any thoughts?
| 0 | 0 | 0 | 0 | 1 | 0 |
I need to represent these mathematical equations in code and to solve them:
2x = 3y
3y = 4z
2x + 3y + 4z = 1
Please advise.
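One way to set this up numerically is to rewrite the equations as a standard linear system A*v = b and hand it to a linear solver; a sketch using Python with NumPy (the library choice is an assumption, any linear algebra package works the same way):

import numpy as np

# 2x - 3y      = 0   (from 2x = 3y)
#      3y - 4z = 0   (from 3y = 4z)
# 2x + 3y + 4z = 1
A = np.array([[2.0, -3.0,  0.0],
              [0.0,  3.0, -4.0],
              [2.0,  3.0,  4.0]])
b = np.array([0.0, 0.0, 1.0])

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # 1/6, 1/9, 1/12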
| 0 | 0 | 0 | 0 | 1 | 0 |
By huge numbers, I mean if you took a gigabyte (instead of 4/8 bytes etc.) and tried to add/subtract/multiply/divide it by some other arbitrarily large (or small) number.
Adding and subtracting are rather easy (one k/m/byte at a time):
out_byteN = a_byteN + b_byteN + overflowBit
For every byte, thus I can add/subtract as I read the number from the disk and not risk running out of RAM.
For multiplying/dividing, simply do the above in a loop.
But what about taking the nth root of a HUGE number?
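For the nth root, one common approach is a binary search on the answer, since it only needs multiplication and comparison of big numbers, operations already available here. A sketch in Python (whose ints are arbitrary precision; with a disk-backed number the same loop applies using your own multiply and compare):

def integer_nth_root(x, n):
    # Largest r such that r**n <= x, found by binary search.
    lo, hi = 0, 1
    while hi ** n <= x:          # grow an upper bound
        hi *= 2
    while lo < hi - 1:
        mid = (lo + hi) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid
    return lo

print(integer_nth_root(10**30, 3))   # 10**10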
| 0 | 0 | 0 | 0 | 1 | 0 |
I need to find the most optimal combination of coins that makes up a certain dollar amount. So essentially, I want to use the least amount of coins to get there.
For example:
if a currency system has the coins: {13, 8, 1}, the greedy solution would make change for 24 as {13, 8, 1, 1, 1}, but the true optimal solution is {8, 8, 8}.
I am looking to write this in Javascript, but pseudocode is fine as I'm sure that would help more people.
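A sketch of the standard dynamic-programming approach (shown in Python here, but it translates line for line to JavaScript); it computes the minimum coin count, and the chosen coins can be recovered by also recording which coin produced each best value:

def min_coins(coins, amount):
    # best[a] = fewest coins needed to make amount a (None if impossible)
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins if c <= a and best[a - c] is not None]
        if options:
            best[a] = min(options) + 1
    return best[amount]

print(min_coins([13, 8, 1], 24))   # 3, i.e. {8, 8, 8}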
| 0 | 0 | 0 | 0 | 1 | 0 |
Why does this code return the wrong value?
int i=Integer.MAX_VALUE+1;
long l=Integer.MAX_VALUE+1;
System.out.println(l);
System.out.println(i);
| 0 | 0 | 0 | 0 | 1 | 0 |
What is the property that makes an optimization problem unconstrained?
| 0 | 0 | 0 | 0 | 1 | 0 |
I have coordinates for 4 vectors defining a quad and another one for its normal. I am trying to get the rotation of the quad. I get good results for rotation on X and Y just using the normal, but I got stuck getting the Z, since I've used just 1 vector.
Here's my basic test using Processing and toxiclibs(Vec3D and heading methods):
import toxi.geom.*;
Vec3D[] face = {new Vec3D(1.1920928955078125e-07, 0.0, 1.4142135381698608),new Vec3D(-1.4142134189605713, 0.0, 5.3644180297851562e-07),new Vec3D(-2.384185791015625e-07, 0.0, -1.4142135381698608),new Vec3D(1.4142136573791504, 0.0, 0.0),};
Vec3D n = new Vec3D(0.0, 1.0, 0.0);
print("xy: " + degrees(n.headingXY())+"\t");
print("xz: " + degrees(n.headingXZ())+"\t");
print("yz: " + degrees(n.headingYZ())+"
");
println("angleBetween x: " + degrees(n.angleBetween(Vec3D.X_AXIS)));
println("angleBetween y: " + degrees(n.angleBetween(Vec3D.X_AXIS)));
println("angleBetween z: " + degrees(n.angleBetween(Vec3D.X_AXIS)));
println("atan2 x: " + degrees(atan2(n.z,n.y)));
println("atan2 y: " + degrees(atan2(n.z,n.x)));
println("atan2 z: " + degrees(atan2(n.y,n.x)));
And here is the output:
xy: 90.0 xz: 0.0 yz: 90.0
angleBetween x: 90.0
angleBetween y: 90.0
angleBetween z: 90.0
atan2 x: 0.0
atan2 y: 0.0
atan2 z: 90.0
How can I get the rotation (around its centre/normal) for Z of my quad?
| 0 | 0 | 0 | 0 | 1 | 0 |
Just getting started with Lucene.Net. I indexed 100,000 rows using the standard analyzer, ran some test queries, and noticed plural queries don't return results if the original term was singular. I understand the snowball analyzer adds stemming support, which sounds nice. However, I'm wondering if there are any drawbacks to going with snowball over standard? Am I losing anything by going with it? Are there any other analyzers out there to consider?
| 0 | 1 | 0 | 0 | 0 | 0 |
I just downloaded the openjdk source and came to the realization that nearly all of the java.lang.Math class was implemented in native c/c++ code. I was wondering if there were any implementations that were fully written in java.
| 0 | 0 | 0 | 0 | 1 | 0 |
The most efficient way to code powers of two is by bit shifting of integers.
1 << n gives me 2^n
However, if I have a number that is larger than the largest value allowed in an int or a long, what can I use to efficiently manipulate powers of 2?
(I need to be able to perform addition, multiplication, division and modulus operations on the number)
| 0 | 0 | 0 | 0 | 1 | 0 |
I have a set of documents, and I want to return a list of tuples where each tuple has the date of a given document and the number of times a given search term appears in that document. My code (below) works, but is slow, and I'm a n00b. Are there obvious ways to make this faster? Any help would be much appreciated, mostly so that I can learn better coding, but also so that I can get this project done faster!
import nltk
from nltk.corpus import PlaintextCorpusReader

def searchText(searchword):
    counts = []
    corpus_root = 'some_dir'
    wordlists = PlaintextCorpusReader(corpus_root, '.*')
    for id in wordlists.fileids():
        # the date is encoded in the file name: characters 4-11 are YYYYMMDD
        date = id[4:12]
        month = date[-4:-2]
        day = date[-2:]
        year = date[:4]
        raw = wordlists.raw(id)
        tokens = nltk.word_tokenize(raw)
        text = nltk.Text(tokens)
        count = text.count(searchword)
        counts.append((month, day, year, count))
    return counts
| 0 | 1 | 0 | 0 | 0 | 0 |
I'm having trouble solving a simple maths problem. My algebra skills are pretty embarrassing.
I've programmed a volume slider to give me a decibel gain value.
db_gain=(x * (8 / 5)) - 90;
For the above I know what x is (the slider thumb position) and I use it to find the db_gain.
How can I switch this around so that, given the db_gain, I can find x (the thumb position)?
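Rearranging the formula algebraically gives the inverse mapping; a quick sketch (Python used only to show the arithmetic):

def gain_from_slider(x):
    return x * (8 / 5) - 90

def slider_from_gain(db_gain):
    # invert db_gain = x*(8/5) - 90  =>  x = (db_gain + 90) * 5/8
    return (db_gain + 90) * 5 / 8

print(slider_from_gain(gain_from_slider(50)))   # 50.0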
| 0 | 0 | 0 | 0 | 1 | 0 |
Hi,
this is a factorial method, but it prints 0 in the console. Please help me, thanks.
public class Demo {
    public static void main(String[] args) {
        Demo obj = new Demo();
        System.out.println(obj.factorial(500));
    }

    public int factorial(int n) {
        int fact = 1;
        for (int i = 2; i <= n; i++) {
            fact = fact * i;
        }
        return fact;
    }
}
EDITED: this version returns Infinity!
public class Demo {
    public static void main(String[] args) {
        Demo obj = new Demo();
        System.out.println(obj.factorial(500));
    }

    public double factorial(long n) {
        double fact = 1;
        for (int i = 2; i <= n; i++) {
            fact = fact * i;
        }
        return fact;
    }
}
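For what it's worth, 500! has well over a thousand decimal digits, so it overflows int and exceeds double's range; an exact result needs arbitrary-precision integers (java.math.BigInteger in Java). A quick cross-check of the magnitude in Python, whose integers are unbounded:

import math
print(len(str(math.factorial(500))))   # over 1100 digits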
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm trying to calculate the absolute deviation of a vector online, that is, as each item in the vector is received, without using the entire vector. The absolute deviation is the sum of the absolute differences between each item in a vector and the mean: D = |x1 - mean| + |x2 - mean| + ... + |xn - mean|.
I know that the variance of a vector can be calculated in such a manner. Variance is similar to absolute deviation, but each difference is squared: variance_n = ((x1 - mean)^2 + ... + (xn - mean)^2) / n.
The online algorithm for variance is as follows:
n = 0
mean = 0
M2 = 0

def calculate_online_variance(x):
    global n, mean, M2              # running totals kept at module level
    n = n + 1
    delta = x - mean
    mean = mean + delta/n
    M2 = M2 + delta*(x - mean)      # This expression uses the new value of mean
    variance_n = M2/n
    return variance_n
Is there such an algorithm for calculating absolute deviance? I cannot formulate a recursive definition myself, but wiser heads may prevail!
| 0 | 0 | 0 | 0 | 1 | 0 |
Once again I need your help. I have a file in Japanese and I want to translate it into English using C++. Since I don't think I can use any of Google's APIs from C++, any general idea could prove helpful for me. Please suggest something.
Thanks a lot
Owais Masood
| 0 | 1 | 0 | 0 | 0 | 0 |
The function a = 2 ^ b can quickly be calculated for any b by doing a = 1 << b.
What about the other way round, getting the value of b for any given a? It should be relatively fast, so logs are out of the question. Anything that's not O(1) is also bad.
I'd be happy with "can't be done" too, if it's simply not possible to do without logs or a search type thing.
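If built-ins are acceptable, many environments expose this directly. In Python, for instance (shown purely as an illustration of the idea), the bit length of an exact power of two gives the exponent without logarithms:

a = 1 << 20
b = a.bit_length() - 1   # 20, valid when a is an exact power of two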
| 0 | 0 | 0 | 0 | 1 | 0 |
Given any two points on a compass (Start range and End range) that form a range, for example from 270 degrees (Start range) to 45 degrees (End range), and given another point, say 7, how can I work out if that point is between Start range and End range?
I'm trying to write some code to work out if the wind (the extra point in the example above) is blowing from the sea or from the land, where the land is defined by Start range and End range.
Many Thanks
Andy
Update: 11/10/2010 18:46 BST
From @sth's solution, the following seems to work as expected.
#!/usr/bin/perl -w
sub isoffshore {
    my ($beachstart, $beachend, $wind) = @_;
    if ($beachend < $beachstart) {
        $beachend += 360;
    }
    if ($wind < $beachstart) {
        $wind += 360;
    }
    if ($wind <= $beachend) {
        print("Wind is Onshore\n");
        return 0;
    } else {
        print("Wind is Offshore\n");
        return 1;
    }
}
isoffshore ("0","190","3"); #Should be onshore
isoffshore ("350","10","11"); #Should be offshore
isoffshore ("270","90","180");#Should be offshore
isoffshore ("90","240","0"); #Should be offshore
isoffshore ("270","90","180");#Should be offshore
isoffshore ("0","180","90"); #Should be onshore
isoffshore ("190","0","160"); #Should be offshore
isoffshore ("110","240","9"); #Should be offshore
isoffshore ("0","180","9"); #Should be onshore
isoffshore ("0","180","179"); #Should be onshore
Results
@localhost ~]$ ./offshore2.pl
Wind is Onshore
Wind is Offshore
Wind is Offshore
Wind is Offshore
Wind is Offshore
Wind is Onshore
Wind is Offshore
Wind is Offshore
Wind is Onshore
Wind is Onshore
| 0 | 0 | 0 | 0 | 1 | 0 |
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
If they are different, then how do I calculate a vector given a 3d point?
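A common convention, sketched below in Python purely as an illustration: a point is a location, a vector is a displacement, and the vector from point A to point B is obtained by componentwise subtraction:

def vector_between(a, b):
    # a and b are (x, y, z) points; the result is the displacement from a to b
    return (b[0] - a[0], b[1] - a[1], b[2] - a[2])

print(vector_between((1, 2, 3), (4, 6, 3)))   # (3, 4, 0)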
| 0 | 0 | 0 | 0 | 1 | 0 |
I have a very simple linear classification problem, which is to work out a linear classifier for the following three classes of points:
Class 1: points (0,1) (1,0)
Class 2: points (-1,0) (1,0)
Class 3: points (0,-1) (1,-1)
I manually used a random initial weight [1 0; 0 1] (a 2x2 matrix) and a random initial bias [1,1]. Applying each iteration to the six samples, I finally get a classification boundary of X=-1 and Y=-1, so when X and Y are both > -1 it is class 1;
if X <= -1 and Y > -1, it is class 2;
if X > -1 and Y <= -1, it is class 3.
After plotting this on the graph, I think it has some problems, since the decision boundary crosses samples in class 2 and class 3; I wonder if that is acceptable. By observing the graph, I would say the ideal classification would be x = -1/2 and y = 1/2, but I really cannot get that result after calculation.
Please kindly share your thoughts with me; thanks in advance.
| 0 | 1 | 0 | 1 | 0 | 0 |
I need to do two things. First, find in a given text the most used words and word sequences (limited to length n).
Example:
Lorem *ipsum* dolor sit amet, consectetur adipiscing elit. Nunc auctor urna sed urna mattis nec interdum magna ullamcorper. Donec ut lorem eros, id rhoncus nisl. Praesent sodales lorem vitae sapien volutpat et accumsan lorem viverra. Proin lectus elit, cursus ut feugiat ut, porta sit amet leo. Cras est nisl, aliquet quis lobortis sit amet, viverra non erat. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Integer euismod scelerisque quam, et aliquet nibh dignissim at. Pellentesque ut elit neque. Etiam facilisis nisl eu mauris luctus in consequat libero volutpat. Pellentesque auctor, justo in suscipit mollis, erat justo sollicitudin ipsum, in cursus erat ipsum id turpis. In tincidunt hendrerit scelerisque.
(Some words may have been omitted, but it's an example.)
I'd like the result to be sit amet and not sit and amet separately.
Any ideas on how to start?
Second, I need to wrap all the words or word sequences matched from a given list in a given file.
For this, I am thinking of ordering the result by descending length and then processing each string in a replace function, to avoid having sit amet wrapped if I have another sit word in my list.
Is that a good way to do it?
Thank you
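For the first part, counting word n-grams with a frequency counter is usually a reasonable starting point; a minimal sketch in Python (the lowercasing and whitespace splitting are simplifying assumptions to adapt):

from collections import Counter

def top_ngrams(text, n_max=2, k=10):
    words = text.lower().split()
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            counts[' '.join(words[i:i + n])] += 1
    return counts.most_common(k)

print(top_ngrams("lorem ipsum dolor sit amet consectetur sit amet"))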
| 0 | 1 | 0 | 0 | 0 | 0 |
I have an NP-hard problem. Imagine I have found some polynomial algorithm that finds ONLY one of the many existing solutions of that problem, but at least one solution (if any is present in the problem). Would that algorithm be considered a resolution of the NP=P question (if that algorithm were transformed into a mathematical proof)?
Thanks for answers
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm trying to understand the Audio* APIs for the iPhone.
Currently I'm reading the Core Audio Overview.
Here I have a question.
From Apple's example code:
- (void) calculateSizesFor: (Float64) seconds {
    UInt32 maxPacketSize;
    UInt32 propertySize = sizeof (maxPacketSize);

    AudioFileGetProperty (
        audioFileID,
        kAudioFilePropertyPacketSizeUpperBound,
        &propertySize,
        &maxPacketSize
    );

    static const int maxBufferSize = 0x10000;   // limit maximum size to 64K
    static const int minBufferSize = 0x4000;    // limit minimum size to 16K

    if (audioFormat.mFramesPerPacket) {
        Float64 numPacketsForTime =
            audioFormat.mSampleRate / audioFormat.mFramesPerPacket * seconds;
        [self setBufferByteSize: numPacketsForTime * maxPacketSize];
    } else {
        // if frames per packet is zero, then the codec doesn't know the
        // relationship between packets and time. Return a default buffer size
        [self setBufferByteSize:
            maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize];
    }

    // clamp buffer size to our specified range
    if (bufferByteSize > maxBufferSize && bufferByteSize > maxPacketSize) {
        [self setBufferByteSize: maxBufferSize];
    } else {
        if (bufferByteSize < minBufferSize) {
            [self setBufferByteSize: minBufferSize];
        }
    }

    [self setNumPacketsToRead: self.bufferByteSize / maxPacketSize];
}
I understood almost everything, but I just did not get WHY this:
Float64 numPacketsForTime = audioFormat.mSampleRate / audioFormat.mFramesPerPacket * seconds;
I thought something like this would work:
numPacketsForTime = seconds * packetsPerSecond
So, packetsPerSecond = audioFormat.mSampleRate / audioFormat.mFramesPerPacket?
Could you help me with the maths?
Thanks!
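That appears to be exactly the relationship being guessed at: mSampleRate is frames per second, so dividing by frames per packet gives packets per second, and multiplying by seconds gives the packet count. A quick numeric sketch (the 44100 Hz and 1024 frames-per-packet figures are assumed example values, not taken from the code above):

sample_rate = 44100.0        # frames per second (assumed example)
frames_per_packet = 1024.0   # assumed example value
seconds = 0.5

packets_per_second = sample_rate / frames_per_packet    # ~43.07
num_packets_for_time = packets_per_second * seconds     # ~21.5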
| 0 | 0 | 0 | 0 | 1 | 0 |
Given two random integer generators one that generates between 1 and 7 and another that generates between 1 and 5, how do you make a random integer generator that generates between 1 and 13? I have tried solving this question in various ways but I have not been able to come up with a solution that generates numbers from 1 to 13 with equal or near equal probability.
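One standard technique is rejection sampling: combine the two generators into a single value that is uniform over 35 outcomes, discard the outcomes that don't divide evenly into 13 groups, and map the rest. A sketch in Python, where rand7() and rand5() are stand-ins (assumed names) for the two given generators:

import random

def rand7():   # stand-in for the given 1..7 generator
    return random.randint(1, 7)

def rand5():   # stand-in for the given 1..5 generator
    return random.randint(1, 5)

def rand13():
    while True:
        v = (rand7() - 1) * 5 + (rand5() - 1)   # uniform over 0..34
        if v < 26:                              # keep 26 = 2*13 outcomes
            return v % 13 + 1                   # each value 1..13 is hit exactly twice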
| 0 | 0 | 0 | 0 | 1 | 0 |
I want to display a value like 1e10 as 10^10 in a JLabel, with the ^10 rendered in a smaller (superscript) size.
| 0 | 0 | 0 | 0 | 1 | 0 |
In the Communications of the ACM, August 2008 "Puzzled" column, Peter Winkler asked the following question:
On the table before us are 10 dots, and in our pocket are 10 $1 coins. Prove the coins can be placed on the table (no two overlapping) in such a way that all dots are covered. Figure 2 shows a valid placement of the coins for this particular set of dots; they are transparent so we can see them. The three coins at the bottom are not needed.
In the following issue, he presented his proof:
We had to show that any 10 dots on a table can be covered by non-overlapping $1 coins, in a problem devised by Naoki Inaba and sent to me by his friend, Hirokazu Iwasawa, both puzzle mavens in Japan.
The key is to note that packing disks arranged in a honeycomb pattern cover more than 90% of the plane. But how do we know they do? A disk of radius one fits inside a regular hexagon made up of six equilateral triangles of altitude one. Since each such triangle has area sqrt(3)/3, the hexagon itself has area 2*sqrt(3); since the hexagons tile the plane in a honeycomb pattern, the disks, each with area π, cover π/(2*sqrt(3)) ~ .9069 of the plane's surface.
It follows that if the disks are placed randomly on the plane, the probability that any particular point is covered is .9069. Therefore, if we randomly place lots of $1 coins (borrowed) on the table in a hexagonal pattern, on average, 9.069 of our 10 points will be covered, meaning at least some of the time all 10 will be covered. (We need at most only 10 coins so give back the rest.)
What does it mean that the disks cover 90.69% of the infinite plane? The easiest way to answer is to say, perhaps, that the percentage of any large square covered by the disks approaches this value as the square expands. What is "random" about the placement of the disks? One way to think it through is to fix any packing and any disk within it, then pick a point uniformly at random from the honeycomb hexagon containing the disk and move the disk so its center is at the chosen point.
I don't understand. Doesn't the probabilistic nature of this proof simply mean that in the majority of configurations, all 10 dots can be covered? Can't we still come up with a configuration involving 10 (or fewer) dots where one of the dots can't be covered?
| 0 | 0 | 0 | 0 | 1 | 0 |
I have been looking at various programming problems and algorithms in an effort to improve my programming and problem solving skills. But, I keep running into description like this one:
"Let A = [a1,a2,...,an] be a permutation of integers 1,2,...,n. A pair of indices (i,j), 1<=i<=j<=n, is an inversion of the permutation A if ai>aj. We are given integers n>0 and k>=0. What is the number of n-element permutations containing exactly k inversions?"
(SOURCE: http://www.spoj.pl/problems/PERMUT1/)
What kind of math do I need to study in order for this sort of problem description to make sense to me?
| 0 | 0 | 0 | 0 | 1 | 0 |
I have an int:
int f=1234; // or f=23, f=456 ...
I want to get:
float result=0.1234; // or 0.23, 0.456 ...
without using:
float result = parseFloat("0."+f);
What's the best way to do this?
Thanks
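One purely arithmetic approach is to count the decimal digits and divide by 10 raised to that count; a sketch (Python here only to show the idea):

def to_fraction(f):
    # count decimal digits arithmetically, then divide by 10**digits
    digits, n = 1, abs(f)
    while n >= 10:
        n //= 10
        digits += 1
    return f / 10 ** digits

print(to_fraction(1234), to_fraction(23), to_fraction(456))   # 0.1234 0.23 0.456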
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm trying to do some "complex" math where I need to call upon some of JavaScript's Math properties to solve the quadratic equation. Does the following method work?
root = Math.pow(inputb,2) - 4 * inputa * inputc;
root1 = (-inputb + Math.sqrt(root))/2*inputa;
root2 = (-inputb - Math.sqrt(root))/2*inputa;
Does this look correct?
For some reason, I'm not seeing correct results..
inputa, inputb, and inputc are all variables which store user-input from a text field by the way.
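One thing worth noting, and the reason the snippet above differs from the full code below (which uses (2*inputa)): division and multiplication associate left to right, so /2*inputa divides by 2 and then multiplies by inputa, rather than dividing by 2*inputa. A quick illustration of the precedence difference (Python shown, but JavaScript behaves the same way):

b, a = 8.0, 4.0
print(b / 2 * a)     # 16.0, i.e. (b / 2) * a
print(b / (2 * a))   # 1.0,  i.e. b / (2 * a)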
FULL CODE
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Quadratic Root Finder</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script>
window.onload = function() {
$('.error').hide();
document.getElementById('quadraticcalculate').onclick = function calculateQuad()
{
var inputa = document.getElementById('variablea').value;
var inputb = document.getElementById('variableb').value;
var inputc = document.getElementById('variablec').value;
inputa = new Number(inputa); // try to convert to number
if (isNaN(inputa)) { // use built in method to check for NaN
$('#quadraticaerror').show();
return;
}
inputb = new Number(inputb); // try to convert to number
if (isNaN(inputb)) { // use built in method to check for NaN
$('#quadraticberror').show();
return;
}
inputc = new Number(inputc); // try to convert to number
if (isNaN(inputc)) { // use built in method to check for NaN
$('#quadraticcerror').show();
return;
}
root = Math.pow(inputb,2) - 4 * inputa * inputc;
root1 = (-inputb + Math.sqrt(root))/(2*inputa);
root2 = (-inputb - Math.sqrt(root))/(2*inputa);
document.getElementById('root1').value = root1;
document.getElementById('root2').value = root2;
if(root<'0')
{
document.getElementById('root1').value = 'No real solution'
document.getElementById('root2').value = 'No real solution'
}
else {
if(root=='0')
{
document.getElementById('root1').value = root1
document.getElementById('root2').value = 'No Second Answer'
}
else {
document.getElementById('root1').value = root1
document.getElementById('root2').value = root1
}
}
};
document.getElementById('quadraticerase').onclick = function()
{
document.getElementById('quadraticform').reset();
$('.error').hide();
}
document.getElementById('cubicerase').onclick = function()
{
document.getElementById('cubicform').reset();
$('.error').hide();
}
}
</script>
<style>
div.#wrapper
{
text-align: center;
}
.error
{
color: #FF0000;
}</style>
</head>
<body>
<div id="wrapper">
<div id="quadratic">
<form id="quadraticform">
<h1>Quadratic</h1>
a:<input id="variablea" value="" type="text">
<br/>
b:<input id="variableb" value="" type="text">
<br />
c:<input id="variablec" value="" type="text">
<br />
<input id="quadraticcalculate" value="Calculate!" type="button">
<input id="quadraticerase" value="Clear" type="button">
<br />
<br />
Roots:
<br />
<input id="root1" type="text" readonly>
<br />
<input id="root2" type="text" readonly>
<p id="quadraticaerror" class="error">Error: Variable a is not a valid integer!</p>
<br />
<p id="quadraticberror" class="error">Error: Variable b is not a valid integer!</p>
<br />
<p id="quadraticcerror" class="error">Error: Variable c is not a valid integer!</p>
</form>
</div>
<div id="cubic">
<form id="cubicform">
<h1>Cubic</h1>
a:<input id="variablea" value="" type="text">
<br/>
b:<input id="variableb" value="" type="text">
<br />
c:<input id="variablec" value="" type="text">
<br />
d:<input id="variabled" value="" type="text">
<br />
<input id="cubiccalculate" value="Calculate!" type="button">
<input id="cubicerase" value="Clear" type="button">
<br />
<br />
Roots:
<br />
<input id="root1" type="text" readonly>
<br />
<input id="root2" type="text" readonly>
<p id="cubicaerror" class="error">Error: Variable a is not a valid integer!</p>
<br />
<p id="cubicberror" class="error">Error: Variable b is not a valid integer!</p>
<br />
<p id="cubiccerror" class="error">Error: Variable c is not a valid integer!</p>
<br />
<p id="cubicderror" class="error">Error: Variable d is not a valid integer!</p>
</form>
</div>
</div>
</body>
</html>
| 0 | 0 | 0 | 0 | 1 | 0 |
Lets suppose I am trying to analyze an algorithm and all I can do is run it with different inputs. I can construct a set of points (x,y) as (sample size, run time).
I would like to dynamically categorize the algorithm into a complexity class (linear, quadratic, exponential, logarithmic, etc..)
Ideally I could give an equation that more or less approximates the behavior.
I am just not sure what the best way to do this is.
For any degree polynomial I can create regression curves and come up with some measure of fitness, but I don't really have a clue how I would do that for any nonpolynomial function. It is harder since I don't have any previous knowledge of what shape I should try to fit.
This may be more of a math question than a programming question, but it is very interesting to me. I'm not a mathematician, so there may be a simpler established method to get a reasonable function from a set of points that I just don't know about. Does anyone have any ideas for solving a problem like this? Is there a numerical library for C# that could help me crunch the numbers?
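For the polynomial and exponential cases, a common trick is fitting a straight line in transformed coordinates: a power law n^k is linear in log-log space (the slope estimates k), while an exponential is linear in semi-log space, so whichever transform fits better suggests the family. A sketch in Python with NumPy, used only to illustrate the idea; in C#, a library such as Math.NET Numerics could fill the same numerical role:

import numpy as np

def estimate_power_law(sizes, times):
    # Fit log(t) = k*log(n) + c; the slope k estimates the polynomial degree.
    k, c = np.polyfit(np.log(sizes), np.log(times), 1)
    return k

sizes = np.array([100, 200, 400, 800, 1600])
times = 3e-7 * sizes**2        # pretend measurements of a quadratic algorithm
print(estimate_power_law(sizes, times))   # ~2.0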
| 0 | 0 | 0 | 0 | 1 | 0 |
I am trying to write an x86 emulator in JavaScript for education purposes.
I have already written a compiler and at the moment I am trying to write the x86 emulator in JavaScript.
I have however a problem with the DIV instruction. According to http://siyobik.info/index.php?module=x86&id=72 DIV takes a 64bit number as input by interpreting EDX as the higher 32bit and EAX as the lower 32bit. It then divides this number by the DIV parameter and puts the result (if it is not greater than 0xFFFFFFFF) into EAX and the remainder into EDX.
As JavaScript does not support 64bit integer I need to apply some tricks. But so far I did not come up with something useful.
Does someone here have an idea how to implement that correctly with 32bit arithmetic?
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm currently in the process of writing a function to find an "exact" bounding-sphere for a set of points in 3D space. I think I have a decent understanding of the process so far, but I've gotten stuck.
Here's what I'm working with:
A) Points in 3D space
B) 3x3 covariance matrix stored in a 4x4 matrix class (referenced by cells m0,m1,m2,m3,m4,ect; instead of rows and cols)
I've found the 3 eigenvalues for the covariance matrix of the points, and I've set up a function to convert a matrix to reduced row echelon form (rref) via Gaussian elimination.
I've tested both of those functions against figures in examples I've found online, and they appear to be working correctly.
The next step is to find the eigenvectors using the equation:
(M - λ*I)*V = 0
... where M is the covariance matrix, λ is one of the eigenvalues, I is the identity matrix, and V is the eigenvector.
However, I don't seem to be constructing the 4x3 matrix correctly before rref'ing it, as the far right column where the eigenvector components should be calculated are 0 before and after running rref. I understand why they are zero after (without any constants, the simplest solution to a linear system of equations is all coefficients of zero), but I'm at a loss as to what to put there.
Here's the function so far:
Vect eigenVector(const Matrix &M, const float eval) {
    Matrix A = Matrix(M);
    A -= Matrix(IDENTITY) * eval;
    A.rref();
    return Vect(A[m3], A[m7], A[m11]);
}
The 3x3 covariance matrix is passed as M, and the eigenvalue as eval. Matrix(IDENTITY) returns an identity matrix. m3,m7, and m11 correspond to the far-right column of a 4x3 matrix.
Here's the example 3x3 matrix (stored in a 4x4 matrix class) I'm using to test the functions:
Matrix(1.5f, 0.5f, 0.75f, 0,
0.5f, 0.5f, 0.25f, 0,
0.75f, 0.25f, 0.5f, 0,
0, 0, 0, 0);
I'm correctly (?) getting the eigenvalues of 2.097, 0.3055, 0.09756 from my other function.
eigenVector() above correctly subtracts the passed eigenvalue from the diagonal (0,0 1,1 2,2)
Matrix A after rref():
[(1, 0, 0, -0),
(-0, 1, 0, -0),
(-0, -0, 1, -0),
(0, 0, 0, -2.09694)]
For the rref() function, I'm using a translated python function found here:
http://elonen.iki.fi/code/misc-notes/python-gaussj/index.html
What should the matrix I pass to rref() look like to get an eigenvector out?
Thanks
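For cross-checking the hand computation, a numerical library result can be useful. A sketch in Python with NumPy, using the covariance matrix given above (purely a sanity check, not part of the C++ code):

import numpy as np

M = np.array([[1.5,  0.5,  0.75],
              [0.5,  0.5,  0.25],
              [0.75, 0.25, 0.5 ]])

vals, vecs = np.linalg.eigh(M)   # eigh: suitable for symmetric matrices
print(vals)                      # ~0.0976, 0.3055, 2.097, matching the question
print(vecs)                      # columns are the corresponding unit eigenvectors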
| 0 | 0 | 0 | 0 | 1 | 0 |
I was reading somewhere that:
The smallest integer larger than lg N is the number of bits required to represent N in binary, in the same way that the smallest integer larger than log10 N is the number of digits required to represent N in decimal.
The Java statement
for (lgN = 0; N > 0; lgN++, N /= 2) ;
is a simple way to compute the
smallest integer larger than lg N
I maybe missing something here but how does the Java statement calculate the smallest integer larger than lg N?
| 0 | 0 | 0 | 0 | 1 | 0 |
Here is my requirement. I want to tokenize and tag a paragraph in such a way that it allows me to achieve following stuffs.
Should identify date and time in the paragraph and Tag them as DATE and TIME
Should identify known phrases in the paragraph and Tag them as CUSTOM
And the rest of the content should be tokenized by nltk's default word_tokenize and pos_tag functions.
For example, the following sentence
"They all like to go there on 5th November 2010, but I am not interested."
should be tagged and tokenized as follows, given that the custom phrase is "I am not interested".
[('They', 'PRP'), ('all', 'VBP'), ('like', 'IN'), ('to', 'TO'), ('go', 'VB'),
('there', 'RB'), ('on', 'IN'), ('5th November 2010', 'DATE'), (',', ','),
('but', 'CC'), ('I am not interested', 'CUSTOM'), ('.', '.')]
Any suggestions would be useful.
| 0 | 1 | 0 | 0 | 0 | 0 |
I'm trying to solve the following real-life problem you might have encountered yourselves:
You had dinner with some friends and you all agreed to split the bill evenly. Except that when the bill finally arrives, you find out not everyone has enough cash on them (if any, cheap bastards).
So, some of you pays more than others... Afterwards you come home and try to decide "who owes who what amount?".
This, I'm trying to solve algorithmically and fairly :)
It seems so easy at first, but I'm getting stuck with rounding and whatnot; I feel like a total loser ;)
Any ideas on how to tackle this?
EDIT: Some python code to show my confusion
>>> amounts_paid = [100, 25, 30]
>>> total = sum(amounts_paid)
>>> correct_amount = total / float(len(amounts_paid))
>>> correct_amount
51.666666666666664
>>> diffs = [amnt-correct_amount for amnt in amounts_paid]
>>> diffs
[48.333333333333336, -26.666666666666664, -21.666666666666664]
>>> sum(diffs)
7.1054273576010019e-015
Theoretically, the sum of the differences should be zero, right?
for another example it works :)
>>> amounts_paid = [100, 50, 150]
>>> total = sum(amounts_paid)
>>> correct_amount = total / float(len(amounts_paid))
>>> correct_amount
100.0
>>> diffs = [amnt-correct_amount for amnt in amounts_paid]
>>> diffs
[0.0, -50.0, 50.0]
>>> sum(diffs)
0.0
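The tiny non-zero sum comes from binary floating point, not from the splitting logic; doing the arithmetic in integer cents (or with the decimal/fractions modules) sidesteps it. A quick sketch in cents:

amounts_paid_cents = [10000, 2500, 3000]
total = sum(amounts_paid_cents)
share, remainder = divmod(total, len(amounts_paid_cents))
# shares sum exactly to the total: the first `remainder` people owe one extra cent
shares = [share + (1 if i < remainder else 0) for i in range(len(amounts_paid_cents))]
diffs = [paid - owed for paid, owed in zip(amounts_paid_cents, shares)]
print(shares, diffs, sum(diffs))   # [5167, 5167, 5166] [4833, -2667, -2166] 0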
| 0 | 0 | 0 | 0 | 1 | 0 |
I saw somewhere that having a one-to-one function from set X to set Y means that we have an onto function from Y to X. I can't understand it!! Can someone explain?
| 0 | 0 | 0 | 0 | 1 | 0 |
In 3D Studio Max, I recall there is a function (I couldn't recall its name, sorry) to cut a 3D model into two. For instance, you create a sphere, then you cut it in the middle, leaving 2 half spheres. Now, how about a human body? Imagine a samurai cutting up an enemy at the stomach; now that enemy's model will become 2 models: one is the upper part from head to stomach and another is the lower part from stomach to feet. Also, in order to prevent the player from seeing inside the cut model, an extra polygon will also have to be placed into both models (and preferably with a blood texture on this polygon to give the impression that this is the blood inside the body). Sorry if my explanation is poor; do you understand what I am trying to say?
In 3D Studio Max, we can do the above. Now, 3D programming-wise, what do I have to do if I want to achieve something like this? Sure, one cheap method is to have the cut models pre-made, but this is not realistic.
I am looking for a way to cut a model into two with respect to angle of cut in real time. Is this possible?
| 0 | 0 | 0 | 0 | 1 | 0 |
Does anyone know of such a function in JavaScript?
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm working on a project where I need to sort a list of user-submitted articles by their popularity (last week, last month and last year).
I've been mulling on this for a while, but I'm not a great statistician, so I figured I could maybe get some input here.
Here are the variables available:
Time [date] the article was originally published
Time [date] the article was recommended by editors (if it has been)
Amount of votes the article has received from users (total, in the last week, in the last month, in the last year)
Number of times the article has been viewed (total, in the last week, in the last month, in the last year)
Number of times the article has been downloaded by users (total, in the last week, in the last month, in the last year)
Comments on the article (total, in the last week, in the last month, in the last year)
Number of times a user has saved the article to their reading-list (Total, in the last week, in the last month, in the last year)
Number of times the article has been featured on a kind of "best we've got to offer" (editorial) list (Total, in the last week, in the last month, in the last year)
Time [date] the article was dubbed 'article of the week' (if it has been)
Right now I'm doing some weighting on each variable, and dividing by the times it has been read. That's pretty much all I could come up with after reading up on Weighted Means. My biggest problem is that there are some user-articles that are always on the top of the popular-list. Probably because the author is "cheating".
I'm thinking of emphasizing the importance of the article being relatively new, but I don't want to "punish" articles that are genuinely popular just because they're a bit old.
Anyone with a more statistically adept mind than mine willing to help me out?
Thanks!
| 0 | 0 | 0 | 0 | 1 | 0 |
Does anybody know where I can find documentation on how to write annotation schemas for Callisto? I'm looking to write something a little more complicated than I can generate from a DTD -- that only gives me the ability to tag different kinds of text mentions. I'm looking to create a schema that represents a single type of relationship between five or six different kinds of textual mentions (and some of these types of mentions have attributes that I need to assign values to), and possibly having a second type of relationship between the first two instances of the first type of relationship.
(Alternatively, does anybody know of any software that would be better for this kind of schema? I've been looking at WordFreak, but it's a little clumsy, and it doesn't support attributes on its textual mentions.)
| 0 | 1 | 0 | 0 | 0 | 0 |
I have a set of variables X, Y, ..., Z. My job is to design a function that takes this set of variables and yields an integer. I have a fitness function to test this against.
My first stab at the problem is to assume that I can model f to be a linear function:
f(X, Y, ..., Z) -> aX + bY + ... + cZ
My first idea was to use either PSO (Particle Swarm Optimization) or Genetic Algorithms to solve f for a, b, ..., c, and I am sure they'd yield good results.
On the other hand, I feel like maybe that kind of evolutionary algorithm isn't really needed. First of all, I can think of a couple of good "starting points" for a, b, ..., c. Since f is a linear function, shouldn't it be easier to just try out a couple of points and then do something like a linear regression on them? And after the linear regression, trying out a couple more points, this time closer to what looks like a good "spot", making again a linear regression on them?
What are the downsides of it? Anyone with experience in these kind of problems? The biggest one I can think of is that maybe what I consider good starting values for a,b, .., c may be a "local optima", and having some kind of evolutionary algorithm would yield me a global one.
f is supposed to be an approximation function for the Minimax algorithm of a Chess-like game, if that matters.
Thanks
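If the linear-model route is taken, an ordinary least-squares fit of the coefficients against recorded (features, fitness) samples is essentially a one-liner in most numerical libraries. A sketch in Python with NumPy; the feature matrix and scores below are made-up placeholders:

import numpy as np

features = np.random.rand(200, 3)        # rows: samples of (X, Y, Z)
true_coeffs = np.array([2.0, -1.0, 0.5])
scores = features @ true_coeffs          # stand-in for the fitness signal

coeffs, *_ = np.linalg.lstsq(features, scores, rcond=None)
print(coeffs)                            # ~[2.0, -1.0, 0.5]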
| 0 | 1 | 0 | 0 | 0 | 0 |
I am currently implementing something quite similar to checkers. So, I have this table game and there are both white and black pieces; squares with neither a white nor a black piece are simply empty.
I'm currently doing the GetValidMoves() method that'll return all the current moves one can do with the current board.
I am thus wondering what might be the best way to represent the board. The naive approach would be to have a matrix with 0's 1's and 2's (for no piece, white piece and black piece).
Another idea would be, instead of a matrix representation of the board, to have 2 lists (or any other data structure): one for black pieces, the other for white.
I am implementing this game to test some AI algorithms, so my main concern is speed. I will basically put 2 AI players playing each other, for each turn each player should have a list of all his valid moves and then he'll choose what move to do, this always happening until the game ends(some player wins or there's a tie).
PS: I am not asking about the AI algorithm, I just want to know what would be the best data-structure to handle the board, so that it makes it easy to
Look for all the valid moves for the current player
Do a move
Verify the game is not over (it is over when one player lost all his pieces or one player reached the other side of the board).
| 0 | 1 | 0 | 0 | 0 | 0 |
Even though I consider myself one of the better programmers on my CompSci course, I am fascinated by people who are really good at math. I have to say whenever I had a math-type assignment or exam my approach has been very formulaic, i.e. if I encounter a problem that looks like A I must use method B and the result should look like C, or else I made a mistake. I only really know how to solve the problems I revised.
I'd really like to devote some time this summer to understand mathematical problems and their solutions better in order to dive deeper into fields of algorithmics and computational complexity.
Any tips?
| 0 | 0 | 0 | 0 | 1 | 0 |
For a problem that I'm working on right now, I would like a reasonably uniform random choice from the powerset of a given set. Unfortunately this runs right into statistics which is something that I've not studied at all (something that I need to correct now that I'm getting into real programming) so I wanted to run my solution past some people that know it.
If the given set has size n, then there are (n k) = n!/[k!(n-k)!] subsets of size k, and the total size N of the powerset is given as the sum of (n k) over k from 0 to n (also given as 2^n, but I don't think that's of use here; I could obviously be wrong).
So my plan is to partition [0, 1] into the intervals:
[0, (n 0)/N]
((n 0)/N, [(n 0) + (n 1)]/N]
([(n 0) + (n 1)]/N, [(n 0) + (n 1) + (n 2)]/N]
...
([N - (n n)]/N, 1]
Algorithmically, the intervals are constructed by taking the greatest element of the previous interval for the greatest lower bound of the new interval adding (n j)/N to it to obtain the greatest element. I hope that's clear.
I can then figure out how many elements are in the random subset by choosing a uniform float in [0, 1] and mapping it to the index of the interval that it belongs to. From there, I can choose a random subset of the appropriate size.
I'm pretty sure (from a merely intuitive perspective) that my scheme provides a uniform choice on the size of the subset (uniform relative to the total amount of subsets. It's plainly not uniform on the set {1, 2, .., n} of sizes).
I'm using a library (python's random.sample) to get the subset of the given size so I'm confident that that will be uniform.
So my question is if putting the two together in the way I'm describing makes the choice of random subset of random size uniform. If the answer is a lot of work, then I'm happy to accept pointers as to how this might be proven and do the work for myself. Also, if there's a better way to do this, then I would of course be happy with that.
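For reference, a uniform draw from the powerset can also be produced directly by flipping one fair coin per element, which may be useful as a baseline to compare the two-step scheme against; a sketch:

import random

def random_subset(items):
    # Each element is included independently with probability 1/2, which
    # makes every one of the 2**n subsets equally likely.
    return [x for x in items if random.random() < 0.5]

print(random_subset(['a', 'b', 'c', 'd']))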
| 0 | 0 | 0 | 0 | 1 | 0 |
Is Minimax's evaluation function a heuristic function?
| 0 | 1 | 0 | 0 | 0 | 0 |
I want to create an array that sends a pixel in a specific direction.
It has four options: Forward, Backward, Left, Right
Each direction has an associated score that is its probability of sending the pixel in a specific direction.
Example:
Direction[Forward] = 20;
Direction[Backward] = 12;
Direction[Left] = -5;
Direction[Right] = 2;
How can I make the number for each key in the direction array equal the probability of moving the pixel? I want the negative number to play into the probability albeit a small chance.
Currently I copy each direction key into a new array as many times as its probability score, then generate a random number against the new array, but that doesn't work on negative numbers, so I change them to +1 in advance. Obviously this doesn't scale to negative numbers.
How can I create a probability for each array key regardless if it's a positive or negative number?
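One common way to turn scores that may be negative into a probability distribution is the softmax transform (another is shifting everything up by the minimum); this is a suggested technique, not something taken from the question. A sketch:

import math, random

def weighted_choice(scores):
    # Softmax: exponentiating makes every weight positive while preserving order,
    # so negative scores still get a small but non-zero probability.
    weights = {k: math.exp(v / 10.0) for k, v in scores.items()}   # /10 tempers the spread
    total = sum(weights.values())
    r = random.uniform(0, total)
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k

direction = {'Forward': 20, 'Backward': 12, 'Left': -5, 'Right': 2}
print(weighted_choice(direction))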
| 0 | 0 | 0 | 0 | 1 | 0 |
I am writing a generic hash map in C++ which uses chaining to deal with collisions.
Say if I have a hash map with 11 buckets, and I insert 8 items. The hash function will distribute it as follows:
bucket[0] = empty
bucket[1] = 2 elements
bucket[2] = empty
bucket[3] = 1 element
bucket[4] = 1 element
bucket[5] = 3 elements
bucket[6] = empty
bucket[7] = 1 element
bucket[8] = empty
bucket[9] = empty
bucket[10] = empty
Calculating the spread over the buckets is 5/8 = 0.625.
But how do I calculate the spread taking the depth of the buckets into account?
I want to know this because:
Say I added 20 elements, every bucket has 1 element, and the last bucket has 11 elements;
then the spread would be 1 if I calculate it the easy way, but this is obviously not correct! (The table resizes to avoid this of course, but I want to be able to show the spread.) I want to use this information to be able to tune hash functions.
Thanks in advance!
| 0 | 0 | 0 | 0 | 1 | 0 |
I'm implementing a readability test and have implemented a simple algorithm for detecting syllables.
I detect sequences of vowels and count them in words; for example, the word "should" contains one sequence of vowels, which is 'ou'. Before counting them I remove suffixes like -les, -e, -ed (for example, the word "like" contains one syllable but two sequences of vowels, so this method works).
But...
Consider these words / sequences:
x-ray (it contains two syllables)
I'm (One syllable, maybe I may use removal of all apostrophes in the text?)
goin'
I'd've
n' (for example Pork n' Beans)
3rd (how to treat this ?)
12345
What should I do with special characters? Remove them all? That will be OK for most words, but not for "n'" and "x-ray". And how should digits be treated?
These are special cases of words but I'll be very glad to see some experience or ideas in this subject.
| 0 | 1 | 0 | 0 | 0 | 0 |
I want to colorize the words in a text according to their classification (category/declination etc). I have a fully working dictionary, but the problem is that there is a lot of ambiguity. foedere, for instance, can be forms of either the verb "fornicate" or the noun "treaty".
What are the general strategies for resolving these ambiguities or generating good guesses?
Thanks!
| 0 | 1 | 0 | 0 | 0 | 0 |
I'm looking for a way to round a set of numbers to a fixed number of decimal places while still preserving the total of the set.
I want to distribute the total value of '1' amongst a variable number of fields without endless repeating decimals.
So say I want to distribute the total across three fields. I don't want each value to be 0.3333333333333333...; I would prefer 0.33, 0.33 and 0.34.
I would like to implement this using Jquery/javascript. I have a form where fields are added dynamically. By default the total is evenly distributed amongst each field, however my problem is further complicated because often the value will not be shared equally amongst all fields. Some values might be more heavily weighted. So if I were to change one of the three values to 0.5, the other two values would be adjusted to 0.25.
Can anyone help with a suitable algorithm?
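One standard technique is the largest-remainder method: round everything down to the chosen precision, then hand the leftover hundredths to the entries with the largest remainders so the total is preserved exactly. A sketch of the equal-weight case in hundredths (Python here; the same integer arithmetic ports directly to JavaScript):

def split_total(total_hundredths, n):
    # e.g. total 100 (i.e. 1.00) over 3 fields -> [34, 33, 33] -> 0.34, 0.33, 0.33
    base, remainder = divmod(total_hundredths, n)
    return [base + (1 if i < remainder else 0) for i in range(n)]

print([v / 100 for v in split_total(100, 3)])   # [0.34, 0.33, 0.33]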
| 0 | 0 | 0 | 0 | 1 | 0 |
Now I need to create a data-mining task of my own. I have already talked to some people; the most popular ideas were price prediction or sport result prediction, which I think plenty of people are already implementing.
So could anyone give me some real-life ideas where you have found data mining to be of use, like predicting what a customer would like to buy based on what they have already purchased in a supermarket?
Any idea would be welcomed; thanks in advance.
| 0 | 1 | 0 | 0 | 0 | 0 |
I want to recalculate x and y in a centered layout relative to the current viewer's resolution.
I have two numbers a set of coordinates x and y.
x=140
y=80
x and y was recorded in a resolution sessionWidth, sessionHeight
sessionWidth = 1024
sessionHeight = 400
Want to recalculate x and y so that they are relative to the viewers resolution.
currentViewWidth = 1280
currentViewHeight = 500
So I want to plot x,y for a lot of coordinates (with different sessionWidth and sessionHeight) but want to normalize to the currentViewWidth and currentViewHeight.
currentViewWidth and currentViewHeight are constant.
How on earth do I do that? Do you have a formula I can use?
Thanks a million.
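The usual formula is a simple proportional rescale of each axis; a sketch using the numbers above:

def rescale(x, y, session_w, session_h, view_w, view_h):
    # scale each axis by the ratio of the viewer's resolution to the recording resolution
    return x * view_w / session_w, y * view_h / session_h

print(rescale(140, 80, 1024, 400, 1280, 500))   # (175.0, 100.0)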
| 0 | 0 | 0 | 0 | 1 | 0 |
Could anyone point me in the right direction on this. I have 20+ weeks to design and code a Texas Hold'em Poker Game in Java for an Android phone for a University Project. It should include AI opponents that play with you or just against themselves. They should be able to learn what strategies work best over a period of time, conservative, bluffing etc. I am in my final year and I don't want to make this too complex, I just need a simple, easy but effective AI system that can be played on a small device and be reasonably challenging. I have looked at a lot of theories and articles written about the possibilities that are available (including the University of Alberta), but I don't intend to have a world beating AI, just a simple one. And once that's out the way I can concentrate on the game gui quicker :). Any ideas?
Thanks in advance for any feedback!
| 0 | 1 | 0 | 0 | 0 | 0 |
I'm attempting to design an algorithm that does the following.
Input:
I've a set of keys (total n) that are mapped to set of properties. The properties contain the weight for each property and the value for the property.
Output:
Identify a set of keys that are qualified (total k) based on the set of properties and their respective weights and values.
Additionally, the data should be modified as such in every cycle of choosing winners such that the chances of someone who was not chosen goes up in the next cycle (whereas the chances of someone who has won would be as if they are completely new in the system).
Hopefully the issue at hand is clear. Basically, the value of the property and the respective weight would determine which keys are more likely to win (a higher value with a higher weight would increase the probability of that key winning) and we will eventually end up choosing everyone.
Any input on how this can be done would be greatly appreciated.
Thanks! - Azeem
| 0 | 0 | 0 | 0 | 1 | 0 |
I would be very glad if someone could make clear for me the example mentioned on Wikipedia:
http://en.wikipedia.org/wiki/Earley_algorithm
consider grammar:
P → S # the start rule
S → S + M | M
M → M * T | T
T → number
and input:
2 + 3 * 4
Earley algorithm works like this:
(state no.) Production (Origin) # Comment
---------------------------------
== S(0): • 2 + 3 * 4 ==
(1) P → • S (0) # start rule
(2) S → • S + M (0) # predict from (1)
(3) S → • M (0) # predict from (1)
(4) M → • M * T (0) # predict from (3)
This is only the first set, S(0), but my question is:
why does the algorithm predict from (3) in step (4)
but omit a prediction from (2)?
I hope somebody understands the idea and can help me.
| 0 | 1 | 0 | 0 | 0 | 0 |
I am trying to figure out a way to move two points, X and Y, independently of one another along the edges of an equilateral triangle with vertices A, B, and C. There are also some collision rules that need to be taken into account:
(1) If X is at a vertex, say vertex A, then Y cannot be on A or on the edges adjacent to it. i.e., Y can only be on vertices B or C or the edge BC.
(2) If X is on an edge, say AB, then Y cannot be on A, nor B, nor any of the edges adjacent to A and B. i.e., Y must be on vertex C
I have figured out how to move the two points along the triangle using a pair of sliders, but I can't figure out how to implement the collision rules. I tried using the Exclusions option for Slider but the results are not what I expect. I would prefer to drag the points along the triangle rather than using sliders, so if someone knows how to do that instead it would be helpful. Ideally, I would be able to
move the two points from a vertex to either one of the edges instead of coming to a stop at one of them. Here is my code so far.
MyTriangle[t_] :=
Piecewise[{{{-1, 0} + (t/100) {1, Sqrt[3]},
100 > t >= 0}, {{0, Sqrt[3]} + (t/100 - 1) {1, -Sqrt[3]},
200 > t >= 100},
{{1, 0} + (t/100 - 2) {-2, 0}, 300 >= t >= 0}}]
excluded[x_] := \[Piecewise] {
{Range[0, 99]~Join~Range[201, 299], x == 0},
{Range[0, 199], x == 100},
{Range[101, 299], x == 200},
{Range[0, 199]~Join~Range[201, 299], 0 < x < 100},
{Range[1, 299], 100 < x < 200},
{Range[0, 99]~Join~Range[101, 299], 200 < x < 300}
}
{Dynamic[t], Dynamic[x]}
{Slider[Dynamic[t], {0, 299, 1}, Exclusions -> Dynamic[excluded[x]]], Dynamic[t]}
{Slider[Dynamic[x], {0, 299, 1}, Exclusions -> Dynamic[excluded[t]]], Dynamic[x]}
Dynamic[Graphics[{PointSize[Large], Point[MyTriangle[t]],
Point[MyTriangle[x]],
Line[{{-1, 0}, {1, 0}, {0, Sqrt[3]}, {-1, 0}}]},
PlotRange -> {{-1.2, 4.2}, {-.2, 2}}]]
| 0 | 0 | 0 | 0 | 1 | 0 |
Shouldn't these two math expressions give the same answer? Brackets/parentheses are evaluated first, right? So it should add them all, then divide by 2, then subtract 10. The second one below is the one giving me the correct value that I need; the other one gives a value that's a long way off.
var pleft = $(this).offset().left + ($(this).width() /2) - ($("#question-wrapper").width() / 2) - 10;
var pleft = (($(this).offset().left + $(this).width() + $("#question-wrapper").width()) / 2) - 10;
| 0 | 0 | 0 | 0 | 1 | 0 |
I have a filter class wherein the user must declare the type (e.g. Filter<Double>, Filter<Float> etc). The class then implements a moving average filter so objects within the class must be added. My question is how to do this? I'm sorry if the answer is simple but I've muddled myself up by thinking about it too much I think :p.
public abstract class FilterData<T>
{
    private final List<T> mFilter;
    private T mFilteredValue;   // current filtered value
    protected Integer mSize = 10;
    private T mUnfilteredValue; // current unfiltered value

    public FilterData()
    {
        mFilter = new ArrayList<T>();
    }

    public FilterData(int size)
    {
        mSize = size;
        mFilter = new ArrayList<T>(mSize);
    }

    public abstract T add(final T pFirstValue, final T pSecondValue);

    @SuppressWarnings("unchecked")
    public T filter(T currentVal)
    {
        T filteredVal;
        mUnfilteredValue = currentVal;
        push(currentVal);
        T totalVal = (T) (new Integer(0));
        int numNonZeros = 1;
        for (int i = 0; i < mFilter.size(); ++i)
        {
            if (mFilter.get(i) != (T) (new Integer(0)))
            {
                ++numNonZeros;
                T totalValDouble = add(mFilter.get(i), totalVal);
                totalVal = totalValDouble;
            }
        }
        Double filteredValDouble = (Double) totalVal / new Double(numNonZeros);
        filteredVal = (T) filteredValDouble;
        mFilteredValue = filteredVal;
        return filteredVal;
    }

    public T getFilteredValue()
    {
        return mFilteredValue;
    }

    public List<T> getFilterStream()
    {
        return mFilter;
    }

    public T getUnfilteredValue()
    {
        return mUnfilteredValue;
    }

    public void push(T currentVal)
    {
        mFilter.add(0, currentVal);
        if (mFilter.size() > mSize)
            mFilter.remove(mFilter.size() - 1);
    }

    public void resizeFilter(int newSize)
    {
        if (mSize > newSize)
        {
            int numItemsToRemove = mSize - newSize;
            for (int i = 0; i < numItemsToRemove; ++i)
            {
                mFilter.remove(mFilter.size() - 1);
            }
        }
    }
}
Am I right to include the abstract Add method and if so, how should I extend the class correctly to cover primitive types (e.g. Float, Double, Integer etc.)
Thanks
Chris
EDIT:
Apologies for being unclear. This is not homework I'm afraid, those days are long behind me. I'm quite new to Java having come from a C++ background (hence the expectation of easy operator overloading). As for the "push" method. I apologise for the add method in there, that is simply add a value to a list, not the variable addition I was referring to (made a note to change the name of my method then!). The class is used to provide an interface to construct a List of a specified length, populate it with variables and obtain an average over the last 'x' frames to iron out any spikes in the data. When a new item is added to the FilterData object, it is added to the beginning of the List and the last object is removed (provided the List has reached the maximum allowed size). So, to provide a continual moving average, I must summate and divide the values in the List.
However, to perform this addition, I will have to find a way to add the objects together. (It is merely a helper class so I want to make it as generic as possible). Does that make it any clearer? (I'm aware the code is very Mickey Mouse but I wanted to make it as clear and simple as possible).
| 0 | 0 | 0 | 0 | 1 | 0 |
Today I had my algorithms quiz for the semester and I can't figure out these two questions and they've been bugging me all day. I've gone through my notes and the lecture notes and I'm still unsure. I would appreciate it if someone could take a look and provide some insight into these questions. These are not homework and I've already sat the quiz.
True or False questions
1) [Paraphrased] The maximum number of edges in a bipartite graph with n vertices is n(n-1)/2.
I put this down as False; my logic is that n vertices means we have two sets of n/2 vertices. Each vertex in the first set has n/2 connections to the second set, the next vertex has n/2 connections to the second set... etc...
Hence, I calculated the maximum number of edges in a bipartite graph with n vertices to be (n^2/4).
2) [Paraphrased] Is it possible to take a cut, that is not necessarily the minimum s-t cut in a graph with directed flows (Ford–Fulkerson algorithm) such that the flow capacity is greater than the s-t cut capacity?
I put down false, but I don't understand the question... Is it possible to take an s-t cut such that the flow capacity is greater? I know the weak duality theorem and 'max flow = min cut' so I put down false, but I have no idea.
Short answer question:
1) Explain an efficient way to test whether a graph is connected.
I suggested doing a breadth-first search: if there were nodes in the graph that were not found by the BFS, then the graph was not connected. I wrote down that the running time was O(m+n), hence it was an efficient algorithm to use. It was worth two marks and it was the final question, but I'm now worried it was a trick question.
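For reference, a minimal sketch of that check in Python (assuming an adjacency-list graph where every vertex appears as a key):
from collections import deque

def is_connected(adj):
    # adj: dict mapping each vertex to a list of neighbours
    if not adj:
        return True
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)   # O(n + m): did BFS reach every vertex?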
2) In the graph:
List the sets of vertices which demonstrate minimum vertex cover [paraphrased]
My answer was {A, D}, {A, E}, {B, C}, {B, D}, {C, E}, but now I'm worried it was just {A}, {B}, {C}, {D}, {E}...
Thanks for taking the time to read! :)
| 0
| 0
| 0
| 0
| 1
| 0
|
I did a course at university that explained how (amongst other things) to order your mathematical execution to maximize precision and reduce the risk of rounding errors in a finite precision environment.
We are working on a financial system with your usual interest calculations and such. Can somebody please share/remind me how to structure your calculations so as to minimize loss of precision?
I know that, for instance, division must be avoided. Also, when you do divide, divide the largest number first, if possible.
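As a small, hedged illustration of why ordering matters (numbers chosen only to make the effect visible): adding many small terms to an already huge accumulator loses them, while accumulating the small terms first keeps them.
big = 1e16
small_terms = [1.0] * 1000

naive = big
for t in small_terms:
    naive += t                        # each 1.0 is swallowed by the 1e16 accumulator

careful = sum(small_terms) + big      # accumulate the small terms first

print(naive - big, careful - big)     # 0.0 vs 1000.0 on IEEE doubles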
| 0
| 0
| 0
| 0
| 1
| 0
|
Given a starting point (origLat, origLon), ending point (destLat, destlon), and a % of trip completed. How do I calculate the current position (curLat, curLon)?
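A minimal sketch, assuming plain linear interpolation is acceptable (fine for short trips; a long route would really need great-circle interpolation, and longitude wrap-around near ±180° is ignored here):
def current_position(orig_lat, orig_lon, dest_lat, dest_lon, fraction):
    # fraction: portion of the trip completed, 0.0 .. 1.0 (e.g. 0.25 for 25%)
    cur_lat = orig_lat + (dest_lat - orig_lat) * fraction
    cur_lon = orig_lon + (dest_lon - orig_lon) * fraction
    return cur_lat, cur_lon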
| 0
| 0
| 0
| 0
| 1
| 0
|
Here is a stumper for you math geeks out there.
I have a python list that is just a sequence that looks like this:
myList=[1,2,3,4,5,6,7,8,9,10,11,12,13,(...etc...),43]
Unfortunately, the data from which the list was generated was not zero-padded, and it should have been. So in reality:
1==1
2==10
3==11
4==12
5==13
6==14
7==15
8==16
9==17
10==18
11==19
12==2
13==20
14==21
etc. until
34==4
35==40
36==41
37==42
38==43
39==5
40==6
41==7
42==8
43==9
Is there a way that I can remap this list based on the pattern described above? Keep in mind that the list I expect can range from 10 to 90 items.
Thanks.
edit for clarification:
The list is derived from an XML file with a list of nodes in order:
<page>1</page>
<page>2</page>
etc...
The process that produced the XML used some input data that SHOULD have been zero-padded, but was not. So as a result, what is listed in the XML file as 2 should be interpreted as 10. I hope that helps.
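If the corruption really is "the values were ordered as strings instead of numbers", one possible fix is to rebuild that lexicographic order and look each value up by position; a sketch, assuming the list always covers 1..n:
def remap(my_list, n=None):
    n = n or len(my_list)
    # the order the broken process actually used: 1, 10, 11, ..., 19, 2, 20, ...
    lexicographic = sorted(range(1, n + 1), key=str)
    # value k in the list really meant the k-th entry of that order
    return [lexicographic[k - 1] for k in my_list]

# with n = 43: 1 -> 1, 2 -> 10, 3 -> 11, ..., 12 -> 2, 13 -> 20, ..., 43 -> 9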
| 0
| 0
| 0
| 0
| 1
| 0
|
I like doing part-time research in reinforcement learning. In recent years (up to 2009) there was a reinforcement learning competition held at rl-competition.org with some very interesting problems, but this seems to have been discontinued. I'd love to improve my skills and knowledge and measure it against other enthusiasts in the field - are there still any such competitions around?
| 0
| 1
| 0
| 0
| 0
| 0
|
I've written a simple graphing implementation in C#, and I can graph things by comparing each pixel to the position on the graph it represents and plugging that position into the function I have to see if it is on the curve. That's all well and good.
The problem I'm having is USING a generated taylor polynomial. For example, I am able to create the nth taylor polynomial of a transcendent function f centered at c by doing
P_n(x) = sum{k=0 to n} [ f^(k)(c) * (x - c)^k / k! ]
I am not sure how to do math markup on Stack Overflow, so to spell it out: the sum runs from k = 0 up to n, and each term is the kth derivative of f evaluated at c, multiplied by (x - c)^k and divided by k!.
So I end up generating a Taylor polynomial of the 6th degree for cos(x) centered at 0(maclaurin, I know) that looks something like
"1 - x^2/2! + x^4/4! - x^6/6!"
This can be done through simple string manipulation in C#. I can just loop through and add the next term to the string.
I really can't comprehend how I would actually be able to use that string as a function, so that I could compare it against graph positions and decide whether each position lies on the curve. So essentially: how would I use a string as an actual mathematical function in C#, or is there a better way of doing this?
Really sorry if it's confusing...really trying my best to explain it in a way that people can help.
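One common alternative to string manipulation is to keep the coefficients as numbers and evaluate the polynomial directly for each x. A sketch of that idea in Python (the project itself is C#, so treat this as pseudocode; the translation is mechanical):
import math

def maclaurin_cos_coefficients(degree):
    # f^(k)(0) / k! for cos: the derivatives at 0 cycle through 1, 0, -1, 0, ...
    cycle = [1.0, 0.0, -1.0, 0.0]
    return [cycle[k % 4] / math.factorial(k) for k in range(degree + 1)]

def evaluate(coeffs, x, c=0.0):
    # P(x) = sum_k coeffs[k] * (x - c)^k
    return sum(a * (x - c) ** k for k, a in enumerate(coeffs))

coeffs = maclaurin_cos_coefficients(6)        # 1 - x^2/2! + x^4/4! - x^6/6!
print(evaluate(coeffs, 0.5), math.cos(0.5))   # ~0.8775825 vs 0.8775826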
| 0
| 0
| 0
| 0
| 1
| 0
|
I would like to know the complete expansion of log(a + b).
For example
log(a * b) = log(a) + log(b);
log(a / b) = log(a) - log(b);
Similar to this, is there any expansion for log(a + b)?
| 0
| 0
| 0
| 0
| 1
| 0
|
Ideally I'm looking for a c# solution, but any help on the algorithm will do.
I have a 2-dimension array (x,y). The max columns (max x) varies between 2 and 10 but can be determined before the array is actually populated. Max rows (y) is fixed at 5, but each column can have a varying number of values, something like:
1 2 3 4 5 6 7...10
A 1 1 7 9 1 1
B 2 2 5 2 2
C 3 3
D 4
E 5
I need to come up with the total of all possible row-wise sums for the purpose of looking for a specific total. That is, a row-wise total could be the cells A1 + B2 + A3 + B5 + D6 + A7 (any combination of one value from each column).
This process will be repeated several hundred times with different cell values each time, so I'm looking for a somewhat elegant solution (better than what I've been able to come up with). Thanks for your help.
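Two hedged sketches of the brute-force idea in Python (column lists may be ragged): the first enumerates every one-per-column selection, the second keeps only the set of reachable totals, which is usually far smaller when you only need to know whether a specific total exists.
from itertools import product

def all_selection_sums(columns):
    # columns: list of lists, one inner list per populated column
    return [sum(combo) for combo in product(*columns)]

def can_hit_total(columns, target):
    # incremental set-of-sums avoids enumerating every combination explicitly
    sums = {0}
    for col in columns:
        sums = {s + v for s in sums for v in col}
    return target in sums

cols = [[1, 2, 3, 4, 5], [1, 2, 3], [7, 5]]
print(can_hit_total(cols, 10))   # True, e.g. 3 + 2 + 5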
| 0
| 0
| 0
| 0
| 1
| 0
|
I have the ball in the bottom-right corner. When I click somewhere, I want to be able to figure out the direction of the click relative to the ball, and once the user starts to drag, I will calculate the distance. Once the user lets go of the mouse, I want to give the ball some velocity and have it move in the direction of that first click.
I don't know the formulas to compute these things. Any help with explanation is greatly appreciated.
Thanks.
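A hedged sketch of the usual vector arithmetic (screen coordinates assumed; the names and the speed scale are made up):
import math

def launch_velocity(ball_x, ball_y, click_x, click_y, drag_distance, speed_scale=1.0):
    # direction from the ball towards the point that was first clicked
    dx, dy = click_x - ball_x, click_y - ball_y
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0, 0.0
    # unit direction scaled by how far the user dragged
    speed = drag_distance * speed_scale
    return dx / length * speed, dy / length * speed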
| 0
| 0
| 0
| 0
| 1
| 0
|
My very often used extension method is
public static double Pi(this double x) { return Math.PI * x; }
in order to have access to 2.0.Pi() or 0.5.Pi() .. etc
What are some other examples of mathematics related extension methods the people use often?
PS. Just curious.
| 0
| 0
| 0
| 0
| 1
| 0
|
Ok, first up, this is NOT for a class, test, or other student type activity.
I'm a scripter for a game, and am trying to implement the math library for all to use, and unfortunately, all I have available to me is very basic Lua. The implemented version cannot be changed, and does not include any libraries. For those wondering, it's for scripting in Fold.It.
Here's what I have...
math={}
math.fact = function(b) if(b==1)or(b==0) then return 1 end e=1 for c=b,1,-1 do e=e*c end return e end
math.pow = function(b,p) e=b if(p==0) then return 1 end if(p<0) then p=p*(-1) end for c=p,2,-1 do e=e*b end return e end
math.cos = function(b,p) e=0 p=p or 10 for i=1,p do e=e+(math.pow(-1,i)*math.pow(b,2*i)/math.fact(2*i)) end return e end
To clarify above, math.fact returns factorial, which is returning accurate to about 10 points of precision, and is a new function I've done to aid in cosine calculation.
The math.pow is also a new function to handle returning powers, also working as expected.
The issue is with the cosine function. It's returning unexpected values. Here's an easier-to-digest version (I've been writing my library stuff ultra lean)...
function math.cos(value,precision)
result=0
precision=precision or 10
for i=1,precision do
result=result+(math.pow(-1,i)*math.pow(value,2*i)/math.fact(2*i))
end
return e
end
The problem is, with those functions, for print(math.cos(90)) it returns 4.77135... when I'm expecting -0.44807... (based on calc in scientific mode, or using an online tool to cos(90)).
I'm also having issues with sin and tan, however they are similarly written to cos, which seems to have been done in many languages. If I can figure out what I'm doing wrong, I can get them all fixed.
EDIT: Corrected typo
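For comparison, the same series written out in Python, including the k = 0 term (the series works in radians and converges quickly only for small |x|, hence the argument reduction):
import math

def taylor_cos(x, terms=10):
    # cos(x) = sum_{k=0}^{inf} (-1)^k * x^(2k) / (2k)!
    x = x % (2 * math.pi)             # argument reduction helps convergence
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

print(taylor_cos(math.pi / 3))        # ~0.5
print(taylor_cos(90))                 # ~cos(90 radians) = -0.44807...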
| 0
| 0
| 0
| 0
| 1
| 0
|
I would like to plot how the amplitude and orientation of a 2D vector evolves over time. To do this I would like to create a graph reminiscent of the canonical E & B field graphs you may recall from an introductory electricity and magnetism class.
Specifically, I would like to connect my 2D vector points with a ribbon, so that they are easy to see. Is there a simple way to do this in MATLAB? quiver3 is pretty close, but it lacks the ribbon. Perhaps some sort of parametric surface?
| 0
| 0
| 0
| 0
| 1
| 0
|
Is it possible to prove by induction that the two's complement of any string of 0's will always result in 0, for all sequences of length n?
I'm trying to do this using the value formula, i.e.
value = -a_(n-1) * 2^(n-1) + sum{i=0 to n-2} (a_i * 2^i), where n = number of bits in the string
| 0
| 0
| 0
| 0
| 1
| 0
|
Does any standard specify what the output should be?
For example this code:
#include <stdio.h>
#include <math.h>
int main(int argc, char** argv) {
float a = INFINITY;
float b = -INFINITY;
float c = NAN;
printf("float %f %f %f
", a, b, c);
printf("int %d %d %d
", (int) a, (int) b, (int) c);
printf("uint %u %u %u
", (unsigned int) a, (unsigned int) b, (unsigned int) c);
printf("lint %ld %ld %ld
", (long int) a, (long int) b, (long int) b);
printf("luint %lu %lu %lu
", (unsigned long int) a, (unsigned long int) b, (unsigned long int) c);
return 0;
}
Compiled on gcc version 4.2.1 (Apple Inc. build 5664) Target: i686-apple-darwin10
Outputs:
$ gcc test.c && ./a.out
float inf -inf nan
int -2147483648 -2147483648 -2147483648
uint 0 0 0
lint -9223372036854775808 -9223372036854775808 -9223372036854775808
luint 0 9223372036854775808 9223372036854775808
Which is quite weird. (int)+inf < 0 !?!
| 0
| 0
| 0
| 0
| 1
| 0
|
Following set is given:
X := {Horse, Dog}
Y := {Cat}
I define the set:
M := Pow(X) u {Y}
u for union
The resulting set of the power set operation is:
Px := {0, {Horse}, {Dog}, {Horse, Dog}}
0 for empty set
My question is about the union operation. How do I unite 0 and Y?
M := {{Horse, Cat}, {Dog, Cat}, {Horse, Dog, Cat}}
| 0
| 0
| 0
| 0
| 1
| 0
|
EDIT2:
New training set...
Inputs:
[
[0.0, 0.0],
[0.0, 1.0],
[0.0, 2.0],
[0.0, 3.0],
[0.0, 4.0],
[1.0, 0.0],
[1.0, 1.0],
[1.0, 2.0],
[1.0, 3.0],
[1.0, 4.0],
[2.0, 0.0],
[2.0, 1.0],
[2.0, 2.0],
[2.0, 3.0],
[2.0, 4.0],
[3.0, 0.0],
[3.0, 1.0],
[3.0, 2.0],
[3.0, 3.0],
[3.0, 4.0],
[4.0, 0.0],
[4.0, 1.0],
[4.0, 2.0],
[4.0, 3.0],
[4.0, 4.0]
]
Outputs:
[
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[1.0],
[1.0],
[0.0],
[0.0],
[0.0],
[1.0],
[1.0]
]
EDIT1:
I have updated the question with my latest code. I fixed few minor issues but I am still getting the same output for all input combinations after the network has learned.
Here is the backprop algorithm explained: Backprop algorithm
Yes, this is a homework, to make this clear right at the beginning.
I am supposed to implement a simple backpropagation algorithm on a simple neural network.
I have chosen Python as a language of choice for this task and I have chosen a neural network like this:
3 layers: 1 input, 1 hidden, 1 output layer:
O O
O
O O
There is an integer on both inptut neurons and 1 or 0 on an output neuron.
Here is my entire implementation (a bit long). Below it I will pick out just the shorter relevant snippets where I think an error could be located:
import os
import math
import Image
import random
from random import sample
#------------------------------ class definitions
class Weight:
def __init__(self, fromNeuron, toNeuron):
self.value = random.uniform(-0.5, 0.5)
self.fromNeuron = fromNeuron
self.toNeuron = toNeuron
fromNeuron.outputWeights.append(self)
toNeuron.inputWeights.append(self)
self.delta = 0.0 # delta value, this will accumulate and after each training cycle used to adjust the weight value
def calculateDelta(self, network):
self.delta += self.fromNeuron.value * self.toNeuron.error
class Neuron:
def __init__(self):
self.value = 0.0 # the output
self.idealValue = 0.0 # the ideal output
self.error = 0.0 # error between output and ideal output
self.inputWeights = []
self.outputWeights = []
def activate(self, network):
x = 0.0;
for weight in self.inputWeights:
x += weight.value * weight.fromNeuron.value
# sigmoid function
if x < -320:
self.value = 0
elif x > 320:
self.value = 1
else:
self.value = 1 / (1 + math.exp(-x))
class Layer:
def __init__(self, neurons):
self.neurons = neurons
def activate(self, network):
for neuron in self.neurons:
neuron.activate(network)
class Network:
def __init__(self, layers, learningRate):
self.layers = layers
self.learningRate = learningRate # the rate at which the network learns
self.weights = []
for hiddenNeuron in self.layers[1].neurons:
for inputNeuron in self.layers[0].neurons:
self.weights.append(Weight(inputNeuron, hiddenNeuron))
for outputNeuron in self.layers[2].neurons:
self.weights.append(Weight(hiddenNeuron, outputNeuron))
def setInputs(self, inputs):
self.layers[0].neurons[0].value = float(inputs[0])
self.layers[0].neurons[1].value = float(inputs[1])
def setExpectedOutputs(self, expectedOutputs):
self.layers[2].neurons[0].idealValue = expectedOutputs[0]
def calculateOutputs(self, expectedOutputs):
self.setExpectedOutputs(expectedOutputs)
self.layers[1].activate(self) # activation function for hidden layer
self.layers[2].activate(self) # activation function for output layer
def calculateOutputErrors(self):
for neuron in self.layers[2].neurons:
neuron.error = (neuron.idealValue - neuron.value) * neuron.value * (1 - neuron.value)
def calculateHiddenErrors(self):
for neuron in self.layers[1].neurons:
error = 0.0
for weight in neuron.outputWeights:
error += weight.toNeuron.error * weight.value
neuron.error = error * neuron.value * (1 - neuron.value)
def calculateDeltas(self):
for weight in self.weights:
weight.calculateDelta(self)
def train(self, inputs, expectedOutputs):
self.setInputs(inputs)
self.calculateOutputs(expectedOutputs)
self.calculateOutputErrors()
self.calculateHiddenErrors()
self.calculateDeltas()
def learn(self):
for weight in self.weights:
weight.value += self.learningRate * weight.delta
def calculateSingleOutput(self, inputs):
self.setInputs(inputs)
self.layers[1].activate(self)
self.layers[2].activate(self)
#return round(self.layers[2].neurons[0].value, 0)
return self.layers[2].neurons[0].value
#------------------------------ initialize objects etc
inputLayer = Layer([Neuron() for n in range(2)])
hiddenLayer = Layer([Neuron() for n in range(100)])
outputLayer = Layer([Neuron() for n in range(1)])
learningRate = 0.5
network = Network([inputLayer, hiddenLayer, outputLayer], learningRate)
# just for debugging, the real training set is much larger
trainingInputs = [
[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[0.0, 1.0],
[1.0, 1.0],
[2.0, 1.0],
[0.0, 2.0],
[1.0, 2.0],
[2.0, 2.0]
]
trainingOutputs = [
[0.0],
[1.0],
[1.0],
[0.0],
[1.0],
[0.0],
[0.0],
[0.0],
[1.0]
]
#------------------------------ let's train
for i in range(500):
for j in range(len(trainingOutputs)):
network.train(trainingInputs[j], trainingOutputs[j])
network.learn()
#------------------------------ let's check
for pattern in trainingInputs:
print network.calculateSingleOutput(pattern)
Now, the problem is that after learning the network seems to be returning a float number very close to 0.0 for all input combinations, even those that should be close to 1.0.
I train the network in 100 cycles, in each cycle I do:
For every set of inputs in the training set:
Set network inputs
Calculate outputs by using a sigmoid function
Calculate errors in the output layer
Calculate errors in the hidden layer
Calculate weights' deltas
Then I adjust the weights based on the learning rate and the accumulated deltas.
Here is my activation function for neurons:
def activationFunction(self, network):
"""
Calculate an activation function of a neuron which is a sum of all input weights * neurons where those weights start
"""
x = 0.0;
for weight in self.inputWeights:
x += weight.value * weight.getFromNeuron(network).value
# sigmoid function
self.value = 1 / (1 + math.exp(-x))
This how I calculate the deltas:
def calculateDelta(self, network):
self.delta += self.getFromNeuron(network).value * self.getToNeuron(network).error
This is a general flow of my algorithm:
for i in range(numberOfIterations):
for k,expectedOutput in trainingSet.iteritems():
coordinates = k.split(",")
network.setInputs((float(coordinates[0]), float(coordinates[1])))
network.calculateOutputs([float(expectedOutput)])
network.calculateOutputErrors()
network.calculateHiddenErrors()
network.calculateDeltas()
oldWeights = network.weights
network.adjustWeights()
network.resetDeltas()
print "Iteration ", i
j = 0
for weight in network.weights:
print "Weight W", weight.i, weight.j, ": ", oldWeights[j].value, " ............ Adjusted value : ", weight.value
j += j
The last two lines of the output are:
0.552785449458 # this should be close to 1
0.552785449458 # this should be close to 0
It actually returns the same output number for all input combinations.
Am I missing something?
| 0
| 0
| 0
| 0
| 1
| 0
|
How do I generate the more-general, less-general and equivalence relations from WordNet?
WordNet similarity in RitaWordnet gives a number like -1.0, 0.222 or 1.0, but how do I arrive at the more-general and less-general relations between words? Which tool would be ideal for that?
Please help me.
I get a java.lang.NullPointerException after it prints
"the holonyms are"
package wordnet;
import rita.wordnet.RiWordnet;
public class Main {
public static void main(String[] args) {
try {
// Would pass in a PApplet normally, but we don't need to here
RiWordnet wordnet = new RiWordnet();
wordnet.setWordnetHome("/usr/share/wordnet/dict");
// Demo finding parts of speech
String word = "first name";
System.out.println("
Finding parts of speech for " + word + ".");
String[] partsofspeech = wordnet.getPos(word);
for (int i = 0; i < partsofspeech.length; i++) {
System.out.println(partsofspeech[i]);
}
//word = "eat";
String pos = wordnet.getBestPos(word);
System.out.println("
Definitions for " + word + ":");
// Get an array of glosses for a word
String[] glosses = wordnet.getAllGlosses(word, pos);
// Display all definitions
for (int i = 0; i < glosses.length; i++) {
System.out.println(glosses[i]);
}
// Demo finding a list of related words (synonyms)
//word = "first name";
String[] poss = wordnet.getPos(word);
for (int j = 0; j < poss.length; j++) {
System.out.println("
Synonyms for " + word + " (pos: " + poss[j] + ")");
String[] synonyms = wordnet.getAllSynonyms(word, poss[j], 10);
for (int i = 0; i < synonyms.length; i++) {
System.out.println(synonyms[i]);
}
}
// Demo finding a list of related words
// X is Hypernym of Y if every Y is of type X
// Hyponym is the inverse
//word = "nurse";
pos = wordnet.getBestPos(word);
System.out.println("
Hyponyms for " + word + ":");
String[] hyponyms = wordnet.getAllHyponyms(word, pos);
//System.out.println(hyponyms.length);
//if(hyponyms!=null)
for (int i = 0; i < hyponyms.length; i++) {
System.out.println(hyponyms[i]);
}
System.out.println("
Hypernyms for " + word + ":");
String[] hypernyms = wordnet.getAllHypernyms(word, pos);
//if(hypernyms!=null)
for (int i = 0; i < hypernyms.length; i++) {
System.out.println(hypernyms[i]);
}
System.out.println("
Holonyms for " + word + ":");
String[] holonyms = wordnet.getAllHolonyms(word, pos);
//if(holonyms!=null)
for (int i = 0; i < holonyms.length; i++) {
System.out.println(holonyms[i]);
}
System.out.println("
meronyms for " + word + ":");
String[] meronyms = wordnet.getAllMeronyms(word, pos);
if(meronyms!=null)
for (int i = 0; i < meronyms.length; i++) {
System.out.println(meronyms[i]);
}
System.out.println("
Antonym for " + word + ":");
String[] antonyms = wordnet.getAllAntonyms(word, pos);
if(antonyms!=null)
for (int i = 0; i < antonyms.length; i++) {
System.out.println(antonyms[i]);
}
String start = "cameras";
String end = "digital cameras";
pos = wordnet.getBestPos(start);
// Wordnet can find relationships between words
System.out.println("
Relationship between: " + start + " and " + end);
float dist = wordnet.getDistance(start, end, pos);
String[] parents = wordnet.getCommonParents(start, end, pos);
System.out.println(start + " and " + end + " are related by a distance of: " + dist);
// These words have common parents (hyponyms in this case)
System.out.println("Common parents: ");
if (parents != null) {
for (int i = 0; i < parents.length; i++) {
System.out.println(parents[i]);
}
}
//wordnet.
// System.out.println("
Hypernym Tree for " + start);
// int[] ids = wordnet.getSenseIds(start,wordnet.NOUN);
// wordnet.printHypernymTree(ids[0]);
} catch (Exception e) {
e.printStackTrace();
}
}
}
| 0
| 1
| 0
| 0
| 0
| 0
|
Given the following definitions for the x, y, z rotation matrices, how do I represent this as one complete matrix? Do I simply multiply the x, y, & z matrices?
X Rotation:
[1 0 0 0]
[0 cos(-X Angle) -sin(-X Angle) 0]
[0 sin(-X Angle) cos(-X Angle) 0]
[0 0 0 1]
Y Rotation:
[cos(-Y Angle) 0 sin(-Y Angle) 0]
[0 1 0 0]
[-sin(-Y Angle) 0 cos(-Y Angle) 0]
[0 0 0 1]
Z Rotation:
[cos(-Z Angle) -sin(-Z Angle) 0 0]
[sin(-Z Angle) cos(-Z Angle) 0 0]
[0 0 1 0]
[0 0 0 1]
Edit: I have a separate rotation class that contains an x, y, z float value, which I later convert to a matrix in order to combine with other translations / scales / rotations.
Judging from the answers here, I can assume that if I do something like:
Rotation rotation;
rotation.SetX(45);
rotation.SetY(90);
rotation.SetZ(180);
Then is the order in which the rotations are applied actually important? Or is it safe to assume that, when using the rotation class, you accept that they are applied in x, y, z order?
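A small numeric check of the "order matters" point, using NumPy for brevity (the 3x3 matrices and the Z·Y·X convention here are just one choice, not a claim about your rotation class):
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

ax, ay, az = np.radians([45, 90, 180])
r_xyz = rot_z(az) @ rot_y(ay) @ rot_x(ax)   # X applied first, then Y, then Z
r_zyx = rot_x(ax) @ rot_y(ay) @ rot_z(az)   # Z applied first, then Y, then X
print(np.allclose(r_xyz, r_zyx))            # False: the order changes the result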
| 0
| 0
| 0
| 0
| 1
| 0
|
This is weird, so be patient while I try to explain.
Basic problem: I have a massive string -- it can be of varying lengths depending on the user. My job is to acquire this massive string depending on the user, then send it off to the other piece of software to make a tag cloud. If life were easy for me, I could simply send the whole thing. However, the tag cloud software will only accept a string that is 1000 words long, so I need to do some work on my string to send the most important words.
My first thought was to count each occurrence of the words, and throw all this into an array with each word's count, then sort.
array(517) (
"We" => integer 4
"Five" => integer 1
"Ten's" => integer 1
"best" => integer 2
"climbing" => integer 3
(etc...)
From here, I create a new string and spit out each word times its count. Once the total string hits 1000 words long, I stop. This creates a problem.
Let's say the word "apple" shows up 900 times, and the word "cat" shows up 100 times. The resulting word cloud would consist of only two words.
My idea is to somehow spit out the words at some sort of ratio to the other words. My attempts so far have failed on different data sets where the ratio is not great -- especially when there are a lot of words at "1", thus making the GCD very low.
I figure this is a simple math problem I can't get my head around, so I turn to the oracle that is stackoverflow.
thanks in advance.
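One hedged way to read the "ratio" idea, sketched in Python (the arithmetic ports directly to PHP): scale every count so the repeated words total roughly 1000, but never let a word drop below one occurrence. The floor of 1 means the output can still overshoot when there are many count-1 words, in which case the lowest counts would have to be trimmed afterwards.
def scale_counts(counts, budget=1000):
    # counts: dict word -> occurrences; returns how many times to repeat each word
    total = sum(counts.values())
    scale = budget / float(total) if total > budget else 1.0
    return {w: max(1, int(round(c * scale))) for w, c in counts.items()}

counts = {"apple": 900, "cat": 100, "tree": 1}
print(scale_counts(counts))   # {'apple': 899, 'cat': 100, 'tree': 1}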
| 0
| 0
| 0
| 0
| 1
| 0
|
I am looking to perform a polynomial least squares regression and am looking for a C# library to do the calculations for me.
I pass in the data points and the degree of polynomial (2nd order, 3rd order, etc.) and it returns either the C0, C1, C2, etc. constant values or the calculated values ("predictions").
Note: I am using Least Squares to create some forecasting reports for disk usage, database size and table size.
| 0
| 0
| 0
| 0
| 1
| 0
|
I'm trying to convert this adaptive bayesian rating formula into PHP code: see here.
Here are the details of the various parts of the formula..
nvotes : total number of votes so far
nlinks : total number of links
nvotes(k) : number of votes cast to the kth link.
deltarank(k, m) : rank increment caused by kth vote that is casted to mth link.
nsaves(i) : number of users that save ith link to their linkibol.
a : save exponent (an ad-hoc value close to 1)
age(i) : the difference (in days) between date link added and current date.
b : decay exponent (an ad-hoc value close to 0)
(full details of the formula can be found at http://blog.linkibol.com/2010/05/07/how-to-build-a-popularity-algorithm-you-can-be-proud-of/ - scroll down to the "How Do We Implement Popularity in linkibol?" section)
I can convert most of this function into PHP code easily, but the bit I'm not understanding is the sigma and deltarank bit. I'm not sure what that bit is supposed to do or what values to pass to k and m.
If anyone has any tips or could break the complex bit of the formula down that'd be great, then I can look at what would be the best way to implement it in PHP - there might be functions I could make use of etc..
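On the sigma itself: a summation over k is nothing more than a loop (or array_sum over a prepared array in PHP). A hedged Python-style sketch, with deltarank left as a placeholder for however the article defines the per-vote rank increment, and with the exact range of k taken from the article:
# sigma_{k=lower}^{upper} f(k) just accumulates f(k) in a loop:
def sigma(f, lower, upper):
    return sum(f(k) for k in range(lower, upper + 1))

# so the deltarank term for a fixed link m would look like
#   rank_m = sigma(lambda k: deltarank(k, m), 1, number_of_votes)
# where deltarank and the bounds are whatever the article defines them to be.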
| 0
| 0
| 0
| 0
| 1
| 0
|
I'm looking for a math JavaScript API to show formulas on a web site and allow users to input formulas too (for instance, using a <textarea/>).
The API should be able to parse text strings, such as 3x^2+2 or sqrt(x/(x+5)) building "automagically" fractional layouts, integral symbols and so on.
Thanks.
| 0
| 0
| 0
| 0
| 1
| 0
|
I am a final year CS student, and very interested about OCR and NLP stuffs.
The problem is I don't know anything about OCR yet, and my project duration is only 5 months. What OCR & NLP work would be viable for my project?
Is writing a (simple) OCR engine for a single language too hard for my project? What about adding language support to existing FOSS OCR software?
| 0
| 1
| 0
| 0
| 0
| 0
|
Does anyone know of a semantic parser for the Russian language? I've attempted to configure the link-parser available from link-grammar site but to no avail.
I'm hoping for a system that can run on the Mac and generate either a prolog or lisp-like representation of the parse tree (but XML output is fine as well).
| 0
| 1
| 0
| 0
| 0
| 0
|
I have a bunch of floats and I want to round them up to the next highest multiple of 10.
For example:
10.2 should be 20
10.0 should be 10
16.7 should be 20
94.9 should be 100
I only need it to go from the range 0-100. I tried math.ceil() but that only rounds up to the nearest integer.
Thanks in advance.
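A minimal sketch of the usual trick: divide by 10, take the ceiling, multiply back. It matches all four examples above, including 10.0 staying at 10.
import math

def round_up_to_ten(x):
    # smallest multiple of 10 that is >= x
    return int(math.ceil(x / 10.0)) * 10

for v in (10.2, 10.0, 16.7, 94.9):
    print(v, "->", round_up_to_ten(v))   # 20, 10, 20, 100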
| 0
| 0
| 0
| 0
| 1
| 0
|
I've updated the page numbers in my Word document using a formula. The page numbers now follow a series like 1,1,2,2,3,3...
But the numbers in the TOC are still the same as before. I've tried updating them using the "Update field" option available in MS Word 2007 & 2010.
Can I use a formula here as well to change the page numbers? If yes, how?
| 0
| 0
| 0
| 0
| 1
| 0
|
Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one, quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings, and save the URLs of just those jobs that don't require years of experience.
I don't require help writing the scraper to get the html bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult as job posts are usually very explicit about this ("must have 5 years experience in..."), but there may be some issues with overly simple solutions.
In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but inclusion of the words probably means the job should be saved.
Next, I can safely exclude a job the says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable to exclude jobs. But then, I realized some jobs say they'll take 0-2 years of experience, matches the exclusion regex but is clearly a job I want to take a look at. Hmmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". Can handle that too, but it makes me wonder what other patterns I'm not thinking of, and possibly excluding many jobs. That's what brings me here, to find a better way to do this than regexes, if there is one.
I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a bayesian filter maybe?
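A hedged sketch of the heuristic described above (the patterns are illustrative only and will certainly miss some phrasings, which is exactly the concern raised):
import re

TOO_SENIOR = re.compile(r"\b(?:[3-9]|1[0-9])\s*\+?\s*years?\b", re.IGNORECASE)
JUNIOR_HINTS = re.compile(
    r"entry[\s-]*level"
    r"|0\s*(?:-|to)\s*[12]\s*\+?\s*years?"
    r"|(?:less|fewer)\s+than\s+\d+\s*years?",
    re.IGNORECASE)

def looks_entry_level(posting_text):
    if JUNIOR_HINTS.search(posting_text):
        return True                       # explicit junior-friendly wording wins
    return TOO_SENIOR.search(posting_text) is None

for s in ("5 years experience in C++", "0-2 years experience", "entry level analyst"):
    print(s, "->", looks_entry_level(s))  # False, True, True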
Edit: There's a similar, but harder problem, which would probably be more useful to solve. There are lots of jobs that just require an "engineering degree", as you just have to understand a few technical things. But searching for "engineering" gives you thousands of jobs, mostly irrelevant.
How do I narrow this down to just those jobs that require any engineering degree, rather than particular degrees, without looking at each myself?
| 0
| 1
| 0
| 0
| 0
| 0
|
I tried using 6%2, but it's always giving the value as 2 and not 0. Why, and how can I get a solution to this?
| 0
| 0
| 0
| 0
| 1
| 0
|
I'm struggling with this code right now. I want to determine whether an integer is divisible by 11. From what I have read, an integer is divisible by 11 when the alternating sum (one time +, one time -) of its digits is divisible by 11.
For example: 56518 is divisible by 11, because 8-1+5-6+5 = 11, and 11 is divisible by 11.
How can I write this down in Haskell? Thanks in advance.
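The rule itself is easy to sketch (shown here in Python only to keep the arithmetic explicit; the Haskell translation is the part left to do): take the digits from the right with alternating signs and test the sum.
def divisible_by_11(n):
    digits = [int(d) for d in str(abs(n))]
    # alternating sum from the rightmost digit: 8 - 1 + 5 - 6 + 5 for 56518
    alternating = sum(d if i % 2 == 0 else -d
                      for i, d in enumerate(reversed(digits)))
    return alternating % 11 == 0

print(divisible_by_11(56518))   # True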
| 0
| 0
| 0
| 0
| 1
| 0
|
For a given bitpattern, is there a mathematical relationship between that pattern and the mirror image of that pattern?
E.g. we start with, say, 0011 1011. This mirrors to 1101 1100.
We can mechanically mirror the pattern easily enough.
But is there in fact a mathematical relationship between the pattern and its mirror?
| 0
| 0
| 0
| 0
| 1
| 0
|
I was looking for an open source library for generating automated summaries out of a few words. For example: if two qualities of a person are given, a) good thinking skills, b) bad handwriting, I need to generate a sentence like "Bob has good thinking skills however needs to improve on his handwriting". I need to know if any open source library could help me achieve this, even partially.
Thanks for help!
-- Mohit
| 0
| 1
| 0
| 0
| 0
| 0
|
Lets say, we're calculating averages of test scores:
Starting Test Scores: 75, 80, 92, 64, 83, 99, 79
Average = 572 / 7 = 81.714...
Now given 81.714, is there a way to add a new set of test scores to "extend" this average if you don't know the initial test scores?
New Test Scores: 66, 89, 71
Average = 226 / 3 = 75.333...
Normal Average would be: 798 / 10 = 79.8
I've tried:
Avg = (OldAvg + sumOfNewScores) / (numOfNewScores + 1)
(81.714 + 226) / (3 + 1) = 76.9285
Avg = (OldAvg + NewAvg) / 2
(81.714 + 79.8) / 2 = 80.77
And neither comes up with the exact average that it "should" be. Is it mathematically possible to do this, considering you don't know the initial values?
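With only the old average it cannot be recovered exactly, but if the old count (7 here) is also known it can; a sketch:
def extend_average(old_avg, old_count, new_scores):
    total = old_avg * old_count + sum(new_scores)
    return total / (old_count + len(new_scores))

print(extend_average(572 / 7.0, 7, [66, 89, 71]))   # 79.8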
| 0
| 0
| 0
| 0
| 1
| 0
|
I am doing matrix operations on large matrices in my C++ program. I need to verify the results that I get, I used to use WolframAlpha for the task up until now. But my inputs are very large now, and the web interface does NOT accept such large values (textfield is limited).
I am looking for a better solution to quickly cross-check/do math problems.
I know there is MATLAB, but I have never used it, and I don't know whether it would meet my needs or how steep the learning curve would be.
Is this the time to make the jump? or there are other solutions?
| 0
| 0
| 0
| 0
| 1
| 0
|
There are plenty of SO questions on weighted random, but all of them rely on the bias going to the highest number. I want to bias towards the lowest.
My algorithm at the moment is the usual weighted random, with a bias towards the higher values.
double weights[2] = {1,2};
double sum = 0;
for (int i=0;i<2;i++) {
sum += weights[i];
}
double rand = urandom(sum); //unsigned random (returns [0,sum])
sum = 0;
for (int i=0;i<2;i++) {
sum += weights[i];
if (rand < sum) {
return i;
}
}
How could I convert this to bias lower values? I.e. I want, in 100 samples, the weights[0] sample to be chosen 66% of the time and weights[1] 33% of the time (i.e. the inverse of what they are now).
Hand example for Omni, ref sum - weights[x] solution
Original:
1 | 1 | 1%
20 | 21 | 20%
80 | 101 | 79%
Desired:
1 | ? | 79%
20 | ? | 20%
80 | ? | 1%
Now sum - weights[i]
100(101 - 1) | 100 | 50%
81(101 - 20) | 181 | 40%
21(101 - 80) | 202 | 10%
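One commonly used trick is to sample with the reciprocals 1/w as the weights. For {1, 2} that gives exactly the 66%/33% split asked for; note that for {1, 20, 80} it produces a much steeper skew than the hand-worked table above, so it is only one possible reading of "inverting" the bias. A sketch:
import random

def pick_index_inverse(weights):
    inverse = [1.0 / w for w in weights]   # assumes every weight is > 0
    total = sum(inverse)
    r = random.uniform(0, total)
    running = 0.0
    for i, w in enumerate(inverse):
        running += w
        if r < running:
            return i
    return len(weights) - 1                # guard against floating-point edge cases

# with weights [1, 2]: index 0 ~66% of the time, index 1 ~33%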
| 0
| 0
| 0
| 0
| 1
| 0
|
I need an algorithm for A mod B with
A is a very big integer and it contains digit 1 only (ex: 1111, 1111111111111111)
B is a very big integer (ex: 1231, 1231231823127312918923)
By big, I mean 1000 digits.
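A sketch of the standard digit-streaming reduction: fold the digits of A in one at a time and keep only the running remainder, which never exceeds 10·B + 9 before the reduction (so any big-integer type for B is enough; Python's built-in integers already cope with 1000 digits):
def repunit_mod(num_ones, b):
    # A = 111...1 with num_ones digits
    r = 0
    for _ in range(num_ones):
        r = (r * 10 + 1) % b
    return r

print(repunit_mod(4, 1231))      # 1111 % 1231 = 1111
print(repunit_mod(1000, 1231))   # same as int("1" * 1000) % 1231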
| 0
| 0
| 0
| 0
| 1
| 0
|
I am new to Natural Language Processing and I want to learn more by creating a simple project. NLTK was suggested to be popular in NLP so I will use it in my project.
Here is what I would like to do:
I want to scan our company's intranet pages; approximately 3K pages
I would like to parse and categorize the content of these pages based on certain criteria such as: HR, Engineering, Corporate Pages, etc...
From what I have read so far, I can do this with Named Entity Recognition. I can describe entities for each category of pages, train the NLTK solution and run each page through to determine the category.
Is this the right approach? I appreciate any direction and ideas...
Thanks
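For what it's worth, this kind of page categorization is often prototyped as plain document classification; a hedged sketch using NLTK's built-in Naive Bayes classifier (the training pairs below are toy stand-ins for pages you would label yourself; this is not a claim that NER is wrong for your case):
import nltk

def bag_of_words(text):
    # crude word-presence features; a real pipeline would tokenize and drop stopwords
    return {word.lower(): True for word in text.split()}

# toy stand-ins for labelled intranet pages (hypothetical content and labels)
train_data = [
    ("annual benefits enrollment and payroll forms", "HR"),
    ("holiday request and leave policy", "HR"),
    ("build server deployment and code review guidelines", "Engineering"),
    ("API design standards for internal services", "Engineering"),
    ("quarterly earnings press release archive", "Corporate"),
    ("board of directors meeting minutes", "Corporate"),
]
train_set = [(bag_of_words(text), label) for text, label in train_data]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(classifier.classify(bag_of_words("submit a leave request to payroll")))   # likely "HR"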
| 0
| 1
| 0
| 0
| 0
| 0
|