Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
I've got a table, let's call it `values` with a primary key and five integer fields, like this: ``` id val1 val2 val3 val4 val5 1 4 3 4 5 3 2 2 3 2 2 2 3 5 4 1 3 3 4 1 4 3 4 4 ``` Now I need to select all rows where at least any two of the five value fields got the value 4. So the result set should contain the first row (id=1) and the last row (id=4). I started with a simple OR condition but there are too many combinations. Then I tried a sub-select with HAVING and COUNT but no success. Any Ideas how to solve this?
You can use [**`VALUES`**](https://msdn.microsoft.com/en-us/library/dd776382.aspx) to construct an inline table containing your fields. Then query this table to get rows having at least two fields equal to 4: ``` SELECT * FROM mytable CROSS APPLY ( SELECT COUNT(*) AS cnt FROM (VALUES (val1), (val2), (val3), (val4), (val5)) AS t(v) WHERE t.v = 4) AS x WHERE x.cnt >= 2 ``` [**Demo here**](http://sqlfiddle.com/#!6/5219ad/1)
Although `cross apply` is fast, it might be marginally faster to simply use `case`: ``` select t.* from t where ((case when val1 = 4 then 1 else 0 end) + (case when val2 = 4 then 1 else 0 end) + (case when val3 = 4 then 1 else 0 end) + (case when val4 = 4 then 1 else 0 end) + (case when val5 = 4 then 1 else 0 end) ) >= 2; ``` I will also note that `case` is ANSI standard SQL and available in basically every database.
T-Sql: Select Rows where at least two fields matches condition
[ "", "sql", "t-sql", "" ]
I have two tables. Something like table1 and table2 given below: table1 has ID (primary key) and columns Aid, Bid and Cid which are primary key of table 2. ``` table1 ID Aid Bid Cid ----------------- 1 X Y Z 2 X Z Z 3 Y X X ----------------- table2 ID NAME ------------------ X Abc Y Bcd Z Cde ------------------ ``` I want a query which will fetch all columns from table1 this way (after replacing Aid , Bid and Cid with their corresponding names given in table2): ``` ID A B C 1 Abc Bcd Cde 2 Abc Cde Cde 3 Bcd Abc Abc ``` Can you please tell me the mysql query to do this.? Thank you very much for your answers. But am gettin this when I execute those queries : ``` +------+------+------+------+ | ID | A | B | C | +------+------+------+------+ | 3 | bcd | abc | abc | | 1 | abc | bcd | cde | | 2 | abc | cde | cde | +------+------+------+------+ ``` This query : `SELECT * FROM table1 JOIN table2 aa ON table1.Aid = aa.ID JOIN table2 bb ON table1.Bid = bb.ID JOIN table2 cc ON table1.Cid = cc.ID;` gives this result : ``` +------+------+------+------+------+------+------+------+------+------+ | ID | Aid | Bid | Cid | ID | NAME | ID | NAME | ID | NAME | +------+------+------+------+------+------+------+------+------+------+ | 3 | Y | X | X | Y | bcd | X | abc | X | abc | | 1 | X | Y | Z | X | abc | Y | bcd | Z | cde | | 2 | X | Z | Z | X | abc | Z | cde | Z | cde | +------+------+------+------+------+------+------+------+------+------+ ``` I think the query needs to be changed a bit..
This should work: ``` select table1.ID, a.NAME AS A, b.NAME AS B, c.NAME AS C from table1 join table2 a on table1.Aid = a.ID join table2 b on table1.Bid = b.ID join table2 c on table1.Cid = c.ID ``` Otherwise: ``` select table1.ID, a.NAME, b.NAME, c.NAME from table1 join (select * from table2) a on table1.Aid = a.ID join (select * from table2) b on table1.Bid = b.ID join (select * from table2) c on table1.Cid = c.ID ```
You can try this. `INNER JOIN` & `ORDER` - ``` SELECT a.ID, b.NAME, c.NAME, d.NAME FROM table1 a INNER JOIN table2 b ON b.ID = a.Aid INNER JOIN table2 c ON c.ID = a.Bid INNER JOIN table2 d ON d.ID = a.Aid ORDER BY a.ID ```
Mysql query with inner join
[ "", "mysql", "sql", "inner-join", "" ]
I'm trying to do the following. ``` UPDATE account SET new_finalleadsource = CASE WHEN new_jtrack_source IS NOT NULL AND new_jtracksource <> '' THEN new_jtrack_source CASE WHEN new_jtrackofflinesource IS NOT NULL AND new_jtrackofflinesource <> '' THEN new_jrackofflinesource CASE WHEN new_leadsource IS NOT NULL AND new_leadsource <> '' THEN new_leadsource ELSE NULL END ``` Not sure if case can be used in this way, i'm basically trying to update a column with a value from 1 of 3 other columns depending on the first one that has data. Thanks
You don't need multiple `CASE` statements, instead use single `Case` statement with multiple `WHEN` expression's Try this way ``` UPDATE account SET new_finalleadsource = CASE WHEN new_jtrack_source IS NOT NULL AND new_jtracksource <> '' THEN new_jtrack_source WHEN new_jtrackofflinesource IS NOT NULL AND new_jtrackofflinesource <> '' THEN new_jrackofflinesource WHEN new_leadsource IS NOT NULL AND new_leadsource <> '' THEN new_leadsource ELSE NULL END ```
This is not how the case statement works. The case statement is like a series of checks of WHEN THEN. Sort of like C switch with a break before each WHEN. So what you want is ``` UPDATE account SET new_finalleadsource = CASE WHEN new_jtrack_source IS NOT NULL AND new_jtracksource <> '' THEN new_jtrack_source WHEN new_jtrackofflinesource IS NOT NULL AND new_jtrackofflinesource <> '' THEN new_jrackofflinesource WHEN new_leadsource IS NOT NULL AND new_leadsource <> '' THEN new_leadsource ELSE NULL END ``` --- I think the following (based on Gordon's answer) is the most robust solution: ``` UPDATE account SET new_finalleadsource = COALESCE(NULLIF(LTRIM(RTRIM(new_jtrack_source)), ''), NULLIF(LTRIM(RTRIM(new_jrackofflinesource)), ''), NULLIF(LTRIM(RTRIM(new_leadsource)), '') ); ```
SQL UPDATE based on conditions of other columns
[ "", "sql", "sql-server", "" ]
I have a database with products. When I selected a product I want to get the next product in my database. For Example: I have the product with ID 188 and I want to get the next product with ID 167. Database: ``` SELECT * FROM product ORDER BY title ASC, description ASC +----+----------------------+----------------------+ | id | title | description | +----+----------------------+----------------------+ |111 | Adjusterhouse | Finetuning | +----+----------------------+----------------------+ |188 | Adjustment housing | Height | +----+----------------------+----------------------+ |167 | Adjustment housing | Oilpressure | +----+----------------------+----------------------+ |105 | Adjustment housing | Suspension | +----+----------------------+----------------------+ |102 | Adjustment nut L+R. | Packaging machinery | +----+----------------------+----------------------+ ``` But if I use the following SQL Statement I get the product with the ID 102 instead of 167. ``` SELECT * FROM product WHERE title > ? AND (title > ? OR description > ?) ORDER BY title ASC, description ASC LIMIT 1 ``` Can someone help me with this SQL statement?
You've got the right approach. Only one small change is required, changing that first comparison to be "*greater than* ***or equal to***" instead of just "*greater than*". As long as the `(title,description)` tuple is unique, that will work just fine. This is the pattern you want: ``` WHERE ( title >= ? ) AND ( title > ? OR description > ? ) ``` (The only change there is that you need a "`title`**`>=`**" comparison as the first condition in the `WHERE`. Everything else you have is fine. --- **If the `(title,description)` tuple is not guaranteed to be unique...** we'd need to add additional column to the ordering. The `id` column looks like an ideal candidate. That gets a little more complicated, but the pattern is similar... ``` WHERE ( title >= ? ) AND ( title > ? OR ( ( description >= ? ) AND ( description > ? OR id > ? ) ) ) ORDER BY title ASC, description ASC, id ASC ``` Note that we've nested the same pattern; the pattern used for `description` and `id` is the same as we used for `title` and `description`. The outer part is still the same pattern, but now it's on `title` and instead of just plain `description`, we've got the condition that works on the `(description,id)` tuple.
Your sorting on title with a `>`. Assuming you've got record 188, the title is `Adjustment housing` If you look you'll set the first record with a `title < 'Adjustment housing'` is in fact record 102. Change it to be : ``` SELECT * FROM product WHERE title >= ? AND description >= ? AND id != ? ORDER BY title ASC, description ASC, id ASC LIMIT 1 ``` Where the second `?` is replaced with the id 188 in your prepared statment parameter binding
mysql get next row ordered by multiple columns
[ "", "mysql", "sql", "database", "range", "row", "" ]
Basicly, I have a table with a priority attribute and a value, like ``` TargetID Priority_Column (int) Value_column 1 1 "value 1" 1 2 "value 2" 1 5 "value 5 2 1 "value 1" 2 2 "value 2" ``` I want to join another table with this table, ``` ID Name 1 "name 1" 2 "name 2" ``` but only using the row with highest priority. The result would be like ``` TargetID Priority_Column (int) Name Value_column 1 5 "name 1" "value 5" 2 2 "name 2" "value 2" ``` I can of course use a high-level language like python to compute highest priority row for each ID. But that looks inefficient, is there a way to directly do this in sql?
One way is using `outer apply`: ``` select t2.*, t1.* from table2 t2 outer apply (select top 1 t1.* from table1 t1 where t2.id = t1.targetid order by priority desc ) t1; ``` I should note that in SQL Server, this is often the most efficient method. It will take good advantage of an index on `table1(targetid, priority)`.
There are several options for this. Here's one using `row_number`: ``` select * from anothertable a join ( select *, row_number() over (partition by targetid order by priority desc) rn from yourtable) t on a.id = t.targetid and t.rn = 1 ```
table join with priority attribute
[ "", "sql", "sql-server", "" ]
I've searched this and found this problem, and the solution that worked for most people (using an outer join) is not working for me. I originally had an inner join, and switched it to an outer join but I am getting the same results. This is based off certain account numbers and it shows their total sales. If an account has 0 sales it does not show up, and I need it to show up. Here is my query. ``` Select a.accountnumber, SUM(a.totalsales) as Amount, c.companyname FROM Sales a LEFT OUTER JOIN Accounts c on (a.Accountnumber = c.Accountnumber) WHERE a.Salesdate between '1/1/2016' and '1/27/2016' AND a.Accountnumber in ('1','2','3','4') GROUP BY a.Accountnumber, c.companyname ``` And I'll get results like: ``` Accountnumber | Amount | Company 1 | 250.00 | A 3 | 500.00 | B ``` Since accountnumbers 2 and 4 dont have an amount, they are not showing up. I would like them to show up like ``` Accountnumber | Amount | Company 1 | 250.00 | A 2 | 0 | B 3 | 250.00 | C 4 | 0 | D ``` How can I achieve this? Any help would be appreciated. Thank you!
I think that `RIGHT JOIN` will not work, since there are conditions in `WHERE`. Try this: ``` SELECT c.accountnumber, COALESCE(SUM(a.totalsales),0) AS Amount, c.companyname FROM Accounts c LEFT OUTER JOIN Sales a ON a.Accountnumber = c.Accountnumber AND a.Salesdate BETWEEN '1/1/2016' AND '1/27/2016' WHERE c.Accountnumber IN ('1', '2', '3', '4') GROUP BY c.Accountnumber, c.companyname ``` Just to clarify, the problem is not which JOIN is used, it can be either, but using WHERE condition ON non-existing (NULL) values, since all not matched values from outer joined table are NULL anyway, any condition applied, practically make those joins inner joins (unless they are IS NULL conditions), see: <http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins>
You should have two options. 1. Modify the query to select from the Accounts table first and then join the Sales table afterwards. ``` FROM Accounts c LEFT OUTER JOIN Sales a on (a.Accountnumber = c.Accountnumber) ``` 2. Use a RIGHT join instead of a LEFT one. ``` FROM Sales a RIGHT OUTER JOIN Accounts c on (a.Accountnumber = c.Accountnumber) ```
Query to return sales excludes results that are 0
[ "", "mysql", "sql", "ssms", "" ]
I am using a Select query to select Members, a variable that serves as a unique identifier, and transaction date, a Date format (MM/DD/YYYY). ``` Select Members , transaction_date, FROM table WHERE Criteria = 'xxx' Group by Members, transaction_date; ``` My ultimate aim is to count the # of unique members by month (i.e., a unique member in day 3, 6, 12 of a month is only counted once). I don't want to select any data, but rather run this calculation (count distinct by month) and output the calculation.
This will give distinct count per month. `SQLFiddle Demo` ``` select month,count(*) as distinct_Count_month from ( select members,to_char(transaction_date, 'YYYY-MM') as month from table1 /* add your where condition */ group by members,to_char(transaction_date, 'YYYY-MM') ) a group by month ``` So for this input ``` +---------+------------------+ | members | transaction_date | +---------+------------------+ | 1 | 12/23/2015 | | 1 | 11/23/2015 | | 1 | 11/24/2015 | | 2 | 11/24/2015 | | 2 | 10/24/2015 | +---------+------------------+ ``` You will get this output ``` +----------+----------------------+ | month | distinct_count_month | +----------+----------------------+ | 2015-10 | 1 | | 2015-11 | 2 | | 2015-12 | 1 | +----------+----------------------+ ```
You might want to try this. This might work. SELECT REPLACE(CONVERT(DATE,transaction\_date,101),'-','/') AS [DATE], COUNT(MEMBERS) AS [NO OF MEMBERS] FROM BAR WHERE REPLACE(CONVERT(DATE,transaction\_date,101),'-','/') IN ( SELECT REPLACE(CONVERT(DATE,transaction\_date,101),'-','/') FROM BAR ) GROUP BY REPLACE(CONVERT(DATE,transaction\_date,101),'-','/') ORDER BY REPLACE(CONVERT(DATE,transaction\_date,101),'-','/')
Changing a Select Query to a Count Distinct Query
[ "", "sql", "postgresql", "" ]
I want to exclude the weekend and the holiday from my table: [![enter image description here](https://i.stack.imgur.com/XkE7H.png)](https://i.stack.imgur.com/XkE7H.png) for example in this picture I would like to exclude the date 19.01. and 10.01 from my table or it should show only 0 in 10.01.2016. this is my code: ``` SELECT * FROM ( Select intervaldate as Datum, tsystem.Name as Name, SUM(case when Name = 'Maschine 1' then Units else 0 end) as Maschine1, Sum(case when Name = 'Maschine 2' then Units else 0 end) as Maschine2, Sum(case when Name = 'Maschine 3' then Units else 0 end) as Maschine3, from Count inner join tsystem ON Count.systemid = tsystem.id where IntervalDate BETWEEN @StartDateTime AND @EndDateTime and tsystem.Name in ('M101','M102','M103','M104','M105','M107','M109','M110', 'M111', 'M113', 'M114', 'M115') group by intervaldate, tsystem.Name ) as s ```
I think the best approach is create a table in your database and store all weekend and holidays dates then use that table to filter your query. Something like this: ``` SELECT * FROM ( Select intervaldate as Datum, tsystem.Name as Name, SUM(case when Name = 'Maschine 1' then Units else 0 end) as Maschine1, Sum(case when Name = 'Maschine 2' then Units else 0 end) as Maschine2, Sum(case when Name = 'Maschine 3' then Units else 0 end) as Maschine3, from Count inner join tsystem ON Count.systemid = tsystem.id where IntervalDate BETWEEN @StartDateTime AND @EndDateTime and IntervalDate NOT IN (select WeekendOrHolidayDate from MyWeekendAndHolidayTable) and tsystem.Name in ('M101','M102','M103','M104','M105','M107','M109','M110', 'M111', 'M113', 'M114', 'M115') group by intervaldate, tsystem.Name ) as s ``` The below query will exclude weekdays only. There is no way to produce holidays from SQL if you want to exclude also holidays you have to store it somewhere in your database. ``` SELECT * FROM ( Select intervaldate as Datum, tsystem.Name as Name, SUM(case when Name = 'Maschine 1' then Units else 0 end) as Maschine1, Sum(case when Name = 'Maschine 2' then Units else 0 end) as Maschine2, Sum(case when Name = 'Maschine 3' then Units else 0 end) as Maschine3, from Count inner join tsystem ON Count.systemid = tsystem.id where IntervalDate BETWEEN @StartDateTime AND @EndDateTime and ((DATEPART(dw, IntervalDate) + @@DATEFIRST) % 7) NOT IN (0, 1) and tsystem.Name in ('M101','M102','M103','M104','M105','M107','M109','M110', 'M111', 'M113', 'M114', 'M115') group by intervaldate, tsystem.Name ) as s ``` Let me know if this helps you.
Right click the row group in your tablix and select Row Visibility. In the Change display options, select Show or hide based on an expression. In the expression, this should work: ``` =iif(CDate(Fields!Date.Value).DayOfWeek = DayOfWeek.Saturday, true, iif(CDate(Fields!Date.Value).DayOfWeek = DayOfWeek.Sunday, true, false)) ``` If there's a ton of data, for performance reasons I'd recommend trimming these rows on the database side.
exclude some data from a table SSRS
[ "", "sql", "reporting-services", "" ]
This might be a simple solution but its a headache to me from last couple of hours when i run this query ``` insert into payment set id ='13914', amount ='48.00', discount ='0.00', total ='48.00', fees ='2', charges ='', staus ='2' ``` The value `48.00` saves in db column as just `48`. db column type is double.
Easily found in [docs](http://dev.mysql.com/doc/refman/5.7/en/floating-point-types.html): > Because floating-point values are approximate and not stored as exact values, attempts to treat them as exact in comparisons may lead to problems. They are also subject to platform or implementation dependencies. Further [more](http://dev.mysql.com/doc/refman/5.7/en/problems-with-float.html): > Attempts to treat floating-point values as exact in comparisons may lead to problems. They are also subject to platform or implementation dependencies. The FLOAT and DOUBLE data types are subject to these issues. For DECIMAL columns, MySQL performs operations with a precision of 65 decimal digits, which should solve most common inaccuracy problems. So rather use `DECIMAL`
Your data is stored correctly. 48 == 48.00 when you use DOUBLE. When you retrieve your data try ``` SELECT ROUND(amount,2) amount, ROUND(discount, 2) discount FROM payment ``` if you really have to see the `.00` at the end of your numbers. And please, *please*, for the good of the profession and your future users, learn how floating point numbers work.
Saving a 48.00 value to MySQL data type double it saves as 48
[ "", "mysql", "sql", "floating-point", "" ]
What this is doing is selecting all columns from TABLE where a specific date time column is between last Sunday and this coming Saturday, 7 days total (no matter what day of the week you are running the query on) I would like to have help converting the below statement into Oracle since I found out that it will not work on Oracle. ``` SELECT * FROM TABLE WHERE DATE_TIME_COLUMN BETWEEN current date - ((dayofweek(current date))-1) DAYS AND current date + (7-(dayofweek(current date))) DAYS ```
After poking around a bit more I was able to find something that worked for my specific problem with no administrator restrictions for whatever reason: ``` SELECT * FROM TABLE WHERE DATE_TIME_COLUMN BETWEEN TIMESTAMPADD(SQL_TSI_DAY, DayOfWeek(Current_Date)*(-1) + 1, Current_Date) AND TIMESTAMPADD(SQL_TSI_DAY, 7 - DayOfWeek(Current_Date), Current_Date) ```
Use `TRUNC()` to truncate to the start of the week: ``` SELECT * FROM TABLE WHERE DATE_TIME_COLUMN BETWEEN trunc(sysdate, 'WW') and trunc(sysdate + 7, 'WW'); ``` `sysdate` is the current system date, `trunc` truncates a data, and `WW` tells it to truncate to the week (rather than day, year, etc.).
DB2 to Oracle Conversion For Basic Date Time Column Between Clause
[ "", "sql", "oracle", "date", "db2", "between", "" ]
My first post, so bear with me. I want to sum based upon a value that is broken by dates but only want the sum for the dates, not for the the group by item in total. Have been working on this for days, trying to avoid using a cursor but may have to. Here's an example of the data I'm looking at. BTW, this is in Oracle 11g. ``` Key Time Amt ------ ------------------ ------ Null 1-1-2016 00:00 50 Null 1-1-2016 02:00 50 Key1 1-1-2016 04:00 30 Null 1-1-2016 06:00 30 Null 1-1-2016 08:00 30 Key2 1-1-2016 10:00 40 Null 1-1-2016 12:00 40 Key1 1-1-2016 14:00 30 Null 1-2-2016 00:00 30 Key2 1-2-2016 02:00 35 ``` The final result should look like this: ``` Key Start Stop Amt ------ ---------------- ---------------- ----- Null 1-1-2016 00:00 1-1-2016 02:00 100 Key1 1-1-2016 04:00 1-1-2016 08:00 90 Key2 1-1-2016 10:00 1-1-2016 12:00 80 Key1 1-1-2016 14:00 1-2-2016 00:00 60 key2 1-2-2016 02:00 1-2-2016 02:00 35 ``` I've been able to get the Key to fill in the Nulls. The key isn't always entered in but is assumed to be the value until actually changed. ``` SELECT key ,time ,amt FROM ( SELECT DISTINCT amt, time, ,last_value(amt ignore nulls) OVER ( ORDER BY time ) key FROM sample ORDER BY time, amt ) WHERE amt > 0 ORDER BY time, key NULLS first; ``` But when I try to get just a running total, it sums on the key even with the breaks. I cannot figure out how to get it break on the key. Here's my best shot at it which isn't very good and doesn't work correctly. ``` SELECT key,time, amt , sum(amt) OVER (PARTITION BY key ORDER BY time) AS running_total FROM (SELECT key, time, amt FROM (SELECT DISTINCT amt, time, last_value(amt ignore nulls) OVER (ORDER BY time) key FROM sample ORDER BY time, amt ) WHERE amt > 0 ORDER BY time, key NULLS first ) ORDER BY time, key NULLS first; ``` Any help would be appreciated. Maybe using cursor is the only way. Match sample data.
In order to get the sums you are looking for you need a way to group the values you are interested in. You can generate a grouping ID by using the a couple of `ROW_NUMBER` analytic functions, one partitioned by the key value. However due to your need to duplicate the `KEY` column values this will need to be done in a couple of stages: ``` WITH t1 AS ( SELECT dta.* , last_value(KEY IGNORE NULLS) -- Fill in the missing OVER (ORDER BY TIME ASC) key2 -- key values FROM your_data dta ), t2 AS ( SELECT t1.* , row_number() OVER (ORDER BY TIME) -- Generate a - row_number() OVER (PARTITION BY key2 -- grouping ID ORDER BY TIME) gp FROM t1 ) SELECT t2.* , sum(amt) OVER (PARTITION BY gp, key2 ORDER BY TIME) running_sums FROM t2; ``` The above query creates a running sum of AMT that restarts every time the key value changes. Whereas the following query used in place of the last select statement above gives the requested results, which I wouldn't term a running sum. ``` SELECT key2 , MIN(TIME) start_time , MAX(TIME) stop_time , sum(amt) amt FROM t2 GROUP BY key2, gp; ``` To see the full time values you may want to either alter your session `NLS_DATE_FORMAT` as below: ``` ALTER SESSION SET NLS_DATE_FORMAT='DD-MM-RRRR HH24:MI:SS'; ``` Or wrap each date column in a `TO_CHAR` function for output purposes.
I'm not sure what your example data has to do with the queries (your sample data is one table, for instance and the sample queries have many queries). But, for assigning the keys, you can use `LAG()` with the `IGNORE NULLS` option: ``` select s.*, lag(key ignore nulls) over (order by start) as new_key from sample s; ``` Then, you want to group groups of the same key together. One method is a difference of row numbers. The final step is aggregation: ``` select new_key, min(time), max(time), sum(amount) from (select s.*, (row_number() over (order by start) - row_number() over (partition by new_key order by start) ) as grp from (select s.*, lag(key ignore nulls) over (order by start) as new_key from sample s ) s ) s group by new_key, grp; ```
Running total over repeating group by items based on time in Oracle SQL
[ "", "sql", "oracle", "sum", "window-functions", "" ]
I tried with below queries and bit confused about working of `EXISTS` clause, although I know it evaluates for `TRUE/FALSE`. Below #1 SQL sub-query anyhow returns 0, but still the result of the combined query is 1. ``` 1. SELECT 1 WHERE EXISTS ( SELECT 0 WHERE 1 = 1 ) 2. SELECT 1 WHERE EXISTS ( SELECT 0 WHERE 1 = 0 ) ``` Also, if `EXISTS` clause evaluates for `TRUE/FALSE`, then why does the below one not work? ``` SELECT 1 WHERE EXISTS ( 1 ) ``` Please help me understand this situation. **EDIT:** `EXISTS` clause evaluates for `TRUE/FALSE`. The condition is `TRUE` if a sub-query returns any result. Then why `EXISTS ( 1 )` does not work ? It is a `TRUE` condition.
From documentation [EXISTS](https://msdn.microsoft.com/en-us/library/ms188336.aspx): > Specifies a subquery to test for the existence of rows. ``` SELECT 1 WHERE EXISTS ( SELECT 0 WHERE 1 = 1 ) -- there is row SELECT 1 WHERE EXISTS ( SELECT 0 WHERE 1 = 0 ) -- no row returned by subquery SELECT 1 WHERE EXISTS ( 1 ) -- not even valid query `1` is not subquery ``` Keep in mind that it checks rows not values so: ``` SELECT 1 WHERE EXISTS ( SELECT NULL WHERE 1 = 1 ) -- will return 1 ``` `LiveDemo` **EDIT:** > This seems contradictory with the sentence " EXISTS clause evaluates for TRUE/FALSE" ? `EXISTS` operator tests for the existence of rows and it returns `TRUE/FALSE`. So if subquery returns: ``` ╔══════════╗ ╔══════════╗ ╔══════════╗ ╔══════════╗ β•‘ subquery β•‘ β•‘ subquery β•‘ β•‘ subquery β•‘ β•‘ subquery β•‘ ╠══════════╣ ╠══════════╣ ╠══════════╣ ╠══════════╣ β•‘ NULL β•‘ β•‘ 1 β•‘ β•‘ 0 β•‘ β•‘anything β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β• β•šβ•β•β•β•β•β•β•β•β•β•β• β•šβ•β•β•β•β•β•β•β•β•β•β• β•šβ•β•β•β•β•β•β•β•β•β•β• ``` Then `EXISTS (subquery)` -> `TRUE`. If subquery returns (no rows): ``` ╔══════════╗ β•‘ subquery β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β• ``` Then `EXISTS (subquery)` -> `FALSE`.
EXISTS returns true when the subquery within it has any rows. A logically equivalent (but not recommended) way of rewriting an EXISTS expression is: ``` SELECT 1 WHERE (SELECT COUNT(*) FROM (SELECT 0 WHERE 1 = 1)) > 0 ``` In this rewriting, your last query looks like: ``` SELECT 1 WHERE (SELECT COUNT(*) FROM 1) > 0 ``` which you should see doesn't make sense.
How does the EXISTS Clause work in SQL Server?
[ "", "sql", "sql-server", "" ]
## my table structure ``` id zoneid status 1 35 IN starting zone 2 35 OUT 1st trip has been started 3 36 IN 4 36 IN 5 36 OUT 6 38 IN last station zone 1 trip completed 7 38 OUT returning back 2nd trip has start 8 38 OUT 9 36 IN 10 36 OUT 11 35 IN when return back in start zone means 2nd trip complete 12 35 IN 13 35 IN 14 35 OUT 3rd trip has been started 15 36 IN 16 36 IN 17 36 OUT 18 38 IN 3rd trip has been completed 19 38 OUT 4th trip has been started 20 38 OUT 21 36 IN 22 36 OUT 23 35 IN 4th trip completed 24 35 IN ``` now i want a sql query, so i can count no of trips. i do not want to use status field for count **edit** i want result total trips where 35 is the starting point and 38 is the ending point(this is 1 trip), when again 35 occures after 38 means 2 trip and so on.
So you don't want to look at the status, but only look at the `zoneid` changes ordered by id. `zoneid` 36 is irrelevant, so we select 35 and 38 only, order them by id and count changes. We detect changes by comparing a record with the previous one. We can look into a previous record with LAG. ``` select sum(ischange) as trips_completed from ( select case when zoneid <> lag(zoneid) over (order by id) then 1 else 0 end as ischange from trips where zoneid in (35,38) ) changes_detected; ```
I am suggesting this without any testing. Does the following query produce the correct number of rows? Note if there is a date\_created (datetime) column then I would suggest using that column to order by instead of id. ``` select ca.in_id, t.id as out_id, ca.in_status, t.status as out_status from table1 t cross apply ( select top (1) id as in_id, status as in_status from table1 where table1.id < t.id and zoneid = 35 order by id DESC ) ca where t.zoneid = 38 /* and conditions for selecting one day only */ ``` If that logic is correct then just use COUNT(\*) instead of the column list. ``` CREATE TABLE Table1 ("id" int, "zoneid" int, "status" varchar(3), "other" varchar(54)) ; INSERT INTO Table1 ("id", "zoneid", "status", "other") VALUES (1, 35, 'IN', 'starting zone'), (2, 35, 'OUT', '1st trip has been started'), (3, 36, 'IN', NULL), (4, 36, 'IN', NULL), (5, 36, 'OUT', NULL), (6, 38, 'IN', 'last station zone 1 trip completed'), (7, 38, 'OUT', 'returning back 2nd trip has start'), (8, 38, 'OUT', NULL), (9, 36, 'IN', NULL), (10, 36, 'OUT', NULL), (11, 35, 'IN', 'when return back in start zone means 2nd trip complete'), (12, 35, 'IN', NULL), (13, 35, 'IN', NULL), (14, 35, 'OUT', '3rd trip has been started'), (15, 36, 'IN', NULL), (16, 36, 'IN', NULL), (17, 36, 'OUT', 'other'), (18, 38, 'IN', '3rd trip has been completed'), (19, 38, 'OUT', '4th trip has been started'), (20, 38, 'OUT', NULL), (21, 36, 'IN', NULL), (22, 36, 'OUT', NULL), (23, 35, 'IN', '4th trip completed'), (24, 35, 'IN', NULL) ; ```
count trip in sql server
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I'm working on a SQL quiz as follows: Write an SQL Statement to retrieve all thepeople who work on the same projects assmith with the same amount of hours withrespect to each Project. With the sampledata, only Smith and Brown should be retrieved. Oaks is disqualified since Oakshas worked on Project Y for 10 hours only (instead of 20 as smith has) The table: ``` | name | project | hours | |-------|---------|-------| | Smith | X | 10 | | Smith | Y | 20 | | Doe | Y | 20 | | Brown | X | 10 | | Doe | Z | 30 | | Chang | X | 10 | | Brown | Y | 20 | | Brown | A | 10 | | Woody | X | 10 | | Woody | Y | 10 | ``` I came up with this: ``` SELECT * INTO #temp FROM workson WHERE name='smith' SELECT * from workson as w WHERE project IN (SELECT project FROM #temp WHERE project=w.project AND hours=w.hours ) DROP TABLE #temp ``` Results: ``` name project hours Smith X 10 Smith Y 20 Doe Y 20 Brown X 10 Chang X 10 Brown Y 20 Woody X 10 ``` But the question expects only Smith and Brown to be returned. I can't figure out how to filter the others out in any kind of elegant way. Thanks.
``` select t1.* from workson t1 inner join workson t2 on t2.name = 'Smith' and t2.project = t1.project and t2.hours = t1.hours where t1.name in ( select i1.name from workson i1 inner join workson i2 on i2.name = 'Smith' and i2.project = i1.project and i2.hours = i1.hours group by i1.name having count(*) = (select count(*) from workson where name = 'Smith') ) ``` <http://sqlfiddle.com/#!3/74566/2/0>
I had some problems with the above answer, but it gave me a very good framework so I can't really take credit for this answer: ``` SELECT name, project, hours FROM workson w2 WHERE name IN (SELECT name FROM workson w INNER JOIN (SELECT project, hours FROM workson WHERE name = 'Smith') q1 ON q1.project = w.project AND q1.hours = w.hours GROUP BY w.name HAVING COUNT (*) = (SELECT COUNT(*) FROM workson WHERE name = 'Smith')) AND project IN (SELECT project FROM workson WHERE name = 'Smith') ```
SQL - Filtering result to match TWO columns in TWO different rows
[ "", "sql", "select", "" ]
[![enter image description here](https://i.stack.imgur.com/n7IZN.jpg)](https://i.stack.imgur.com/n7IZN.jpg) The following query I tried... ``` select d.deptID, max(tt.total) from dept d, (select d.deptID, d.deptName, sum(days) as total from vacation v, employee e, dept d where v.empId = e.empID and d.deptID = e.deptID group by d.deptID, d.deptName) tt where d.deptID = tt.deptID group by d.deptName; --having max(tt.total); ```
Try using limit since your inner query already does the calculation. ``` select TOP 1 * from ( select d.deptID, d.deptName, sum(days) as total from vacation v, employee e, dept d where v.empId = e.empID and d.deptID = e.deptID group by d.deptID, d.deptName) order by total desc; ``` Depends on the dbms you're using.. this is for mysql In oracle use where rownum = 1 In sql server use SELECT TOP 1 \*
Using Top: ``` Select top 1 with ties * from (Select D.DepartmentName, sum(V.Days) as SumDays from Vacations V inner join Employee E on E.EmployeeID=V.EmployeeID inner join Department D on D.DepartmentID=E.DepartmentID group by D.DepartmentName)SumDays Order by SumDays desc ```
What would be the result of the following query?
[ "", "sql", "" ]
I have three tables: tblAreas, which describes various areas of the UK; tblNewsletters, which lists quarterly dates when we publish newsletters; tblIssues, which is a many-to-many table linking the previous two. tbl Issues describes each newsletter produced by each area in each quarter (one newsletter per area per quarter). I want to find those areas which have not produced a newsletter in a given quarter. To make a start, I did not attempt to restrict output to a particular quarter, but couldn't even get that to work. Here is my code: ``` SELECT tblArea.ID, tblArea.AreaName FROM tblIssues WHERE NOT EXISTS (SELECT NewsletterLookup FROM tblIssues WHERE tblIssues.AreaLookup = tblArea.ID); ```
``` Select * from tblArea a left join tblIssues i on i.AreaId = a.Id where i.AreaId is null ``` For a specific quarter, you will have to use a subquery. ``` Select * from tblArea a left join (select AreaId from tblIssues ii inner join Newsletters n on n.Id = ii.NewsLetterId where n.IssueDate = #12/31/2015# ) i on i.AreaId = a.Id where i.AreaId is null ``` Not tested..sorry
You need to first build the cartesian product of all areas and newsletters, which is `SELECT a.ID, n.ID FROM tblAreas a, tblNewsletter n` *(just saw there is no `cross join` in access)* Then you need to cross out all issues, that do actually exist. Normally you would use a syntax like `MINUS SELECT i.AreaLookup, i.NewsletterID from tblIssues i` But that doesn't exist in MSAccess, so you could use a `NOT IN` workaround for that, which would look something like `WHERE a.id & "-" & n.ID NOT IN ( SELECT i.AreaLookup & "-" & i.NewsletterID FROM tblIssues i )`
Find those who have NOT written a letter
[ "", "sql", "ms-access", "vba", "" ]
I need to show related record id based document no as reference in same table. I try many time and direction but cannot get a right output. This below are table and data: [Table A](https://i.stack.imgur.com/l0aBE.png) Basically Record ID no 2,4 and 10 are related based on Reference Document No. For example if I select record id no 4 I still can list all related document from first to last transaction. I hope someone who cross this problem or anybody have a solution from SQL Statement or coding on .net please as long I can show this result.
I assume you're using postgres, since this is tagged with it. You can make a recursive query that will accomplish what you want when you're querying for a given docno: ``` WITH RECURSIVE t AS ( SELECT docno, refdocno FROM <table> WHERE docno = 'T0003' UNION SELECT blah.docno, blah.refdocno FROM <table> JOIN t ON t.docno = blah.refdocno OR t.refdocno = blah.docno ) SELECT * FROM t; ``` Note: you'll have to put the docno you're searching for in the with statement. If you need other columns, you can put them in there as well. PS. I assume that row 10's refdocno was supposed to be T0003 in your example
Are you wanting to link all documents on the parent document? If you wanted to, say get a document and find all related documents for docno XY001 (assuming XY001 is the parent Document otherwise link the other way around) You could use ``` SELECT * FROM TableA AS parentDoc LEFT OUTER JOIN TableA AS referentialDoc ON parentDoc.docno = referentialDoc.refdocno WHERE parentDoc.docno = XY001 ``` Of Course you can change the WHERE clause to ``` WHERE referentialDoc.docno IS NOT NULL ``` to show only those with a referential document. also note that this is only good for a structure of parent - child. for structure of grandparent-parent-child and more, you will need to either expand the Query or do it programmatically.
How To Find Related Record ID Based Reference Document No on SQL Statement
[ "", "sql", ".net", "postgresql", "relationship", "" ]
I've got a 2 part problem where through the use of google managed to find the answer to the first part at [SQL get the last date time record](https://stackoverflow.com/questions/16550703/sql-get-the-last-date-time-record) User **Osy** code work really well for me, Osy's code below! ``` select filename, dates, status from yt a where a.dates = (select max(dates) from yt b where a.filename = b.filename) ``` The query returns only the latest dates for each filename. If I could just stick to the same example question as url above. This is the table used in the example: `yt` table: ``` +---------+------------------------+-------+ |filename |Dates |Status | +---------+------------------------+-------+ |abc.txt |2012-02-14 12:04:45.397 |Open | |abc.txt |2012-02-14 12:14:20.997 |Closed | |abc.txt |2013-02-14 12:20:59.407 |Open | |dfg.txt |2012-02-14 12:14:20.997 |Closed | |dfg.txt |2013-02-14 12:20:59.407 |Open | +---------+------------------------+-------+ ``` The second part of the problem: What I am now trying to achieve is that I have a second table and would like to join the results from the query above on the filename and return the user. **Table2** ``` +--------+--------+ |filename |ref | +---------+--------+ |abc.txt |Heating | |dfg.txt |Cooling | +---------+---- ---+ ``` Result that I am trying to achieve from the query is as follows, using Osy's code above to return only the latest for each entry per device, and then to display the **ref** column and not display the **filename** Example: ``` +---------+------------------------+-------+ |ref |Dates |Status | +---------+------------------------+-------+ |Heating |2013-02-14 12:20:59.407 |Open | |Cooling |2013-02-14 12:20:59.407 |Open | +---------+------------------------+-------+ ``` I can use a inner join directly on the yt table but cannot get is to combine (nest) with the code from Osy above. Using SQL Server 2012. Please let me know if I left out anything. Thank you.
Continuing with your posted example code, again perform one more `JOIN` with `Table2` like ``` select t2.ref, xx.Dates, xx.Status from Table2 t2 join ( select filename, dates, status from yt a where a.dates = ( select max(dates) from yt b where a.filename = b.filename )) xx on t2.filename = xx.filename; ```
This might be faster way to do it. ``` SELECT [ref], [dates], [status] FROM (SELECT a.[filename], a.[dates], a.[status], t2.[ref], ROW_NUMBER() OVER (PARTITION BY a.[filename] ORDER BY a.[dates] DESC) [Rn] FROM yt a JOIN Table2 t2 ON a.[filename] = t2.[filename] ) t WHERE t.Rn = 1 ```
SQL Server query to join two tables
[ "", "sql", "sql-server", "date", "join", "" ]
This question might be a duplicate but I’ve tried all the answers of the other questions and none of the answers did help. I'm trying to compare a datetime value against a static one ( I need every record with a date lager then 1/1/2016 ) ``` CREATE PROCEDURE Cursusoverzicht as SELECT tblBijeenkomst.bijeenkomstdatum, tblCursussen.cursus_id, tblCursussen.cursustitel FROM tblCursussen INNER JOIN [dbo].[tblCursusDocenten] on [dbo].[tblCursusDocenten].[cursus_id] = [dbo].[tblCursussen].[cursus_id] INNER JOIN [dbo].[tblBijeenkomst] on [dbo].[tblBijeenkomst].[docent_id] = [dbo].[tblCursusDocenten].[docent_id] WHERE tblBijeenkomst.bijeenkomstdatum > '2016/1/1 00:00:00:000' ``` This keeps returning 0 records, anyone any idea? Sorry for the dutch names
It appears you are almost there: ``` CREATE PROCEDURE Cursusoverzicht as SELECT tblBijeenkomst.bijeenkomstdatum ,tblCursussen.cursus_id ,tblCursussen.cursustitel FROM tblCursussen INNER JOIN [dbo].[tblCursusDocenten] on [dbo].[tblCursusDocenten].[cursus_id] = [dbo].[tblCursussen].[cursus_id] INNER JOIN [dbo].[tblBijeenkomst] on [dbo].[tblBijeenkomst].[docent_id] = [dbo].[tblCursusDocenten].[docent_id] and tblBijeenkomst.bijeenkomstdatum > CONVERT (date, '2016-01-01T00:00:00:000') ```
Try adjusting the format of your datetime to 2016-01-01 00:00:00.000. This is the SQL Server default format. You can test this for yourself using the [GETDATE()](https://msdn.microsoft.com/en-GB/library/ms188383.aspx) function. This returns the current date and time as a [DATETIME](https://msdn.microsoft.com/en-GB/library/ms187819.aspx). ``` SELECT GETDATE() AS FormatTest ; ``` Returns ``` FormatTest ---------------- 2016-01-29 10:40:20.567 ```
Comparing datetime values in MS SQL
[ "", "sql", "sql-server", "database", "datetime", "" ]
I have two table like belew ``` Table1: CId -- Name -- Price -- MId 1 A 100 -- 1 2 B 110 -- 1 3 C 120 -- 1 4 D 120 -- 2 Table2: Id -- UserId -- CId -- Price 1 1 2 200 1 2 2 200 ``` I want to get data from Table one But if there is a record in Table2 that refrenced to Table1 CId then Price of Table2 replace with Price of Table1. For example my UserId is 1 AND MId is 1 if I get data by mentioned senario I should get in result; ``` 1 A 100 2 B 200 3 C 120 ```
You can get by `left join` where you check `null` value in second table. if second price is `null` then use `first table's price`. ``` SELECT t1.CId, t1.name CASE WHEN t2.price IS NULL THEN t1.price ELSE t2.price END AS Price FROM table1 t1 LEFT JOIN table2 t2 ON t1.CId = t2.CId WHERE WHERE t1.MId = 1 AND (t2.UserId = 1 OR t2.UserId IS NULL); ``` Try This Hopeful this will work.
[SQL FIDDLE](http://sqlfiddle.com/#!9/b26d2/1) try this ``` select t1.cid,t1.name, case when t2.cid is null then t1.price else t2.price end as Price from table1 t1 left join table2 t2 on t1.cid =t2.cid where t1.mid =1 AND (t2.UserId = 1 OR t2.UserId IS NULL); ```
join two table to replace new value from 2th table if exist
[ "", "mysql", "sql", "database", "join", "" ]
I'm using SQL Server and I'm having a difficult time trying to get the results from a SELECT query that I want. I've to select records from 3 tables given below: Client(clientID,name,age, dateOfBirth) Address(clientID, city, street ) Phone(ClientID, personalPhone, officePhone, homePhone) In my input I could have (dateOfBirth, steet, homePhone) and i need Disntinct ClientIDs in result. These input values are optional. its not mandatory that every time all these input values will have value, in some scenarios only street and homePhone is provied, or sometime only street is provided. There is "OR" relationship in the arguments! like if i pass homePhone even then record(s) should be returned.
This is almost the same as Mureinik's answer, but he didn't use left outer joins, so if you had a client with an address but without a phone number, they would be excluded from the result-set UNLESS you use an outer join: ``` SELECT DISTINCT client_id FROM client c LEFT OUTER JOIN address a ON c.client_id = a.client_id LEFT OUTER JOIN phone p ON c.client_id = p.client_id WHERE (@date_of_birth IS NULL OR c.date_of_birth = @date_of_birth) AND (@street IS NULL OR @street = a.street) AND (@home_phone IS NULL OR @home_phone = p.home_phone) ```
You could short circuit the logic using `or` conditions. Let's assume you denote the arguments with a `@`: ``` SELECT DISTINCT client_id FROM client c LEFT JOIN address a ON c.client_id = a.client_id LEFT JOIN phone p ON c.client_id = p.client_id WHERE (@date_of_birth IS NULL OR c.date_of_birth = @date_of_birth) AND (@street IS NULL OR @street = a.street) AND (@home_phone IS NULL OR @home_phone = p.home_phone) ```
Select from multiple tables with OR condtions
[ "", "sql", "sql-server", "" ]
I have a column name `source` which has values like `JBInfotech_CLC_4120_20160128`. How do I update the last character to `7`. There are hundreds of records I want to update at the same time. Which is these records: ``` SELECT * FROM [JBINFOTECH].[dbo].[leads] WHERE id <= 985 ORDER BY id DESC; ``` This is permanently updating the record not select.
You can try like this, ``` DECLARE @table TABLE ( col1 VARCHAR(100) ) INSERT INTO @table VALUES ('ABCDEF123'), ('JBInfotech_CLC_4120_20160128') SELECT * FROM @table UPDATE @table SET col1 = Stuff(col1, Len(col1), 1, '7') SELECT * FROM @table ```
Try: ``` update [JBINFOTECH].[dbo].[leads] Set [Source]=Concat(Left([Source],len([Source])-1), '7') WHERE id <= 985 ```
SQL Server update query replace last character of a value
[ "", "sql", "sql-server", "sql-update", "" ]
I am using SQL Server Express 2014 and I need to pull out the last record for few (3 for now) tags with different IDs from one table. So far I made it but not at all. I am using ``` SELECT TOP 1 [TagItemId], [TagValue] FROM [DB].[dbo].[Table] where [TagItemId] like 'Random.Int1' order by [TagTimestamp] desc SELECT TOP 1 [TagItemId], [TagValue] FROM [DB].[dbo].[Table] where [TagItemId] like 'Random.Int2' order by [TagTimestamp] desc SELECT TOP 1 [TagItemId], [TagValue] FROM [DB].[dbo].[Table] where [TagItemId] like 'Random.Int3' order by [TagTimestamp] desc ``` and the result is what I need, but not exactly. I need to get the three results in single table like: ``` TagItemId TagValue Random.Int1 55 Random.Int2 75 Random.Int3 23` ``` and not like: ``` TagItemId TagValue Random.Int1 55 TagItemId TagValue Random.Int2 75 TagItemId TagValue Random.Int3 23` ``` The reason is that I need to use the data for a chart. Best regards and thanks!
You could do this using Row\_Number ``` SELECT [TagItemId], [TagValue] FROM ( SELECT [TagItemId], [TagValue], ROW_NUMBER() OVER (PARTITION BY [TagItemId] ORDER BY [TagTimestamp] DESC) Rn FROM [DB].[dbo].[Table] WHERE [TagItemId] IN ('Random.Int1','Random.Int2','Random.Int3') ) t WHERE Rn = 1 ```
There are several ways to accomplish this: ``` SELECT MT.TagItemID, MT.TagValue FROM My_Table MT INNER JOIN ( SELECT TagItemID, MAX(TagTimestamp) FROM My_Table WHERE MT.TagItemID IN ('Random.Int1', 'Random.Int2', 'Random.Int3') GROUP BY TagItemID) SQ ON SQ.TagItemID = MT.TagItemID WHERE MT.TagItemID IN ('Random.Int1', 'Random.Int2', 'Random.Int3') ``` Or: ``` SELECT MT.TagItemID, MT.TagValue FROM My_Table MT WHERE MT.TagItemID IN ('Random.Int1', 'Random.Int2', 'Random.Int3') AND NOT EXISTS (SELECT * FROM My_Table MT2 WHERE MT2.TagItemID = MT.TagItemID AND MT2.Timestamp > MT.Timestamp) ``` Or: ``` ;WITH CTE_WithRowNums AS ( SELECT MT.TagItemID, MT.TagValue, ROW_NUMBER() OVER(PARTITION BY TagItemID ORDER BY Timestamp DESC) AS row_num FROM My_Table MT ) SELECT TagItemID, TagValue FROM CTE_WithRowNums WHERE row_num = 1 ```
Need to join 3 select queries that refers to same table
[ "", "sql", "sql-server", "select", "" ]
Consider a table T, with columns 1. ID = e.g Customer ID 2. Expense = amount spent on buying an item 3. Date = Date of transaction 4. Item = item bought I want to perform following select operation on T. I want to find for each ID, the most expensive item that was bought on an earliest date. For example if the table had three records like as follows ``` ID Expense Date item 1 1000 10/20/2015 A 1 1000 10/21/2015 B 1 200 10/15/2015 C ``` It should pick the first row. I wrote something like the following but it does not seem to work ``` select T.id. T.expense, T.date, T.item from T inner join (select id, max(expense), min(date) from T group by id) w on T.id = w.id and T.expense=w.expense and T.date=w.date; ``` Please give some suggestions. Thanks
Try: ``` Select * from (select row_number() over (partition by ID order by Expense desc, date asc) RN, id, expense, date, item from T)T where RN=1 ```
**Edit:** This answer is wrong, because the question was originally tagged wrong. I'd edit, but there is already another correct answer here. I'm leaving this here because it still demonstrates a viable solution that even works on MySql, but you should really vote for the `row_number()` solution. --- Think of doing this in three steps. First you need to find out what the max expense will be. Then you need to find the corresponding min date. Finally, you can get the whole record that matches those values. If it's possible to have multiple records for an ID that match both the max and the min, you'll need an additional step to get this down to a single record. Each of these steps can be accomplished via a JOIN on a subquery: ``` select .* FROM T tf --final INNER JOIN ( select Td.ID, Td.expense, MIN(Td.date) MinDate from T td --date INNER JOIN ( select ID, MAX(expense) MaxExpense from T te --expense GROUP BY ID ) e on td.ID = e.ID AND td.expense = e.MaxExpense GROUP BY ID, expense ) d ON tf.ID = d.ID AND tf.expense = d.expense AND tf.date = d.MinDate ``` Again, this is much simpler with a DB engine that supports Window functions or the APPLY operator. Here's an APPLY example you could write with Sql Server: ``` SELECT t1.* FROM T t1 CROSS APPLY ( SELECT TOP 1 * FROM T t2 WHERE t2.ID = t1.ID ORDER BY t2.expense DESC, t1.Date ) a ``` This feature is supported by Sql Server, Oracle, and Postgresql, and has been part of the ansi standard for more than 10 years now, and it's just one reason among several the MySql is quickly becoming a joke among anyone who has actually worked with more than one kind of database.
SQL query for the given select operation
[ "", "sql", "db2", "" ]
I have this query, ``` SELECT * FROM users WHERE user_ip IN (SELECT user_ip FROM users GROUP BY user_ip having count(*) > 1) ORDER BY user_ip ``` This works to list all users which has at least 1 repeated IP with another user. I need to order all users by total of repeated IP. ex. this users table ``` id, username, ip 1, user1, 1.1.1.1 2, user2, 2.2.2.2 3, user3, 1.1.1.1 4, user4, 4.4.4.4 5, user5, 2.2.2.2 6, user6, 2.2.2.2 ``` should print, ``` ip, username, total 2.2.2.2, user2, 3 2.2.2.2, user5, 3 2.2.2.2, user6, 3 1.1.1.1, user1, 2 1.1.1.1, user3, 2 4.4.4.4, user4, 1 ```
Here is an approach which uses an `INNER JOIN`: ``` SELECT u1.ip, u1.username, u2.total FROM users u1 INNER JOIN ( SELECT ip, COUNT(*) AS total FROM users GROUP BY ip ) u2 ON u1.ip = u2.ip ORDER BY u2.total DESC ``` Click the link below for a running demo: [**SQLFiddle**](http://sqlfiddle.com/#!9/b3982/2)
``` SELECT ip, username, count(*) total FROM user_ip WHERE ip in ( SELECT ip FROM user_ip GROUP BY 1 HAVING count(*) > 1 ) GROUP BY 1,2 ORDER BY 3 DESC,1,2 ```
List users with the same IP
[ "", "mysql", "sql", "" ]
I'm trying to compare two "lists" in same table and get records where `customerId` exists but `storeid` doesn't exist for that `customerid`. Lists (table definition) ``` name listid storeid customerid BaseList 1 10 100 BaseList 1 11 100 BaseList 1 11 102 NewList 2 11 100 NewList 2 12 102 NewList 2 12 103 ``` **Query:** ``` SELECT NewList.* FROM Lists NewList LEFT JOIN Lists BaseList ON BaseList.customerid = NewList.customerid WHERE BaseList.listid = 1 AND NewList.listid = 2 AND NewList.storeid <> BaseList.storeid AND NOT EXISTS (SELECT 1 FROM Lists c WHERE BaseList.customerid = c.customerid AND BaseList.storeid = c.storeid AND c.listid = 2) ``` Current result: ``` NewList 2 11 100 NewList 2 12 102 ``` But i'm expecting to only get the result ``` NewList 2 12 102 ``` as customerid 100 with storeid 11 exists. [Fiddle](http://www.sqlfiddle.com/#!6/a389de/3)
If the table definition contains a column `Name` (as you said), then the statement below returns your result. I didn't understand your select statement. ``` SELECT * from @table WHERE NAME = 'NewList' AND customerID IN (SELECT CustomerID FROM @table WHERE NAME = 'BaseList') AND storeID NOT IN (SELECT storeID FROM @table WHERE NAME = 'BaseList') ```
This dynamic pivot will show you all your list values and where else the same combination exists: I add one more group: ``` insert into Lists(name, listid, storeid, customerid) values('AnotherNew',3,11,100); insert into Lists(name, listid, storeid, customerid) values('AnotherNew',3,11,102); insert into Lists(name, listid, storeid, customerid) values('AnotherNew',3,10,100); ``` Here's the statement: EDIT: This new statement is - I think - better as it comes over the distinct combinations of customerid and storeid ``` DECLARE @listNames VARCHAR(MAX)= STUFF( ( SELECT DISTINCT ',[' + name + ']' FROM Lists FOR XML PATH('') ),1,1,''); DECLARE @SqlCmd VARCHAR(MAX)= ' WITH DistinctCombinations AS ( SELECT DISTINCT customerid,storeid FROM Lists AS l ) SELECT p.* FROM ( SELECT DistinctCombinations.* ,OtherExisting.name AS OtherName ,CASE WHEN l.listid IS NULL THEN '''' ELSE ''X'' END AS ExistingValue FROM DistinctCombinations LEFT JOIN Lists AS l ON DistinctCombinations.customerid=l.customerid AND DistinctCombinations.storeid=l.storeid OUTER APPLY ( SELECT x.name FROM Lists AS x WHERE x.customerid=l.customerid AND x.storeid=l.storeid ) AS OtherExisting ) AS tbl PIVOT ( MIN(ExistingValue) FOR OtherName IN (' + @ListNames + ') ) AS p'; EXEC(@SqlCmd); ``` The result ``` customerid storeid AnotherNew BaseList NewList 100 10 X X NULL 100 11 X X X 102 11 X X NULL 102 12 NULL NULL X 103 12 NULL NULL X ``` This is the approach before: ``` DECLARE @listNames VARCHAR(MAX)= STUFF( ( SELECT DISTINCT ',[' + name + ']' FROM Lists FOR XML PATH('') ),1,1,''); DECLARE @SqlCmd VARCHAR(MAX)= ' WITH DistinctLists AS ( SELECT DISTINCT listid FROM Lists AS l ) SELECT p.* FROM ( SELECT l.* ,OtherExisting.name AS OtherName ,CASE WHEN l.listid IS NULL THEN '''' ELSE ''X'' END AS ExistingValue FROM DistinctLists INNER JOIN Lists AS l ON DistinctLists.listid= l.listid CROSS APPLY ( SELECT x.name FROM Lists AS x WHERE x.listid<>l.listid AND x.customerid=l.customerid AND x.storeid=l.storeid ) AS OtherExisting ) AS tbl PIVOT ( MIN(ExistingValue) FOR OtherName IN (' + @ListNames + ') ) AS p'; EXEC(@SqlCmd); ``` And that is the result: ``` name listid storeid customerid AnotherNew BaseList NewList AnotherNew 3 10 100 NULL X NULL AnotherNew 3 11 100 NULL X X AnotherNew 3 11 102 NULL X NULL BaseList 1 10 100 X NULL NULL BaseList 1 11 100 X NULL X BaseList 1 11 102 X NULL NULL NewList 2 11 100 X X NULL ```
Compare records in database where one column exists but another doesn't
[ "", "sql", "sql-server", "" ]
I'm joining 2 tables and displaying data for InvNumber, InvAmount and JobNumber. I only need to display InvNumber and InvAmount in the first row. The Invoice has multiple Job numbers which should be displayed. HEre is the DDL ``` DECLARE @Date datetime; SET @Date = GETDATE(); DECLARE @TEST_DATA TABLE ( DT_ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED ,InvNumber VARCHAR(10) NOT NULL ,InvAmount VARCHAR(10) NOT NULL ,JobNumber VARCHAR(10) NOT NULL ); INSERT INTO @TEST_DATA (InvNumber, InvAmount,JobNumber) VALUES ('70001', '12056','J65448') ,('70001', '12056','J12566') ,('70001', '12056','J35222') ,('70001', '12056','J45222') ,('70001', '12056','456855') ,('70001', '12056','J55254') ; SELECT J.DT_ID ,InvNumber ,InvAmount ,JobNumber FROM @TEST_DATA AS J ```
You can do something like this with ROW\_NUMBER and CASE expressions. ``` SELECT DT_ID, CASE WHEN RN = 1 THEN InvNumber ELSE '' END InvNumber, CASE WHEN RN = 1 THEN InvAmount ELSE '' END InvAmount, JobNumber FROM (SELECT DT_ID, InvNumber, InvAmount, JobNumber, ROW_NUMBER() OVER (PARTITION BY InvNumber,InvAmount ORDER BY DT_ID) RN FROM @TEST_DATA ) j ```
you can't really easily do that with SQL. You can use the GROUP BY clause, in the select statement, to group the JobNumbers by Invoice though.
Display only first row in SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to find the number of sellers that made a sale last month but didn't make a sale this month. I have a query that works but I don't think its efficient and I haven't figured out how to do this for all months. ``` SELECT count(distinct user_id) as users FROM transactions WHERE MONTH(date) = 12 AND YEAR(date) = 2015 AND transactions.status = 'COMPLETED' AND transactions.amount > 0 AND transactions.user_id NOT IN ( SELECT distinct user_id FROM transactions WHERE MONTH(date) = 1 AND YEAR(date) = 2016 AND transactions.status = 'COMPLETED' AND transactions.amount > 0 ) ``` The structure of the table is: ``` +---------+------------+-------------+--------+ | user_id | date | status | amount | +---------+------------+-------------+--------+ | 1 | 2016-01-01 | 'COMPLETED' | 1.00 | | 2 | 2015-12-01 | 'COMPLETED' | 1.00 | | 3 | 2015-12-01 | 'COMPLETED' | 2.00 | | 1 | 2015-12-01 | 'COMPLETED' | 3.00 | +---------+------------+-------------+--------+ ``` So in this case, users with ID `2` and `3`, didn't make a sale this month.
Use conditional aggregation: ``` SELECT count(*) as users FROM ( SELECT user_id FROM transactions -- 1st of previous month WHERE date BETWEEN SUBDATE(SUBDATE(CURRENT_DATE, DAYOFMONTH(CURRENT_DATE)-1), interval 1 month) -- end of current month AND LAST_DAY(CURRENT_DATE) AND transactions.status = 'COMPLETED' AND transactions.amount > 0 GROUP BY user_id -- any row from previous month HAVING MAX(CASE WHEN date < SUBDATE(CURRENT_DATE, DAYOFMONTH(CURRENT_DATE)-1) THEN date END) IS NOT NULL -- no row in current month AND MAX(CASE WHEN date >= SUBDATE(CURRENT_DATE, DAYOFMONTH(CURRENT_DATE)-1) THEN date END) IS NULL ) AS dt ``` `SUBDATE(CURRENT_DATE, DAYOFMONTH(CURRENT_DATE)-1)` = first day of current month `SUBDATE(first day of current month, interval 1 month)` = first day of previous month `LAST_DAY(CURRENT_DATE)` = end of current month
if you want to generify it, you can use `curdate()` to get current month, and `DATE_SUB(curdate(), INTERVAL 1 MONTH)` to get last month (you will need to do some if clause for January/December though): ``` SELECT count(distinct user_id) as users FROM transactions WHERE MONTH(date) = MONTH(DATE_SUB(curdate(), INTERVAL 1 MONTH)) AND transactions.status = 'COMPLETED' AND transactions.amount > 0 AND transactions.user_id NOT IN ( SELECT distinct user_id FROM transactions WHERE MONTH(date) = MONTH(curdate()) AND transactions.status = 'COMPLETED' AND transactions.amount > 0 ) ``` as far as efficiency goes, I don't see a problem with this one
SQL Query to find rows that didn't occur this month
[ "", "mysql", "sql", "" ]
I have 3 tables: [![enter image description here](https://i.stack.imgur.com/4CLmy.png)](https://i.stack.imgur.com/4CLmy.png) I have tried this so far: ``` SELECT T1.P_id as id, T2.D_id, COALESCE(T2.count , 0) AS count FROM ( SELECT DISTINCT P_id FROM Comp_prod WHERE company_id = '15' ) AS T1 LEFT JOIN ( SELECT D_id, P_id, COUNT(P_id) AS count FROM userlogs WHERE company_id = '15' AND date BETWEEN 'daterstart' AND 'dateend' GROUP BY D_id, P_id ) AS T2 ON T2.P_id = T1.P_id ``` this is not working because I need to connect to another table: > departments how can I have another sub queries? any solution? it MySQL I am using
Hope this helps ``` select sub.C_id,sub.D_id,sub.P_id,count(Userlogs.D_id) from (select Comp_prod.C_id,D_id,P_id from Comp_prod join Departments on Comp_prod.C_id = Departments.C_id ORDER by Comp_prod.C_id asc, Departments.D_id ASC, Comp_prod.P_id ASC) as sub left join Userlogs on sub.C_id = Userlogs.C_id and sub.D_id = Userlogs.D_id and sub.P_id = Userlogs.P_id group by sub.C_id,sub.D_id,sub.P_id ```
You can generate all rows, using `cross join` and then bring in the existing results (if any) for aggregation: ``` select d.c_id, d.D_id, p.P_id, count(ul.c_id) as cnt from comp_prod p cross join departments d left join user_logs ul on ul.c_id = d.c_id and ul.d_id = d.d_id and ul.p_id = p.p_id group by d.c_id, d.D_id, p.P_id; ```
Issue with SQL joining
[ "", "mysql", "sql", "" ]
I have four tables : 1. Batch(batch\_id, batch\_start\_date, batch\_strength, course\_id) 2. Course(course\_id, course\_name, course\_category, course\_fees, course\_duration) 3. Enrollment(batch\_id, student\_id, enrollment\_date) 4. Student(student\_id, student\_name, student\_mail, student\_date\_of\_birth, student\_city, student\_phone, student\_qualification) Now according to the question I have to display the coursename which has 2nd highest revenue collection According to the above question here is my solution ``` select c.course_name, c.course_fees*count(c.course_name) Total_Revenue from course c join(batch b join enrollment e on b.batch_id=e.batch_id) on c.course_id=b.course_id group by c.course_name, c.course_fees order by Total_Revenue desc ; ``` **Now the problem is I am not able to extract the second row from the resulting table from the above code. How can I extract second row from the above resulting table?(Answer specific to Oracle 11g is appreciated)**
Non-analytic solution just for the fun of it: ``` with r as ( select min(c.course_name) as course_name, min(c.course_fees) * count(*) as revenue from course c inner join batch b on b.course_id = c.course_id inner join enrollment e on e.batch_id = b.batch_id group by c.course_id ) select course_name, revenue from r where revenue = (select max(revenue) from r where revenue < (select max(revenue) from r)) ``` This handles ties (in both first and second place.) I'm also going out on a limb and assuming that you really intended to group by `course_id`. This looks more like a class exercise so I don't expect there's any complication like historical fee information or anything like that going on. **EDIT** Based on your comment below it sounds like you may have multiple courses with the same name and different fees. I suspect your original query will not work correctly since you are also grouping on `course_fees`. Notice the change below uses a grouping on `course_name` and a summation on `course_fees`: ``` with r as ( select course_name, sum(c.course_fees) as revenue from course c inner join batch b on b.course_id = c.course_id inner join enrollment e on e.batch_id = b.batch_id group by c.course_name ) select course_name, revenue from r where revenue = (select max(revenue) from r where revenue < (select max(revenue) from r)) ```
Use `row_number()`: ``` select * from (select c.course_name, c.course_fees*count(c.course_name) as Total_Revenue, row_number() over (order by c.course_fees*count(c.course_name)) as seqnum from batch b join enrollment e on b.batch_id = e.batch_id join course c on c.course_id=b.course_id group by c.course_name, c.course_fees ) bec where seqnum = 2; ``` If you could have ties for first and still want the second place, then use `dense_rank()` instead of `row_number()`.
How to extract second highest row from a table
[ "", "sql", "oracle", "oracle11g", "top-n", "" ]
I have table that looks like this: ``` | id | user | data | -------------------- | 1 | 11 | aaa1 | | 2 | 11 | aaa2 | | 3 | 11 | aaa3 | | 4 | 22 | aaa4 | | 5 | 33 | aaa5 | | 6 | 33 | aaa6 | | 7 | 44 | aaa7 | ``` I want to select all rows, with all data, and I want to add data with `max id` per `user`, that should look like this: ``` | id | user | data | f_id | f_data | ------------------------------------ | 1 | 11 | aaa1 | 3 | aaa3 | | 2 | 11 | aaa2 | 3 | aaa3 | | 3 | 11 | aaa3 | 3 | aaa3 | | 4 | 22 | aaa4 | 4 | aaa4 | | 5 | 33 | aaa5 | 6 | aaa6 | | 6 | 33 | aaa6 | 6 | aaa6 | | 7 | 44 | aaa7 | 7 | aaa7 | ``` this is my attempt of query: ``` SELECT t1.*, t2.id AS f_id, t2.data AS f_data FROM table1 t1 LEFT JOIN table1 t2 ON t1.user=(SELECT MAX(t2.id) FROM table1 t2 WHERE t2.user = t1.user ) ``` **Update:** All the answers are correct, but when I run the Query on table with 80K+ rows, MySQL needs a lot of time to Execute the query. For my project I will add ajax so user could click on it and php would execute query for one row per click.
**[SQL Fiddle Demo](http://sqlfiddle.com/#!9/a24413/6)** ``` SELECT t1.*, t2.id AS f_id, t2.data AS f_data FROM table1 t1 JOIN table1 t2 ON t1.user = t2.user AND t2.id = (SELECT MAX(t2.id) FROM table1 t3 WHERE t3.user = t1.user) ```
This will do it. ``` SELECT t1.*, t3.id AS f_id, concat(SUBSTRING(t1.data, 1, CHAR_LENGTH(t1.data)-1),t3.id) AS f_data FROM table1 t1 INNER JOIN (select max(t2.id) as id, t2.user as userid from table1 t2 group by t2.user) t3 on t3.userid = t1.user ``` Sqlfiddle : <http://sqlfiddle.com/#!9/a24413/7>
MySQL select all rows having latest value per id
[ "", "mysql", "sql", "join", "max", "" ]
I have two tables, t1 and t2, with identical columns(id, desc) and data. But one of the columns, desc, might have different data for the same primary key, id. I want to select all those rows from these two tables such that t1.desc != t2.desc ``` select a.id, b.desc FROM (SELECT * FROM t1 AS a UNION ALL SELECT * FROM t2 AS b) WHERE a.desc != b.desc ``` For example, if t1 has (1,'aaa') and (2,'bbb') and t2 has(1,'aaa') and (2,'bbb1') then the new table should have (2,'bbb') and (2,'bbb1') However, this does not seem to work. Please let me know where I am going wrong and what is the right way to do it right.
`UNION ALL` dumps all rows of the second part of the query *after* the rows produced by the first part of the query. You cannot compare `a`'s fields to `b`'s, because they belong to different rows. What you are probably trying to do is locating records of `t1` with `id`s matching these of `t2`, but different description. This can be achieved by a `JOIN`: ``` SELECT a.id, b.desc FROM t1 AS a JOIN t2 AS b ON a.id = b.id WHERE a.desc != b.desc ``` This way records of `t1` with IDs matching records of `t2` would end up on the same row of joined data, allowing you to do the comparison of descriptions for inequality. > I want both the rows to be selected is the descriptions are not equal You can use `UNION ALL` between two sets of rows obtained through join, with tables switching places, like this: ``` SELECT a.id, b.desc -- t1 is a, t2 is b FROM t1 AS a JOIN t2 AS b ON a.id = b.id WHERE a.desc != b.desc UNION ALL SELECT a.id, b.desc -- t1 is b, t2 is a FROM t2 AS a JOIN t1 AS b ON a.id = b.id WHERE a.desc != b.desc ```
`Union` is not going to compare the data.You need `Join` here ``` SELECT * FROM t1 AS a inner join t2 AS b on a.id =b.id and a.desc != b.desc ```
using where clause with Union
[ "", "sql", "oracle", "" ]
i am trying to delete e-mail duplicates from table nlt\_user this query is showing correctly records having duplicates: ``` select [e-mail], count([e-mail]) from nlt_user group by [e-mail] having count([e-mail]) > 1 ``` now how can i delete all records having duplicate but one? Thank you
Try this : ``` delete n1 from nlt_user n1 inner join nlt_user n2 on n1.e-mail=n2.e-mail and n1.id>n2.id; ``` This will keep record with minimum ID value of duplicates and deletes remaining duplicate records
If MySQL version is prior 5.7.4 you can add a `UNIQUE` index on the column e-mail with the `IGNORE` keyword. This will remove all the duplicate e-mail rows: ``` ALTER IGNORE TABLE nlt_user ADD UNIQUE INDEX idx_e-mail (e-mail); ``` If > 5.7.4 you can use a temporary table (`IGNORE` not possible on `ALTER` anymore): ``` CREATE TABLE nlt_user_new LIKE nlt_user; ALTER TABLE nlt_user_new ADD UNIQUE INDEX (emailaddress); INSERT IGNORE INTO nlt_user_new SELECT * FROM nlt_user; DROP TABLE nlt_user; RENAME TABLE nlt_user_new TO nlt_user; ```
Delet just one record of duplicate
[ "", "mysql", "sql", "" ]
I'm very new to SQL and learning. Currently I am working with a Select statement to only show orders from the current month. I am using ``` SELECT ORDERS.ORDERID, ORDERS.CUSTOMERID, ORDERS.ORDERDATE, ORDERS.SHIPDATE FROM ORDERS WHERE Date >= '01-JAN-16' and Date <= '31-JAN-16' ``` But that fails with Invalid Relational Operator Where am I going wrong? Thanks for any help!
If the field "Date" is really the name of your field, you will need to enclude it in tic marks as follows in your WHERE clause: ``` ... WHERE `Date` >= '01-JAN-16' and `Date` <= '31-JAN-16' ``` Because Date is a reserved word. It looks like ORDERDATE and SHIPDATE are your field names ... should you be using one of those in your query? Also, be careful with your date format. By default, SQL uses YYYY-MM-DD but it depends on how you have your date fields setup. Here's an example using the ORDREDATE field in your table ... ``` SELECT ORDERS.ORDERID,ORDERS.CUSTOMERID,ORDERS.ORDERDATE,ORDERS.SHIPDATE FROM ORDERS WHERE date(ORDERDATE) >= "2016-01-01" and date(ORDERDATE) <= "2016-01-31"; ```
It is caz' of your date format. Better way is the next query: ``` SELECT ORDERS.ORDERID, ORDERS.CUSTOMERID, ORDERS.ORDERDATE, ORDERS.SHIPDATE FROM ORDERS WHERE EXTRACT(month from date) = 1; --take records for january ```
SQL Select Query fails when trying to show results in current month
[ "", "sql", "oracle", "" ]
I want to use an id as a primary key for my table. In each record, I am also storing an id from an other source, but these ids are in no way sequential. Should I add an (auto-incremented) column with a "new" id? It is very important that queries by the id are as fast as possible. Some info: * The content of my table is only stored "temporary", The table gets often cleared (TRUNCATE) and than filled with new content. * It's a sql-server 2008 * After writing content to the table, I create an index for the id column Thanks!
As long as you are sure the supplied id's are unique, there's no need to create another (surrogate) id to use as primary key.
Under most circumstances, an index on the existing id should be sufficient. You can make it slightly faster by declaring it as a primary key. From what you describe a new id is not necessary for performance. If you do add one, the table will be slightly larger, which has a (very small) negative effect on performance. If the existing `id` is not numeric (or not an integer), then there might be a small gain from using a more efficient type for the index. But, your best bet is to make the existing `id` a primary key (although this might affect load performance). Note: I usually prefer synthetic primary keys, so this answer is very specific to your question.
Database: Should ids be sequential?
[ "", "sql", "sql-server", "database", "" ]
I have a reservation table like this ``` place day name1 floor1 name2 floor2 --------- -------- --------- --------- --------- --------- Anaheim Monday Sean 3rd Jenny 2nd Anaheim Monday Jenny 2nd Sean 3rd Cerritos Saturday Dennis 4th Sean 3rd ``` As you can see, first and second rows are just duplicate with different order. I want to have a final table like ``` place day name1 floor1 name2 floor2 --------- -------- --------- --------- --------- --------- Anaheim Monday Sean 3rd Jenny 2nd Cerritos Saturday Dennis 4th Sean 3rd ``` I was thinking about something like this ``` SELECT t1.place, t1.day, t1.name1, t1.floor, t1.name2, t1.floor2 FROM table t1, table t2 WHERE NOT (t1.place = t2.place AND t1.day = t2.day AND t1.name1 = t2.name2 AND t1.floor1 = t1.floor2); ``` But this didn't work :( Maybe my understanding is too short.. How should I approach this? **EDIT**: Okay, I should have mentioned this, but I'm looking for a general idea, not this table specific. There could be a table like ``` place day name1 floor1 section1 name2 floor2 section2 --------- -------- --------- --------- --------- --------- --------- ----------- Anaheim Monday Sean HR 12 Jenny QA 24 Anaheim Monday Jenny QA 24 Sean HR 12 Cerritos Saturday Dennis Main 31 Sean HR 12 ``` In this case, I can't use any of the "sorting" and remove mechanism.. However, thank you all for your effort to solve this!
First of all, if you want to compare `floor1` for more than 9 floors, then `2nd` would be greater than `11th`. To overcome this, you need to extract floor number from `floor1` and then compare it. Having said that, you can use this query if it is working for your db/version. `SQLFiddle Demo Fast version` . The generic version would be something like this. `SQLFiddle Demo Generic verison` ``` SELECT t.* FROM table1 t INNER JOIN (SELECT t1.place, t1.day, t1.name1 FROM table1 t1 INNER JOIN table1 t2 ON t1.place = t2.place AND t1.day = t2.day AND CAST(REPLACE(REPLACE(REPLACE(REPLACE(t1.floor1, 'st', ''), 'nd', ''), 'rd', ''), 'th', '') AS decimal) <= CAST(REPLACE(REPLACE(REPLACE(REPLACE(t2.floor1, 'st', ''), 'nd', ''), 'rd', ''), 'th', '') AS decimal) GROUP BY t1.place, t1.day, t1.floor1 HAVING COUNT(*) = 1) t3 ON t.place = t3.place AND t.day = t3.day AND t.name1 = t3.name1 ```
Use a clever `GROUP BY`: ``` SELECT t1.place, t1.day, t1.name1, t1.floor, t1.name2, t1.floor2 FROM table t1 INNER JOIN ( SELECT t1.place, t1.day, MAX(t1.floor, t1.floor2) AS floor1, MIN(t1.floor, t1.floor2) AS floor2 FROM table t1, table t2 GROUP BY t1.place, t1.day, MAX(t1.floor, t1.floor2), MIN(t1.floor, t1.floor2) ) t2 ON t1.place = t2.place AND t1.day = t2.day AND t1.floor = t2.floor1 AND t1.floor2 = t2.floor2 ```
Get rid of a duplicate row from SQL table
[ "", "sql", "sqlite", "" ]
Assume I have two tables as follows, Table A ``` +-------+-------+-------+-------+ | col_a | col_b | col_c | col_d | +-------+-------+-------+-------+ | | | | | +-------+-------+-------+-------+ ``` Table B ``` +-------+-------+-------+-------+ | col_a | col_b | col_c | col_d | +-------+-------+-------+-------+ | | | | | +-------+-------+-------+-------+ ``` I'm going to update Table A using Table B. Here are the conditions 1. records that are equal by `col_a` should update in Table A 2. records are not equal by `col_a` should inserted to Table A 3. Table A has a unique key constraint as (col\_b,col\_c,col\_d) Problem is when updating data in Table A, this unique key constraint fails for some records. Question is how can I identify records that violate unique key constraint using a query. (I don't have access to logs)
If you don't have a unique key on `col_b, col_c, col_d` of `table_b`, this will result in a violation when copying over. You can identify problematic rows with a query like this: ``` SELECT col_b, col_c, col_d FROM table_b GROUP BY col_b, col_c, col_d HAVING COUNT(*) > 1 ``` A similar query can be run on `table_a` joined to `table_b`, but the specific queries to run will depend on which columns will be updated in `table_a`. For the insert case, a useful technique might be to use a `MINUS` between `table_a` and the proposed inserted row.
If I correctly understand your need, maybe something like this can find the rows that will give problems: ``` select * from table_a a inner join table_b b on (a.col_b = b.col_b and a.col_c = b.col_c and a.col_d = b.col_d and a.col_a != b.col_a ) ```
How to find the record that violate unique key constraint?
[ "", "sql", "oracle", "" ]
Sorry for the title but that was the best way I could think to explain it... Here is my scenario I have a table that stores rows for communications with my clients. Something like this ``` | UserID | CommunicationID ----------------------------- | User1 | com1 | User1 | com2 | User1 | com3 | User2 | com1 | User2 | com2 | User3 | com1 ``` What I am looking for is a query that returns the UserID only for the Users who have received ALL 3 communications. So in the example above, only User1 would be returned. I forgot to mention that I only need records where the User specifically received Com1 & Com2 & Com3. Regardless of how many times they received any one com, they have to had received all 3. This problem came about because some users have mistakenly received com1 many times and it's thrown of my queries to identify people properly. I had a stab at it myself trying CommunicationID IN( Com1, Com2, Com3) but of course that returns everyone. I though also about joining the table back to itself but not 100% sure how that would work. Any sql gurus out there I would love your advice. Thanks
Try this: ``` SELECT UserID FROM mytable WHERE CommunicationID IN ('Com1', 'Com2', 'Com3') GROUP BY UserID HAVING COUNT(DISTINCT CommunicationID) = 3 ``` The above query selects the `UserID` values of all users having *all three* `CommunicationID` values, *irrespective* of how many times each distinct `CommunicationID` value appears in the table.
This approach uses `WHERE` clauses to treat it as three different elements, joining them to each other. At the end of the join, only those that have survived all three hops will be left in your result set. ``` SELECT DerivedCom3.UserID FROM ( SELECT DISTINCT UserID FROM YourTable WHERE CommunicationID = 'com1' ) DerivedCom1 LEFT OUTER JOIN ( SELECT DISTINCT UserID FROM YourTable WHERE CommunicationID = 'com2' ) DerivedCom2 ON DerivedCom2.UserID = DerivedCom1.UserID LEFT OUTER JOIN ( SELECT DISTINCT UserID FROM YourTable WHERE CommunicationID = 'com3' ) DerivedCom3 ON DerivedCom2.UserID = DerivedCom3.UserID WHERE DerivedCom3.UserID IS NOT NULL ```
sql, return rows when multiple rows have specific values?
[ "", "sql", "join", "" ]
I am using Postgresql, I created two subqueries that return results as follows: ``` firm_id type_1 fee_1 1 2 100 2 4 300 5 1 100 firm_id type_2 fee_2 1 3 200 2 3 200 3 2 150 4 5 300 ``` I would like to yield a result as: ``` firm_id type_1 type_2 total_fee 1 2 3 300 2 4 3 500 3 0 2 150 4 0 5 300 5 1 0 100 ``` Any helps appreciated!
``` SELECT coalesce( t1."firm_id", t2."firm_id" ) as firm_id, coalesce( t1."type_1", 0 ) as type_1, coalesce( t2."type_2", 0 ) as type_2, coalesce( t1."fee_1", 0 ) + coalesce( t2."fee_2", 0 ) as total_fee FROM table1 t1 FULL JOIN table2 t2 ON t1."firm_id" = t2."firm_id" ``` where table1 and table2 must be replaced by your subqueries see a demo: <http://sqlfiddle.com/#!15/6d391/2>
Use `FULL JOIN` and `coalesce()`: ``` with q1(firm_id, type_1, fee_1) as ( values (1, 2, 100), (2, 4, 300), (5, 1, 100)), q2 (firm_id, type_2, fee_2) as ( values (1, 3, 200), (2, 3, 200), (3, 2, 150), (4, 5, 300)) select firm_id, coalesce(type_1, 0) type_1, coalesce(type_2, 0) type_2, coalesce(fee_1, 0)+ coalesce(fee_2, 0) total_fee from q1 full join q2 using (firm_id); firm_id | type_1 | type_2 | total_fee ---------+--------+--------+----------- 1 | 2 | 3 | 300 2 | 4 | 3 | 500 3 | 0 | 2 | 150 4 | 0 | 5 | 300 5 | 1 | 0 | 100 (5 rows) ```
Merging results from two subqueries Postgresql
[ "", "sql", "postgresql", "" ]
Suppose I have something like ``` CREATE TABLE Foo ( id INT NOT NULL PRIMARY KEY IDENTITY(1,1), . . . ); CREATE TABLE Bar ( . . foo_id INT REFERENCES Foo(id), . . ); ``` Do/should I instead have ``` foo_id INT NOT NULL REFERENCES Foo(id) ``` or is that implicit in the fact that the value it's referencing is guaranteed to be `NOT NULL`?
The foreign key column on `Bar` can be `NULL`, if it's possible that a `Bar` record may exist without that value. > the fact that the value it's referencing is guaranteed to be NOT NULL If the value in `Bar` is `NULL` then it's simply not referencing a record in `Foo`. It *can* reference a record, but doesn't have to if it allows `NULL`.
No, it's not necessary and it's by purpose. You can still put a `NULL` value into foreign key and it means that it actually doesn't reference anything.
If a primary key is NOT NULL, then should a foreign key that references it be specified also as NOT NULL?
[ "", "sql", "sql-server", "t-sql", "database-design", "" ]
I'm trying to get the total of votes for each FANOF\_ID (ex: Me). The problem is that a FAN can vote each day for the same FANOF\_ID (ex: David Bowie RIP) So each day I could vote for David Bowie as my favorite singer ``` ID CREATED FAN_ID FANOF_ID 15 2016-01-24 3 3 16 2016-01-25 3 3 17 2016-01-25 2 3 ``` So from that example I should get a result of 2 fans for 'total' for FANOF\_ID (3) This is my actual SQL ``` SELECT distinct `fans_fanofvote`.`fan_id`, COUNT(`fans_fanofvote`.`fanof_id`) AS `total` FROM `fans_fanofvote` GROUP BY `fans_fanofvote`.`fanof_id` ORDER BY `total` DESC ``` But it returns 3 records even if I use distinct on fan\_id it wont work. How can I get mySQL to do a distinct on FAN\_ID My SQL should return one record like that: ``` FANOF_ID TOTAL 3 2 ```
You want `COUNT(DISTINCT)`. However, you have to be careful about what you are counting (`fan_id`) and what you are aggregating by `(fanof_id`): ``` SELECT fov.fanof_id, COUNT(DISTINCT fov.fan_id) AS total FROM fans_fanofvote fov GROUP BY fov.fanof_id ORDER BY total DESC; ``` Note that table aliases make the query easier to read. And don't use back tick unless really needed.
You didn't mention expected output earlier so it was confusing. ``` SELECT `fans_fanofvote`.`fanof_id`, COUNT(`fans_fanofvote`.`fan_id`) AS `total` FROM `fans_fanofvote` GROUP BY `fans_fanofvote`.`fanof_id` ORDER BY `total` DESC ```
mySQL statement get last record
[ "", "mysql", "sql", "" ]
Here I have a list of ids provided by one SELECT, and one simple insert example ``` SELECT idUnit FROM dbo.GetUnitsTree(10) INSERT INTO [Profession_Ext] ([FK_Personnal] ,[FK_Profession]) VALUES (50 ,<FK_Profession, int,>) ``` I want the same result that this, but dynamically (I want to script this) ``` INSERT INTO [Profession_Ext] ([FK_Personnal] ,[FK_Profession]) VALUES (50,1),(50,2),(50,3),(50,4),(50,5),(50,6),(50,7),(50,8) ``` Where 50 is a fixed value. How to do?
> Here I have a dynamic list of int (provided by a SELECT) I would either insert directly from the original `SELECT` if you don't need the results for anything other than the INSERT: ``` INSERT INTO Profession_Ext ([FK_Personnal] ,[FK_Profession]) SELECT {select list here} ``` or change your SELECT to populate a temp table and then pull from that: ``` SELECT {select list here} INTO @tmpResults INSERT INTO Profession_Ext ([FK_Personnal] ,[FK_Profession]) SELECT 50, FK_Profession FROM @tmpResults ```
Write following one Bulk Insertion with Select statement: ``` INSERT INTO [Profession_Ext] ([FK_Personnal] ,[FK_Profession]) SELECT 50 AS [FK_Personnal], tbl.[FK_Profession] FROM ( SELECT 1 AS FK_Profession UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 ) AS tbl ```
Multiple insert with one list of int
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have a table like below ``` BILL BILL_DATE TYPE AMOUNT BILL01300002 2016-01-30 15:34:48.957 1 250.00 BILL01300003 2016-01-30 15:38:00.000 5 150.00 BILL01300004 2016-01-30 15:49:48.000 5 200.00 BILL01300002 2016-01-30 16:37:00.433 2 250.00 BILL01300004 2016-01-30 17:37:00.233 7 1000.00 BILL01300002 2016-01-30 18:42:54.927 3 250.00 BILL02020006 2016-02-02 18:42:54.927 3 550.00 ``` And I need to show one extra column [AMOUNT2] based on one case. The case is if the same BILL comes in a same day for three particular TYPE in (1,2,3). Then i need to show the extra column with same amount as TYPE 3 Amount column for those 3 records. other wise the extra column will be 0. In the #TAB we can find this case in BILL01300002. It has came in a same day. For 3 particular different TYPES 1,2,3. My expected Result is ``` BILL BILL_DATE TYPE AMOUNT AMOUNT2 BILL01300002 2016-01-30 15:34:48.957 1 250.00 250.00 BILL01300003 2016-01-30 15:38:00.000 5 150.00 0 BILL01300004 2016-01-30 15:49:48.000 5 200.00 0 BILL01300002 2016-01-30 16:37:00.433 2 250.00 250.00 BILL01300004 2016-01-30 17:37:00.233 7 1000.00 0 BILL01300002 2016-01-30 18:42:54.927 3 250.00 250.00 BILL02020006 2016-02-02 18:42:54.927 3 550.00 0 ``` I am posting schema for reference ``` CREATE TABLE #TAB(BILL VARCHAR(250),BILL_DATE DATETIME, TYPE BIGINT, AMOUNT DECIMAL(18,2) ) INSERT INTO #TAB SELECT 'BILL01300002','2016-01-30 15:34:48.957',1, 250 UNION ALL SELECT 'BILL01300003','2016-01-30 15:38:00.000',5, 150 UNION ALL SELECT 'BILL01300004','2016-01-30 15:49:48.000',5, 200 UNION ALL SELECT 'BILL01300002','2016-01-30 16:37:00.433',2, 250 UNION ALL SELECT 'BILL01300004','2016-01-30 17:37:00.233',7, 1000 UNION ALL SELECT 'BILL01300002','2016-01-30 18:42:54.927',3, 250 UNION ALL SELECT 'BILL02020006','2016-02-02 18:42:54.927',3, 550 ```
``` SELECT *, ( CASE B.BILL WHEN (SELECT A.BILL FROM #TAB AS A WHERE TYPE IN( 1, 2, 3 ) GROUP BY A.BILL HAVING Count(A.BILL) >= 3) THEN amount ELSE 0 END ) AS amount2 FROM #TAB AS B ``` :) Hoping to help you
You can try like this, ``` SELECT *, CASE WHEN ( Count(bill) OVER ( partition BY bill, CONVERT(DATE, bill_date)) ) > 1 AND type IN ( 1, 2, 3 ) THEN coalesce((SELECT amount FROM #TAB WHERE type = 3 AND bill = t.bill),0) ELSE 0 END AS Amount2 FROM #TAB t ```
How to achive the below scenario in T-SQL
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2012", "" ]
So I have a table of items. Each item has a status that can be "daily", "monthly", "yearly", or "outstanding". What I'm trying to do is create a single Activerecord (or SQL) query that arranges the outstanding items first (by created\_at) and then the rest of the items (regardless of their status) by their created\_at date, while limiting the total number of items returned to 15. So for instance, if I have 30 outstanding items and 30 yearly items, the query returns 15 outstanding items (by their created\_at). If I have 10 outstanding items and 30 yearly items, it returns those 10 outstanding items (by created\_at) and then 5 yearly items (by created\_at) β€”Β the outstanding items returned should be at the beginning of the returned array. If I do not have any outstanding items and 30 yearly items, it would return 15 yearly items by their created\_at date. Thank you for the help!
Firstly, just define a scope like this: ``` class Item < ActiveRecord::Base scope :ordered, -> { joins("LEFT OUTER JOIN ( SELECT id, created_at, status FROM items WHERE items.status = 'outstanding' ) AS temp ON temp.id = items.id AND temp.status = items.status" ).order('temp.created_at NULLS LAST, items.status, items.created_at') } end ``` The magic is: (expect your table name is `items`) * Left join `items` with `outstanding items`. So `temp.id` and `temp.created_at` will be `NULL` for items which don't have status `outstanding` * Order by `temp.created_at NULLS LAST` first, so the `items` which don't have status `outstanding` will be ordered last. Then just do order by normally: `items.status`(this makes the same statuses will be closed by each other) and `items.created_at` You can run the query with scope `ordered` for 15 items only: ``` Item.ordered.limit(15) ```
Though it's not a single query in worst case scenario (2 queries), but it solves your problem: ``` items = Item.where(:status => "outstanding").order('created_at DESC').limit(15) items = (items.size == 15) ? items : items + Item.where('status != ?', "outstanding").order('created_at DESC').limit(15-items.size) ```
Activerecord sort by column and then created_at?
[ "", "sql", "ruby-on-rails", "activerecord", "" ]
Ok so now i have a problem which is when i select the data from table i want to count data and select it the data in this table consist of ``` | Name | Fruits | | Ben | Apple | | Tim | Orange | | Tim | Pear | | Tim | Grape | ``` So here's how want it to be viewed ``` | Name | Fruits Number | | Ben | 1 | | Tim | 3 Have too many fruits| ``` this is how i want it to show but when i run if else statement it show like this ``` | Name | Fruits Number | | Ben | 1 Have too many fruits| | Tim | 3 Have too many fruits| ``` Please help me with this simple problem ``` if((select count(fruits) from fruits_table )>1)(select name , (count(Fruits) + 'Have too many fruits') as'Fruits number' from Fruits Table group by Name) else (select name , (count(Fruits) + ) as'Fruits number' from Fruits Table group by Name) ```
``` SELECT Name, CASE WHEN Count(*) > 2 THEN CONCAT(COUNT(*), ' Have too many fruits') ELSE CAST(COUNT(*) As CHAR) END AS 'Fruits Number' FROM MyTable Group By Name ``` In your fruits example a case should work. Of course you can base it on a sub query and reduce the `COUNT()`'s
This does exactly what you asked for: ``` SELECT name, IF (fruitcount > 1, concat(fruitcount, ' Have too many fruits'), cast(fruitcount as char)) AS 'Fruits number' FROM ( SELECT name, Count(*) as fruitcount FROM fruits_table GROUP BY Name ) AS z; ``` But this is probably a better way to do it, since it gives you a separate column for the "too many fruits" signal. ``` SELECT name, fruitcount, IF (fruitcount > 1, ' Yes', '' ) as 'Too many?' FROM ( SELECT name, Count(*) as fruitcount FROM fruits_table GROUP BY Name ) AS z; ```
How to select and check values of indiviual rows and add text to it data SQL
[ "", "mysql", "sql", "select", "" ]
I am passing a simple query where I am searching for specific rows where `OrderID` is an even number ``` SELECT * FROM Orders WHERE mod(OrderID,2) = 0; ``` Error : > Syntax error (missing operator) in query expression 'mod(OrderID,2) = 0'.
You are not using Oracle, so you should be using the modulus operator: ``` SELECT * FROM Orders where OrderID % 2 = 0; ``` The `MOD()` function exists in Oracle, which is the source of your confusion. Have a look at [this SO question](https://stackoverflow.com/questions/3756928/select-row-if-the-value-2-1-mod) which discusses your problem.
``` SELECT * FROM Orders where OrderID % 2 = 0;///this is for even numbers SELECT * FROM Orders where OrderID % 2 != 0;///this is for odd numbers ```
Select Rows with id having even number
[ "", "sql", "sql-server", "sql-function", "" ]
I have a view and a table that I'm trying to get data out of. My view has basic order information based on an order number. The table I'm getting data out of has very detailed pieces of data based on an order number. When I run the view by itself, I get basic order header information. When I query the table, I get bill of material level details. I only need one column of data from the table, to be added to the result set of the view. I currently have: ``` SELECT x.ORDER_NUMBER, Description, Price, Quantity, x.ORDER_NUMBER + '-' + y.Shipment AS [Ship_Set] FROM vw_OrderData AS x JOIN DetailData AS y ON (x.ORDER_NUMBER = y.ORDER_NUMBER) ``` I'm trying to get the result set from `vw_OrderData`, then grab and concatenate the `Shipment` number from the `DetailData` table, as that column doesn't exist in the `vw_OrderData`. What I'm getting is every row from the `DetailData` table that has a matching `ORDER_NUMBER`. I'm fairly certain I'm not joining correctly, but I'm still learning SQL. Appreciated.
I attempted the following: ``` SELECT o.ORDER_NUMBER, Description, Price, Quantity, o.ORDER_NUMBER + '-' + COALESCE(dd.Shippment, '') AS [Ship_Set] FROM vw_OrderData o LEFT JOIN DetailData dd ON o.ORDER_NUMBER = dd.ORDER_NUMBER; ``` It returned the same amount of rows because there were multiple line items per order number. I did not try the TOP 1 solution, but rather went back into my OrderData view, and saw that it was already pulling from the DetailData table for a couple other rows, so I simply added: ``` DetailData.OrderNumber+'-'+COALESCE(DetailData.Shippment,'00') as [Ship_Set] ``` as a column to the result set of the view, then just added the column to the original select statement in question without having to do the join: ``` SELECT ORDER_NUMBER, Description, Price, Quantity, [Ship_Set] FROM vw_OrderData ```
If it gives you multiple rows, but only need the first one, you could use the `SELECT TOP(1)` option such as : ``` SELECT TOP(1) x.ORDER_NUMBER, Description, Price, Quantity, x.ORDER_NUMBER+'-'+y.Shippment AS [Ship_Set] FROM vw_OrderData AS x join DetailData AS y ON(x.ORDER_NUMBER = y.ORDER_NUMBER) ``` Therefore your query works exactly as depicted. You should probably instead refine it, what do you exactly need ? why every row from the DetailData table that has a matching ORDER\_NUMBER is too much ? do you need more specific results ?
Grab one row from a table
[ "", "sql", "sql-server", "sql-server-2012", "left-join", "outer-join", "" ]
I have a table like this : ``` id product_property_id product_id amount type 1 1 145 10 0 2 4 145 12 0 3 6 145 13 1 4 23 147 2 0 5 4 145 15 1 6 4 145 2 1 ``` `type` : 0: out 1: in what I want : for example : `product_id` **145** with `product_property_id` **4** : (15+2) - 12 = 5 ``` product_id product_property_id new_amout 145 4 5 ``` is it possible to use `SQL` to get this result or I have to use `php` instant of it ?
If I understand correctly, this is just a query using conditional aggregation: ``` select product_id, product_property_id, sum(case when type = 1 then amount when type = 0 then - amount else 0 end) as new_amount from likethis group by product_id, product_property_id; ```
This should work: ``` SELECT product_id, product_property_id, SUM(CASE WHEN [type] = 1 THEN amount END) - SUM(CASE WHEN [type] = 0 THEN amount END) new_amount FROM dbo.YourTable WHERE product_id = 145 AND product_property_id = 4 GROUP BY product_id, product_property_id; ```
Minus/Sum a table with itself
[ "", "mysql", "sql", "" ]
So I have a query that looks something like this: ``` SELECT IncidentNumber, ToiletType, ToiletDangers, IncidentDate FROM Core.LostLawsuits ``` ...which returns the following results sort of like this: ``` +----------------+------------+---------------------------+---------------+ | IncidentNumber | ToiletType | ToiletDangers | Incident Date | +----------------+------------+---------------------------+---------------+ | 2100 | A | Spontaneous Combustion | 2/1/2016 | +----------------+------------+---------------------------+---------------+ | 2101 | P | Attracts Bear Stampede | 2/1/2016 | +----------------+------------+---------------------------+---------------+ ``` What I'd like to do is get the results, but change the ToiletType column result when outputting it. Currently, I am using a nested `REPLACE()`, and would like to know if there's a better / different way of doing this, **while maintaining a one column result for `ToiletType`**: ``` SELECT IncidentNumber, REPLACE(REPLACE(ToiletType, 'A', 'Automatic Standard'), 'P', 'Portable Potty') as ToiletType, ToiletDangers, IncidentDate FROM Core.LostLawsuits ```
A CASE expression would work for your example: ``` case ToiletType when 'A' then 'Automatic Standard' when 'P' then 'Portable Potty' end ``` i.e. ``` SELECT IncidentNumber, case ToiletType when 'A' then 'Automatic Standard' when 'P' then 'Portable Potty' end as ToiletType, ToiletDangers, IncidentDate FROM Core.LostLawsuits ``` Perhaps better though would be to join to a ToiletTypes table?
In this case, I think you want `CASE`: ``` SELECT IncidentNumber, (CASE ToiletType WHEN 'A' THEN 'Automatic Standard' WHEN 'P' THEN 'Portable Potty' ELSE ToiletType END) as ToiletType, ToiletDangers, IncidentDate FROM Core.LostLawsuits; ```
Oracle SQL: Alternative to nested REPLACE() in Select statement?
[ "", "sql", "oracle", "" ]
I have gone through lot of sites to understand normalization but it seems like I have to memorize the rules. After a week or later I forget it and seems like every thing is new for me. Is any better way to understand the first three normal forms. I need some easy practical examples to understand and remember it always. For now I know, ``` 1st Normal Form - Remove Repeating groups (should be Atomic). 2nd Normal Form - Remove partial Dependency. 3rd Normal Form - Remove transitive Dependency. ```
There is a good explanation [on Dream in Code](http://www.dreamincode.net/forums/topic/179103-relational-database-design-normalization/). For examples, try remembering the most common violations: 1. First and last name in a single field 2. Storing name and price of an item along with the order for that item 3. Storing an address with a customer record
The first nomal form should be clear. **You have atomic values and create redundancy in your table**. At this point it means: Your current primary keys are not longer unique. The 2NF means that you should elimnate redundant data. The attributes **need to be functional dependent to your primary key and not to other non-prime attributes**. If they are not dependent to your primary key, your table isn't in the second NF. The 3NF is imho a bit more difficult. I just read about an other definition of it. So usually you have: Your attributes must not be dependent transitively to your primary key. **The transition means, that your attribute shouldn't refer to your primary key through an other attribute.** Let's have a look at an example: ``` | CourseNr | Course Name | ProfNr| ProfName | --------------------------------------------------------- 10 | Analysis | 90 | A | 12 | Math | 90 | A | 16 | Databases | 65 | D | 17 | Literature | 68 | F | --------------------------------------------------------- ``` But now you have a transitive dependency from ProfName to CourseName, because your ProfName depends "over" ProfNr to CourseNr. Your prof's name isn't necessary for your table and you have redundancy. Now you can extract that to: ``` | ProfNr | ProfName | --------------------- | 90 | A | | 65 | D | | 68 | F | --------------------- ``` And now you can delete the ProfName row in your first table. Your redundancyhas been eliminated and you can link your prof via the ProfNr. And your first table looks like this now: ``` | CourseNr | Course Name | ProfNr | ---------------------------------------------- 10 | Analysis | 90 | 12 | Math | 90 | 16 | Databases | 65 | 17 | Literature | 68 | ---------------------------------------------- ``` For me: I have always an easy example in my head for the third NF, because it's for me the NF I had problems with.
How to easily understand normalization - Database
[ "", "sql", "database", "" ]
How can I exclude in sql a datetime and the associated team: IΒ΄ve tried this here: ``` IntervalDate not like ('2015-01-10 00:00:00' + 'Team C') ```
you have split the condition as :- ``` IntervalDate not like '2015-01-10 00:00:00' and Team not like 'Team C') ```
As per other comments. I suspect that the "Team C" value resides in a separate column to `IntervalDate` and that `IntervalDate` is an SQL `datetime` (or similar) column. If you can show us your table schema we can suggest what is correct, however it is going to be something like: ``` WHERE (IntervalDate NOT LIKE '2015-01-01' AND Team = 'Team C') ``` However, I am making assumptions because you've not shown us your table schema.
Exclude a datetime and the associated team sql
[ "", "sql", "sql-server", "" ]
I'm playing around on <https://www.documentdb.com/sql/demo>, which allows me to query against sample documents that look like: ``` { "id": "19015", "description": "Snacks, granola bars, hard, plain", "tags": [ { "name": "snacks" } ], "version": 1, "isFromSurvey": false, "foodGroup": "Snacks", "servings": [ { "amount": 1, "description": "bar", "weightInGrams": 21 } ] } ``` I'm confused about `ARRAY_CONTAINS()`. This query returns results: ``` SELECT root FROM root WHERE ARRAY_CONTAINS(root.tags, { "name": "snacks" }) ``` However, this query does not: ``` SELECT root FROM root WHERE ARRAY_CONTAINS(root.servings, { "description": "bar" }) ``` What gives? What I'm trying to achieve is illustrated by how I would write the query if this was C#: ``` var filteredDocs = docs.Where(d => d.Servings != null && d.Servings.Length > 0 && d.Servings.Any(s => s.Description == "bar")); ``` It appears the first example query on `root.tags` works because `{ "name": "snacks" }` is the *entire* object in the `root.tags` array, while, in the second query, `{ "description": "bar" }` is only *one field* in the `root.servings` objects. How can I modify the second query on `root.servings` to work with only knowing the serving `description`?
EDIT: ARRAY\_CONTAINS now supports partial match as Jim Scott points out below, which I think is a better answer than this accepted one. You `servings` array only has one entry `{"amount": 1, "description": "bar", "weightInGrams": 21}`. This should work for your example with a single serving: ``` SELECT root FROM root WHERE root.servings[0].description = "bar" ``` But it sounds like that's not what you are looking for. So, assuming you have this: ``` { ... "servings": [ {"description": "baz", ....}, {"description": "bar", ....}, {"description": "bejeweled", ....} ], ... } ``` And you want to find the documents where one of the servings has the description "bar", then you could use this UDF: ``` function(servings, description) { var s, _i, _len; for (_i = 0, _len = servings.length; _i < _len; _i++) { s = servings[_i]; if (s.description === description) { return true; } } return false; } ``` With this query: ``` SELECT * FROM c WHERE udf.findServingsByDescription(c.servings, "bar") ```
Not sure if this functionality was available when you were looking at the API originally but the [ARRAY\_CONTAINS](https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-array-contains) now supports an optional Boolean value at the end to provide partial match support. ``` SELECT root FROM root WHERE ARRAY_CONTAINS(root.servings, { "description": "bar" }, true) ``` Here is sample using the [documentdb demo site](https://www.documentdb.com/sql/demo#Sandbox?SELECT%20%09food%0AFROM%20%09food%0AWHERE%20%09ARRAY_CONTAINS(food.nutrients,%20%7B%0A%20%20%20%20%22id%22:%20%22612%22,%0A%20%20%20%20%22description%22:%20%2214:0%22,%0A%20%20%20%20%22nutritionValue%22:%200.088,%0A%20%20%20%20%22units%22:%20%22g%22%0A%20%20%7D)) that queries an array that contains multiple fields for each object stored. ``` SELECT * FROM food as f WHERE ARRAY_CONTAINS(f.servings, {"description":"bar"}, true) ```
DocumentDB SQL with ARRAY_CONTAINS
[ "", "sql", "azure-cosmosdb", "" ]
Can anyone please help me with the query to list all the tables in all the databases in a single server?. Thanks in advance.
AS stated [here](https://dba.stackexchange.com/a/61855/33311) by Try with dynamic query You need a query for each database against `sys.tables`. ``` select 'master' as DatabaseName, T.name collate database_default as TableName from master.sys.tables as T union all select 'tempdb' as DatabaseName, T.name collate database_default as TableName from tempdb.sys.tables as T union all select 'model' as DatabaseName, T.name collate database_default as TableName from model.sys.tables as T union all select 'msdb' as DatabaseName, T.name collate database_default as TableName from msdb.sys.tables as T ``` You can use `sys.databases` to build and execute the query dynamically. ``` declare @SQL nvarchar(max) set @SQL = (select 'union all select '''+D.name+''' as DatabaseName, T.name collate database_default as TableName from '+quotename(D.name)+'.sys.tables as T ' from sys.databases as D for xml path(''), type).value('substring((./text())[1], 13)', 'nvarchar(max)') --print @SQL exec (@SQL) ```
Use `sp_MSforeachdb` to iterate through all databases and `INFORMATION_SCHEMA.TABLES` to query the tables in each: ``` sp_MSforeachdb 'USE [?]; SELECT * FROM INFORMATION_SCHEMA.TABLES' ```
To view all the tables in a database server
[ "", "sql", "sql-server", "" ]
I want to find out the count of people who have rented from 2 stores. THe schema is as below. ``` Rental table ---------------------------------------------------- rental_id int(11) NO PRI rental_date datetime NO MUL inventory_id mediumint(8) unsigned NO MUL customer_id smallint(5) unsigned NO MUL return_date datetime YES staff_id tinyint(3) unsigned NO MUL last_update timestamp NO CURRENT_TIMESTAMP Inventory table --------------------------------------------------- inventory_id mediumint(8) unsigned NO PRI film_id smallint(5) unsigned NO MUL store_id tinyint(3) unsigned NO MUL last_update timestamp NO CURRENT_TIMESTAMP ``` With this query I can get the list of customers who rented from 2 stores. ``` select customer_id from rental r , inventory i where r.inventory_id = i.inventory_id group by customer_id having count(distinct i.store_id) = 2; ``` How to find the count as a part of the same sql statement?
``` select count(*) from( select customer_id from rental r , inventory i where r.inventory_id = i.inventory_id group by customer_id having count(distinct i.store_id) = 2); ``` You will have to aggregate you results in the inner query for that matter, So u'll have to rap the query with another one that will count all the results.
You would use two aggregations. The first gets the people who have rented from two stores. The second adds them up. So: ``` select count(*) from (select r.customer_id from rental r join inventory i on r.inventory_id = i.inventory_id group by r.customer_id having count(distinct i.store_id) = 2 ) r; ```
MySql finding the total count after a group by
[ "", "mysql", "sql", "group-by", "" ]
Suppose we have: ``` CREATE TABLE #Users(id INT PRIMARY KEY, name VARCHAR(100), suggestions XML); INSERT INTO #Users(id, name, suggestions) SELECT 1, 'Bob', N'<Products> <Product id="1" score="1"/> <Product id="2" score="5"/> <Product id="3" score="4"/> </Products>' UNION ALL SELECT 2, 'Jimmy', N'<Products> <Product id="6" score="3"/> </Products>'; DECLARE @userId INT = 1, @suggestions XML = N'<Products> <Product id="2" score="5"/> <Product id="3" score="2"/> <Product id="7" score="1" /> </Products>'; ``` `Playground` Now I want to merge 2 XMLs based on `id` attribute: Final result for user with id = 1: ``` <Products> <Product id="1" score="1"/> -- nothing changed (but not exists in @suggestions) <Product id="2" score="5"/> -- nothing changed (but exists in @suggestions) <Product id="3" score="2"/> -- update score to 2 <Product id="7" score="1"/> -- insert new element </Products> ``` Please note that it is not combining 2 XMLs but "upsert" operation. **Remarks:** * I know that this kind of schema violates database normalization and normalizing it is the way to go (but not in this case) * I know solution that utilize derived tables, `.nodes()` and `.value()` functions first to parse both XML, then merge and write back **I am searching for is `XPath/XQuery` expression that will merge it in one statement (no derived tables/dynamic-sql\*):** \* If absolutely needed, Dynamic SQL could be used, but I want to avoid it. ``` UPDATE #Users SET suggestions.modify(... sql:variable("@suggestions") ...); --changes only here WHERE id = @userId; /* replace ... for ... where ... with sql:variable */ ```
After trying around a while I think this is not possible... There is similar question here: [XQuery adding or replacing attribute in single SQL update command](https://stackoverflow.com/q/3854308/5089204) The `.modify(insert Expression1 ... )` does not allow to get data **within** an XML passed in via `@sql:variable()` or `sql:column()` Read here: <https://msdn.microsoft.com/en-us/library/ms175466.aspx> at Expression1 -> "constant XML or stand alone sql:column / sql:variable or XQuery (to the same instance) ``` DECLARE @xml1 XML= --the existing XML '<Products> <Product id="1" score="1" /> <Product id="2" score="5" /> <Product id="3" score="4" /> </Products>'; DECLARE @xml2 XML= --the XML with new or changed data '<Products> <Product id="2" score="5" /> <Product id="3" score="2" /> <Product id="7" score="1" /> </Products>'; SET @xml1.modify('insert sql:variable("@xml2") as first into /Products[1]'); SELECT @xml1; /* The full node is inserted! Without any kind of preparation there is NO CHANCE to get the inner nodes only <Products> <Products> <Product id="2" score="5" /> <Product id="3" score="2" /> <Product id="7" score="1" /> </Products> <Product id="1" score="1" /> <Product id="2" score="5" /> <Product id="3" score="4" /> </Products> */ ``` You might declare the second XML as such: ``` DECLARE @xml2 XML= --the XML with new or changed data '<Product id="2" score="5" /> <Product id="3" score="2" /> <Product id="7" score="1" />'; ``` But than you'll have no chance to use the id's value as XQuery filter ``` SET @xml1.modify('insert sql:variable("@xml2") as first into /Products[**How should one filter here?**]'); ``` And last but not least I think there is no chance to combine two different XML\_DML statements within one call of `.modify()`. The only idea I had was this, but it doesn't work. IF seems to be usable only **within** an Expression, but not two distinguish between two execution paths ``` SET @xml1.modify('if (1=1) then insert sql:variable("@xml2") as first into /Products[1] else replace value of /Products[1]/Product[@id=1][1]/@score with 100'); ``` So my conclusion: No, this is not possible... The solution I provided here <https://stackoverflow.com/a/35060150/5089204> in the second section ("If you want to 'merge' two Books-structures") would be my way to solve this.
Try something like this,it is easy to understand ,but it is lengthy. Let me know,if you have any problem. ``` declare @Users TABLE(id INT PRIMARY KEY, name VARCHAR(100), suggestions XML); INSERT INTO @Users(id, name, suggestions) SELECT 1, 'Bob', N'<Products> <Product id="1" score="1"/> <Product id="2" score="5"/> <Product id="3" score="4"/> </Products>' UNION ALL SELECT 2, 'Jimmy', N'<Products> <Product id="6" score="3"/> </Products>'; DECLARE @userId INT = 1, @suggestions XML = N'<Products> <Product id="2" score="5"/> <Product id="3" score="2"/> <Product id="7" score="1" /> </Products>'; declare @Users1 TABLE(userid INT , productid int,score int); insert into @Users1 SELECT id userid, t.c.value('(@id[1])', 'VARCHAR(50)') AS Productid, t.c.value('(@score[1])', 'VARCHAR(50)') AS score FROM @Users yt cross APPLY yt.suggestions.nodes('Products/Product') t(c) --select * from @Users1 ;With CTE1 as ( select @userId userid, t.c.value('(@id[1])', 'VARCHAR(50)') AS Productid, t.c.value('(@score[1])', 'VARCHAR(50)') AS score from @suggestions.nodes('Products/Product') t(c) ) Merge @users1 as trg using cte1 as src on trg.userid=src.userid and trg.productid=src.productid when not matched then insert (userid,productid,score) values(src.userid,src.productid,src.score); select distinct a.userid ,(select b.productid as '@Productid',b.score as '@Score' from @users1 b where a.userid=b.userid for xml path('Product'),root('Products')) from @users1 a ```
SQL Server - merge two XML using only .modify()
[ "", "sql", "sql-server", "xml", "t-sql", "xpath", "" ]
It is often convenient in PosgreSQL to create "tables" on the fly so to refer to them, e.g. ``` with selected_ids as ( select 1 as id ) select * from someTable where id = (select id from selected_ids) ``` Is it impossible to provide multiple values as `id` this way? I found [this answer](https://stackoverflow.com/questions/17532603/how-to-create-an-on-the-fly-mapping-table-within-a-select-statement-in-postgre) that suggests using `values` for similar problem, but I have problem with translating it to the example below. I would like to write subqueries such as ``` select 1 as id union select 2 as id union select 7 as id ``` or ``` select 1 as id, 'dog' as animal union select 7 as id, 'cat' as animal ``` in more condensed way, without repeating myself.
You can use arguments in the query alias: ``` with selected_ids(id) as ( values (1), (3), (5) ) select * from someTable where id = any (select id from selected_ids) ``` You can also use `join` instead of a subquery, example: ``` create table some_table (id int, str text); insert into some_table values (1, 'alfa'), (2, 'beta'), (3, 'gamma'); with selected_ids(id) as ( values (1), (2) ) select * from some_table join selected_ids using(id); id | str ----+------ 1 | alfa 2 | beta (2 rows) ```
You can pass `id` and `animal` field in WITH like this ``` with selected_ids(id,animal) as ( values (1,'dog'), (2,'cat'), (3,'elephant'),(4,'rat')--,..,.. etc ) select * from someTable where id = any (select id from selected_ids) ```
Creating tables on-the-fly
[ "", "sql", "postgresql", "" ]
I have a table like so: ``` DECLARE @deadCommunityList TABLE (community VARCHAR(12)) INSERT INTO @deadCommunityList (community) VALUES ('000') , ('253'), ('COU'), ('COV') , ('D2T'), ('D3T'), ('DEW') , ('DIT'), ('E2T'), ('E3T') , ('EL2'), ('EL3'), ('ELC') , ('ELI'), ('ELT'), ('ERI') , ('FA1'), ('GRA'), ('GRD') , ('GRT'), ('HIG'), ('HIP') , ('LYN'), ('NEW'), ('PAR') , ('PMT'), ('RDT'), ('RES') , ('SCR'), ('SCT'), ('SMT') , ('SUM'), ('TB'), ('W2T') , ('WDV'), ('WE2'), ('WIC') , ('WEC'), ('WIL'), ('ZIT') ``` and now I have this query like so: ``` SELECT DISTINCT AREA_DESC FROM V_CONSTAT_PROJ_DATES WHERE AREA_ID NOT IN @deadCommunityList ORDER BY AREA_DESC ``` What I am trying to say is give me the results where AREA\_ID is not in the table....this is not working, I get the following error: > Must declare the scalar variable "@deadCommunityList". What Am I doing wrong?
Try this: ``` SELECT DISTINCT AREA_DESC FROM V_CONSTAT_PROJ_DATES WHERE AREA_ID NOT IN (SELECT community FROM @deadCommunityList) ORDER BY AREA_DESC ``` `@deadCommunityList` is a *table* variable, so you have to `SELECT` from its values.
``` DECLARE @deadCommunityList TABLE ( community VARCHAR(12) PRIMARY KEY --WITH (IGNORE_DUP_KEY=ON) ) INSERT INTO @deadCommunityList ... SELECT DISTINCT AREA_DESC FROM V_CONSTAT_PROJ_DATES WHERE AREA_ID NOT IN (SELECT e.community FROM @deadCommunityList Π΅) ORDER BY AREA_DESC --OPTION(RECOMPILE) ```
SQL Stored Procedure WHERE item is not in table
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I got two sql statements: ``` SELECT username AS userA FROM WHERE username='abc' SELECT username AS userB FROM WHERE username='abcd' AND password='abcd' ``` I want to output the result as below: ``` userA | userB ---------------- abc | abcd ``` How to combine two SQL statements?
``` SELECT tableA.username as userA tableB.username as userB FROM tableA CROSS JOIN table B WHERE tableA.username='abc' AND tableB.username='abcd' AND tableB.password='abcd'; ``` CROSS JOIN! :P CARTESIAN PRODUCT!
You can combine tables and fields with the syntax: ``` SELECT tableA.username as userA tableB.username as userB FROM tableA JOIN table B WHERE tableA.username='abc' AND tableB.username='abcd' AND tableB.password='abcd'; ``` Unless there's some kind of relation between the tables, you'll be getting the cross product times rows, so it might be an NxM result which is just useless.
SQL to make new column(mySQL)
[ "", "mysql", "sql", "" ]
I have three tables like below. ``` Courses: Id Name 1 A 2 B 3 C 4 D Prices: Id CId Price 1 1 200 2 2 150 3 3 500 4 4 300 Payment: Id CId 1 1 2 4 ``` If a record of payment for Course record exists, then Price should be -1. I want to get result like below: ``` Id Name Price 1 A -1 2 B 150 3 C 500 4 D -1 ``` How should I query mysql?
Try This ``` SELECT c.Id, c.name, CASE WHEN pa.id IS NOT NULL THEN -1 ELSE pr.price END as Price FROM Courses c LEFT JOIN Prices pr ON c.Id = pr.CId LEFT JOIN Payment pa ON c.id = pa.CId; ``` OR ``` SELECT c.Id, c.name, CASE WHEN pa.id IS NULL THEN pr.price ELSE -1 END as Price FROM Courses c LEFT JOIN Prices pr ON c.Id = pr.CId LEFT JOIN Payment pa ON c.id = pa.CId; ```
SELECT p.id,c.NAME, CASE WHEN p.cid=e.cid THEN -1 ELSE p.price END AS price FROM prices p LEFT JOIN courses c ON c.id=p.ID LEFT JOIN payment e ON p.cid=e.cid
Joining three table in mysql
[ "", "mysql", "sql", "database", "" ]
I am working on a project and I realized I am unsure about how to use multiple primary keys. I have a table named "User\_Details" that has the details of Customer ID, email address and password. From my understanding, I can use both Customer ID and email address as the primary key. In this case do I use only one as Primary Key or both? If I use both, do they become composite primary keys? (PS. I have other tables, where the foreign key is the customer ID)
You can only have one primary key, but you could definitely have other unique fields. Usually using an integer / id as primary key is preferred over a string key, and an id is presumably auto assigned, where as email could change - which would be a problem for foreign key relations. Since you already use customer Id as a foreign key in other tables, I would suggest you continue to do that.
You can only have one primary key, but you can have multiple columns in your primary key, alternatively you can also have Unique Indexes on your table, which will work a bit like a primary key in that they will enforce unique values, and will speed up querying of those values. Easiest way tho is a **Composite Primary Key** which is a primary key made from two or more columns. For example: ``` CREATE TABLE userdata ( userid INT, userdataid INT, info char(200), primary key (userid, userdataid), ); ``` Here is more info: [Link](http://weblogs.sqlteam.com/jeffs/archive/2007/08/23/composite_primary_keys.aspx)
Two primary non composite keys in a table
[ "", "mysql", "sql", "rdbms", "" ]
Is there a way to inner join two different Excel spreadsheets using VLOOKUP? In SQL, I would do it this way: ``` SELECT id, name FROM Sheet1 INNER JOIN Sheet2 ON Sheet1.id = Sheet2.id; ``` Sheet1: ``` +----+------+ | ID | Name | +----+------+ | 1 | A | | 2 | B | | 3 | C | | 4 | D | +----+------+ ``` Sheet2: ``` +----+-----+ | ID | Age | +----+-----+ | 1 | 20 | | 2 | 21 | | 4 | 22 | +----+-----+ ``` And the result would be: ``` +----+------+ | ID | Name | +----+------+ | 1 | A | | 2 | B | | 4 | D | +----+------+ ``` How can I do this in VLOOKUP? Or is there a better way to do this besides VLOOKUP? Thanks.
First lets get a list of values that exist in both tables. If you are using excel 2010 or later then in Sheet 3 A2 put the following formula: ``` =IFERROR(AGGREGATE(15,6,Sheet2!$A$1:$A$5000/(COUNTIF(Sheet1!$A$1:$A$5000,Sheet2!$A$1:$A$5000)>0),ROW(1:1)),"") ``` If you are using 2007 or earlier then use this array formula: ``` =IFERROR(SMALL(IF(COUNTIF(Sheet1!$A$1:$A$5000,Sheet2!$A$1:$A$5000),Sheet2!$A$1:$A$5000),ROW(1:1)),"") ``` ***Being an array formula, copy and paste into the formula bar then hit Ctrl-Shift-Enter instead of Enter or Tab to leave the edit mode.*** Then copy down as many rows as desired. This will create a list of ID'd that are in both lists. This does assume that ID is a number and not text. Then with that list we use vlookup: ``` =IF(A2<>"",VLOOKUP(A2,Sheet1!A:B,2,FALSE),"") ``` This will then return the value from Sheet 1 that matches. [![enter image description here](https://i.stack.imgur.com/SmRZw.png)](https://i.stack.imgur.com/SmRZw.png)
You can acheive this result using Microsoft Query. First, select `Data > From other sources > From Microsoft Query` [![enter image description here](https://i.stack.imgur.com/o8CpQ.png)](https://i.stack.imgur.com/o8CpQ.png) Then select "Excel Files\*". In the "Select Workbook" windows, you have to select the current Workbook. Next, in the query Wizard windows, select sheet1$ and sheet2$ and click the ">" button. [![enter image description here](https://i.stack.imgur.com/XaTHd.png)](https://i.stack.imgur.com/XaTHd.png) Click Next and the query visual editor will open. Click on the SQL button and paste this query : ``` SELECT `Sheet1$`.ID, `Sheet1$`.Name, `Sheet2$`.Age FROM`Sheet1$`, `Sheet2$` WHERE `Sheet1$`.ID = `Sheet2$`.ID ``` Finally close the editor and put the table where you need it. The result should look like this : [![enter image description here](https://i.stack.imgur.com/hNp6F.png)](https://i.stack.imgur.com/hNp6F.png)
How to inner-join in Excel (eg. using VLOOKUP)
[ "", "sql", "excel", "inner-join", "vlookup", "" ]
Normally we use a column to filter a sql sentence ``` SELECT col_id, col_name FROM dataTable WHERE col__id>5 ``` It is possible use a position to ORDER: `ORDER BY 1 DESC` Is it possible use a position to filter?? Something like "this": ``` SELECT col_id, col_name FROM dataTable WHERE #2 LIKE '%Peter%' ```
No. The `WHERE` clause is (logically) evaluated before the `SELECT` list, so it cannot refer to `SELECT` list items by number, as they haven't been numbered yet. It's possible to do it in the `ORDER BY` as that's evaluated *after* the list of selected columns is evaluated, so you can refer to columns by number (though that's [not recommended](https://msdn.microsoft.com/en-us/library/ms188385.aspx#Anchor_2)) or alias. Also, even if it were possible, it would make your SQL more fragile and harder to read.
**WHERE** Clause run before select clause here is Logical Query Flow 1. [From] 2. [Where] 3. [Aggregations] 4. [SELECT] 5. [Order By] 6. [Over] 7. [Distinct] 8. [Top] [FOR EXAMPLE](http://sqlearth.blogspot.in/2015/05/how-sql-select-statement-logically-works.html)
Is it possible to SELECT sqlserver data using WHERE with position?
[ "", "sql", "sql-server", "" ]
I have a requirement where I have to show data in cumulative concatenation style, just like running total by group. Sample data ``` Col1 Col2 1 a 1 b 2 c 2 d 2 e ``` Expected output: ``` Col1 Col2 1 a 1 b,a 2 c 2 d,c 2 e,d,c ``` The concatenation needs to be broken down by Col1. Any help regarding how to get this result by Oracle SQL will be appreciated.
Assuming something on the way you need to order, this can be a solution, based on [Hierarchical Queries](https://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm): ``` with test as ( select 1 as col1, 'a' as col2 from dual union all select 1 as col1, 'b' as col2 from dual union all select 2 as col1, 'c' as col2 from dual union all select 2 as col1, 'd' as col2 from dual union all select 2 as col1, 'e' as col2 from dual ) select col1, col2 from ( select col1 AS col1, sys_connect_by_path(col2, ',') AS col2, connect_by_isleaf leaf from ( select row_number() over (order by col1 asc, col2 desc) as num, col1, col2 from test ) connect by nocycle prior col1 = col1 and prior num = num -1 ) where leaf = 1 order by col1, col2 ```
Try: ``` WITH d AS ( select col1, col2, row_number() over (partition by col1 order by col2) as x from tab_le ), d1( col1, col2, x, col22) as ( SELECT col1, col2, x, col2 col22 FROM d WHERE x = 1 UNION ALL SELECT d.col1, d.col2, d.x, d.col2 || ',' || d1.col22 FROM d JOIN d1 ON (d.col1 = d1.col1 AND d.x = d1.x + 1) ) SELECT * FROM d1 order by 1,2; ```
Cumulative string concatenation
[ "", "sql", "oracle", "" ]
I have a data-set with a timestamp column and multiple identifier columns. I want to condense it to a single row for each "block" of adjacent rows with equal identifiers, when ordered by the timestamp. The min and max timestamp for each block is required. Source Data: ``` TSTAMP ID1 ID2 t1 A B <= start of new block t2 A B t3 C D <= start of new block t4 E F <= start of new block t5 E F t6 E F t7 A B <= start of new block t8 G H <= start of new block ``` Desired Result: ``` MIN_TSTAMP MAX_TSTAMP ID1 ID2 t1 t2 A B t3 t3 C D t4 t6 E F t7 t7 A B t8 t8 G H ``` I thought this was ripe for a window-ing analytic function but I cannot partition without grouping ALL equal combinations of `IDn` - rather than only those in adjacent rows, when ordered by timestamp. A workaround is to create a key column first in an in-line view that I can later group by i.e. with same value for each row in the block and different value for each block. I can do this using LAG analytic function to compare row values and then calling a PL/SQL function to return nextval/currval values of a sequence (calling nextval/currval directly in the SQL is restricted in this context). ``` select min(ilv.tstamp), max(ilv.tstamp), id1, id2 from ( select case when (id1 != lag(id1,1,'*') over (partition by (1) order by tstamp) or id2 != lag(id2,1,'*') over (partition by (1) order by tstamp)) then pk_seq_utils.gav_get_nextval else pk_seq_utils.gav_get_currval end ident, t.* from tab1 t order by tstamp) ilv group by ident, id1, id2 order by 1; ``` where the `gav_get_xxx` functions simply return currval/nextval from a sequence. But I would like to use SQL only and avoid PL/SQL (as I could also write this easily in PL/SQL and pipe out the result-rows from a pipeline function). Any ideas? Thanks.
You can use [an analytic 'trick'](https://stackoverflow.com/questions/4324623/sql-group-by-only-rows-which-are-in-sequence/4324654#4324654) to identify the gaps and islands, comparing the position of each row just against the `tstamp` across all rows with its position against `tstamp` just for that `id2, id2` combination: ``` select tstamp, id1, id2, row_number() over (partition by id1, id2 order by tstamp) - row_number() over (order by tstamp) as block_id from tab1; TS I I BLOCK_ID -- - - ---------- t1 A B 0 t2 A B 0 t3 C D -2 t4 E F -3 t5 E F -3 t6 E F -3 t7 A B -4 t8 G H -7 ``` The actual value of `block_id` doesn't matter, just that it's unique for each block for the combination. You can then group using that: ``` select min(tstamp) as min_tstamp, max(tstamp) as max_tstamp, id1, id2 from ( select tstamp, id1, id2, row_number() over (partition by id1, id2 order by tstamp) - row_number() over (order by tstamp) as block_id from tab1 ) group by id1, id2, block_id order by min(tstamp); MI MA I I -- -- - - t1 t2 A B t3 t3 C D t4 t6 E F t7 t7 A B t8 t8 G H ```
[Tabibitosan](http://rwijk.blogspot.com/2014/01/tabibitosan.html) to the rescue! ``` with sample_data as (select 't1' tstamp, 'A' id1, 'B' id2 from dual union all select 't2' tstamp, 'A' id1, 'B' id2 from dual union all select 't3' tstamp, 'C' id1, 'D' id2 from dual union all select 't4' tstamp, 'E' id1, 'F' id2 from dual union all select 't5' tstamp, 'E' id1, 'F' id2 from dual union all select 't6' tstamp, 'E' id1, 'F' id2 from dual union all select 't7' tstamp, 'A' id1, 'B' id2 from dual union all select 't8' tstamp, 'G' id1, 'H' id2 from dual) select min(tstamp) min_tstamp, max(tstamp) max_tstamp, id1, id2 from (select tstamp, id1, id2, row_number() over (order by tstamp) - row_number() over (partition by id1, id2 order by tstamp) grp from sample_data) group by id1, id2, grp order by min(tstamp); MIN_TSTAMP MAX_TSTAMP ID1 ID2 ---------- ---------- --- --- t1 t2 A B t3 t3 C D t4 t6 E F t7 t7 A B t8 t8 G H ```
Oracle 11.2 SQL - help to condense data in ordered set
[ "", "sql", "oracle", "gaps-and-islands", "" ]
I Have Table ( **emp** ): [![emp Table](https://i.stack.imgur.com/7E6BF.png)](https://i.stack.imgur.com/7E6BF.png) I want to get **First Column** * **DeptNo** Second Column * **SUM of salaries of the employees of each department whose name start with 'A'** and Third column * **Total Salaries of all the employees of that department** I am able to Get first and second Column but unable to get Thiird column with same Query: ``` SELECT DeptNo, SUM(Salary) AS 'Name Contains A', SUM(Salary) AS 'Total Salary' FROM emp WHERE Ename LIKE 'A%' GROUP BY DeptNo ``` Any Suggestions how can i do this?
If your DBMS supports `CASE`, you can do something like this: ``` select deptno, sum(case when ename like 'A%' then salary else 0 end) as ASalary, sum(salary) as AllSalaries from emp group by deptno ``` This produces ``` deptno ASalary AllSalaries ------ ------- ----------- 10 300 500 12 100 100 14 0 500 15 400 700 ```
You can do it with a subquery: ``` SELECT DeptNo, SUM(Salary) AS 'Name Contains A', (SELECT SUM(Salary) FROM emp) AS 'Total Salary' FROM emp WHERE Ename LIKE 'A%' GROUP BY DeptNo ```
How to Get Sum of Specific Records and Sum of all Records in One Query?
[ "", "mysql", "sql", "database", "" ]
This static query using a parameter returns one row: ``` DECLARE @sqltext nvarchar(max) DECLARE @status varchar(100) SET @status = 'Active: Complete' select * from status where StatusTitle = @status and type_id = 800 ``` --- However, using the parameter in a dynamic SQL query as follows does not return any rows at all: ``` SET @sqltext = 'SELECT s.StatusID FROM [Status] s WHERE ( ( s.StatusTitle ='' + @status + '' ) AND s.[type_id] = 800 )' EXECUTE sp_executesql @sqltext ``` I know there is something wrong with the WHERE clause. I can't figure out how to correct it.
You need to format the `@status` into the statement. Executing a statement using `sp_executesql` creates a new scope and local variables from another scope will not be visible. In your statement you can remedy this by adding extra quotes around `@status`. But to guard against SQL injection, you first need to replace single quotes with two single quotes in `@status`. ``` SET @status=REPLACE(@status,'''',''''''); SET @sqltext = 'SELECT s.StatusID FROM [Status] s WHERE ( ( s.StatusTitle =''' + @status + ''' ) AND s.[type_id] = 800 )' EXECUTE sp_executesql @sqltext; ``` Better still is to supply `sp_executesql` with an INPUT parameter. That way there's no need for the extra quotes and stuff, and protection against SQL Injection is guaranteed this way. ``` SET @sqltext = 'SELECT s.StatusID FROM [Status] s WHERE ( ( s.StatusTitle = @status ) AND s.[type_id] = 800 )' EXECUTE sp_executesql @sqltext, N'@status VARCHAR(100)', @status; ```
this will also save you from sql injection ``` DECLARE @sqltext nvarchar(max) DECLARE @status varchar(100) SET @status = 'Active: Complete' SET @sqltext = 'SELECT s.StatusID FROM [Status] s WHERE ( ( s.StatusTitle = @status ) AND s.[type_id] = 800 )' DECLARE @params NVARCHAR(99) SET @params = '@status nvarchar(99)' EXECUTE sp_executesql @sqltext, @params, @status ```
Using local variable inside dynamic SQL does not work
[ "", "sql", "sql-server", "dynamic-sql", "" ]
I have 2 tables :- `Movies(movieID,movieTitle,Director)` and `Rating(ratingID,movieID,movieRating,ratingDate)` Now i write the following syntax to be queried on these Tables - `SELECT movieTitle,max(movieRating) FROM Movie M,Rating R WHERE R.movieID = M.movieID GROUPBY R.movieID;` What i would like to ask is: Provided that for each subgroup formed by `movieID` i have a unique `max(movieRating)` value i get only one row of `movieTitle` and `max(movieRating)` for each `movieID` sub-group. Shouldn't i be getting many duplicate rows of `movieTitle` and `max(movieRating)` for how many ever times a `movieTitle` entry exists in the Table ? Let me illustrate my doubt with an example - Assuming my entries are as follows(I have only included the relevant ones): ``` movieID movieTitle movieRating 1 Lost Stars 3 1 Lost Stars 4 1 Lost Stars 5 ``` When i run the query on these values i get `Lost Stars 5` as the answer.That is the row is returned only once as the answer. Shouldn't i be getting - `Lost Stars 5` `Lost Stars 5` `Lost Stars 5` as the answer. That is shouldn't i be getting the value 3 times(for each time a `Lost Stars` entry exists in the table)? Could you please explain why it works in this fashion? I am new to SQL and i apologize if this is a very basic doubt. I would really appreciate any help i can get.
GROUP BY will group records and return distinct values. In your case, the records all share the same movie title and id. The only distinct value is the rating, and you're requesting the max, not the actual ratings associated with each record. Compare your result to the result from: ``` SELECT R.movieTitle,M.movieRating,max(M.movieRating) FROM Movie M,Rating R WHERE R.movieID = M.movieID GROUP BY R.movieID, R.movieTitle, M.movieRating; ``` NOTE: you must group by any field in your select clause that is not an aggregate, so you must group by movieTitle if you're selecting it. The result from that query will give you all 3 records because it's selecting the movieRating value, which differs on all of the records.
You're grouping by the `movieID`, which means that you'll get only one row for each `movieID` with all aggregates rolled up to those groups. If those rows had the same `movieTitle`, but some rows had different `movieID` values then you would see multiple rows, but that's a quirk in MySQL. I believe that most RDBMSs will give you an error for trying to return a column that is not in a `GROUP BY` and is not part of an aggregate function.
How does an aggregate function affect the columns queried along with it?
[ "", "mysql", "sql", "database", "" ]
The problem is that i have a table Employee in which a field employeeAddress(varchar(200)), I m creating a View EmployeeView in SQL Server and i want to change it to the employeeAddress(varchar(50)), means i want to show only 50 characters of employeeAddress column and after that concatenate with '...'. For Example :- New Delhi(Suppose it is of 100 character) i want to change it in to the EmployeeView as NewDel(suppose it is the 50 characters) then it should be as "NewDel..."
Most of the existing answers seem to be missing the need for the conditional ellipsis: ``` CREATE VIEW dbo.EmployeeView WITH SCHEMABINDING AS SELECT /* Other Columns */ CASE WHEN LEN(EmployeeAddress) > 50 THEN SUBSTRING(EmployeeAddress,1,50) + '...' ELSE EmployeeAddress END as EmployeeAddress FROM dbo.Employee ```
You can try this ``` create view DemoView as select name=SUBSTRING(name,1,50)+'...' from tableName; ```
How to change the size of a column in a view dynamically
[ "", "sql", "sql-server", "" ]
I have a sql table named Parentids of ids and their parent's id (where 0 represents no parent) like follows: ``` id | parentid -------------- 1 | 0 2 | 0 3 | 1 4 | 0 5 | 2 6 | 2 7 | 3 8 | 1 ``` From this table I need to a sql query to return a table with the number of children for each id that should result in the following: ``` id | childrenCnt -------------- 1 | 2 2 | 2 3 | 1 4 | 0 5 | 0 6 | 0 7 | 0 8 | 0 ``` i have the following sql query but it does not seem to work: ``` SELECT id ,sum(CASE WHEN parentid = tid THEN 1 ELSE 0 END) AS childrenCnt FROM Parentids ```
You could do it with **group by** on parentId, Only members with children: ``` select parentId, COUNT(*) from Parentids where parentId <> 0 group by parentId ``` **EDIT:** All members, to match exact OP expected result: ``` select parentId, COUNT(*) from Parentids group by parentId order by 2,1 ```
One method is using a `left join` and aggregation. However, a correlated subquery might even work better: ``` select p.id, (select count(*) from parentids p2 where p2.parentid = p.id ) as childrenCnt from parentids p; ```
sql hierarchy count
[ "", "mysql", "sql", "sum", "hierarchy", "" ]
I have a report that has multiple multi-value parameter. What I wanted to do is if the parameter is = Select All I'll remove that parameter to my SQL Script. Example is I have a Product group and Product Name parameter and what I want is if the user selects all the product group my script will be like: ``` SELECT * FROM TABLE WHERE PRODUCT_NAME IN (@ProductName) ``` While if the user did not select all Product Group, my script will be like: ``` SELECT * FROM TABLE WHERE PRODUCT_NAME IN (@ProductName) AND PRODUCT_GROUP IN (@ProductGroup) ``` I want to know how Can I detect when multi-value parameter is = Select All. I think it will really help the loading time of the tool if I just remove the filter on my script.
As far as I know, you cannot detect when "Select All" has been checked by the user. To remove the filter completely when the user selects all the choices in a multi-valued parameter you would have to employ logic in your stored procedure that checks to see if all the choices were passed to it in the parameter, and if so, don't use the parameter at all in the main query.
You can add your own `ALL` value to the parameter. In the dataset query check if the user have selected 'ALL' if so don't use the parameter. Something like this: ``` IF ('ALL' IN @ProductGroup) BEGIN SELECT * FROM TABLE WHERE PRODUCT_NAME IN (@ProductName) END ELSE SELECT * FROM TABLE WHERE PRODUCT_NAME IN (@ProductName) AND PRODUCT_GROUP IN (@ProductGroup) ``` It is not tested but should work
SSRS: When Multi-value Parameter = Select ALL remove filter in Script
[ "", "sql", "sql-server", "reporting-services", "ssrs-2012", "" ]
Good Afternoon, Currently I have functioning code that works for determine the number of opened work orders and the number of closed work orders during a period of time. I now want to add the number of work orders that were opened prior to that period of time; I have the code that produces the results I want but I am having difficulties figuring out how I would have my select statement built into the initial code so I do not have to assign a static date. It will be easier to show you what I currently have and what I expect to get. ``` SELECT o.dateinfo ,opened ,closed FROM ( SELECT MONTH(org_date) + (YEAR(org_date) * 100) AS dateinfo ,COUNT(wo) AS opened FROM wkaw WHERE org_date >= DATEFROMPARTS(YEAR(GETDATE()) - 1, MONTH(GETDATE()), 01) GROUP BY MONTH(org_date) + (YEAR(org_date) * 100) ) o INNER JOIN ( SELECT MONTH(cmpl_date) + (YEAR(cmpl_date) * 100) AS dateinfo ,COUNT(wo) AS closed FROM wkaw WHERE cmpl_date >= DATEFROMPARTS(YEAR(GETDATE()) - 1, MONTH(GETDATE()), 01) AND cmpl_date IS NOT NULL GROUP BY MONTH(cmpl_date) + (YEAR(cmpl_date) * 100) ) c ON o.dateinfo = c.dateinfo ORDER BY o.dateinfo ``` Produces these results currently ``` dateinfo | opened | closed -------------------------------- 201502 | 285 | 587 201503 | 519 | 345 201504 | 494 | 398 201505 | 415 | 430 201506 | 578 | 465 201507 | 409 | 646 201508 | 501 | 417 201509 | 430 | 347 201510 | 491 | 983 201511 | 657 | 455 201512 | 468 | 366 201601 | 723 | 1105 201602 | 54 | 60 ``` I then have the following select statements that is getting the information I would want for each of those terms but they are using static dates instead of auto-generating like I did above. ``` SELECT COUNT(wo) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,02,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,02,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,03,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,03,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,04,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,04,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,05,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,05,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,06,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,06,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,07,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,07,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,08,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,08,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,09,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,09,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,10,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,10,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,11,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,11,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2015,12,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2015,12,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2016,01,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2016,01,01) SELECT COUNT(*) FROM wkaw WHERE (cmpl_date >= DATEFROMPARTS(2016,02,01) OR cmpl_date IS NULL) AND org_date < DATEFROMPARTS(2016,02,01) ``` The results for those are as follows ``` 1 | 2187 1 | 1885 1 | 2059 1 | 2155 1 | 2140 1 | 2253 1 | 2016 1 | 2100 1 | 2183 1 | 1691 1 | 1893 1 | 1995 1 | 1613 ``` These numbers do show what I would expect. The results that I want to see though is ``` dateinfo | open_wo_count | opened | closed --------------------------------------------------- 201502 | 2187 | 285 | 587 201503 | 1885 | 519 | 345 201504 | 2059 | 494 | 398 201505 | 2155 | 415 | 430 201506 | 2140 | 578 | 465 201507 | 2253 | 409 | 646 201508 | 2016 | 501 | 417 201509 | 2100 | 430 | 347 201510 | 2183 | 491 | 983 201511 | 1691 | 657 | 455 201512 | 1893 | 468 | 366 201601 | 1995 | 723 | 1105 201602 | 1613 | 54 | 60 ``` I've tried some different methods but they have all produced un-expected results and I am sure it is just because I do not know how. Any help would be appreciated. Cheers, Johnathan
Try using a sub select. I also had to CONVERT the dates to be able to compare them to o.dateinfo. This way you don't have to manually build the dates since they already exist in dateinfo. ``` SELECT o.dateinfo , (SELECT COUNT(*) FROM wkaw WHERE (CONVERT(CHAR(6), cmpl_date, 112) >= o.dateinfo OR cmpl_date IS NULL) AND CONVERT(CHAR(6), org_date, 112) < o.dateinfo) AS open_wo_count ,opened ,closed FROM ( SELECT MONTH(org_date) + (YEAR(org_date) * 100) AS dateinfo ,COUNT(wo) AS opened FROM wkaw WHERE org_date >= DATEFROMPARTS(YEAR(GETDATE()) - 1, MONTH(GETDATE()), 01) GROUP BY MONTH(org_date) + (YEAR(org_date) * 100) ) o INNER JOIN ( SELECT MONTH(cmpl_date) + (YEAR(cmpl_date) * 100) AS dateinfo ,COUNT(wo) AS closed FROM wkaw WHERE cmpl_date >= DATEFROMPARTS(YEAR(GETDATE()) - 1, MONTH(GETDATE()), 01) AND cmpl_date IS NOT NULL GROUP BY MONTH(cmpl_date) + (YEAR(cmpl_date) * 100) ) c ON o.dateinfo = c.dateinfo ORDER BY o.dateinfo ```
You could make a table that stores all the start-of-month dates you need. (This could either be a utility table that you generate once, or it could be a temp table that gets generated on the fly.) In your case we can probably cheat a little. **As long as each month has at least one work order in it** we can use the work order table itself to make a list of the dates we need. Something like: ``` SELECT DISTINCT(DATEFROMPARTS(YEAR(org_date),MONTH(org_date),01)) AS d FROM wkaw ``` I believe these are the dates you wanted. Now we select the dates from this "table" along with a subquery that counts the open workorders as of each date. Something like: ``` SELECT MONTHSTART.d, (SELECT COUNT(wo) FROM wkaw WHERE (cmpl_date >= MONTHSTART.d OR cmpl_date IS NULL) AND org_date < MONTHSTART.d) FROM (SELECT DISTINCT(DATEFROMPARTS(YEAR(org_date),MONTH(org_date),01)) AS d FROM wkaw) MONTHSTART ``` This doesn't quite give you your final query, but its pretty close. To join it back to your original query, try replacing `SELECT MONTHSTART.d` with `SELECT MONTH(MONTHSTART.d) + (YEAR(MONTHSTART.d)*100) as dateinfo` and combine that with your original query.
Unsure how to do multiple nested SELECT statements while using GROUP BY
[ "", "sql", "sql-server", "" ]
I am strugling forming a query to accomplish what I need using LEFT JOIN. I'm afraid that I'm going about this the wrong way, so I reach out to the community. I have two databases, one for **categories** and one for **items** as follows ``` CATEGORIES id name private 1 Apples 1 2 Oranges 1 3 Grapes 0 ITEMS id name category 1 Mcintosh 1 2 Fuji 1 3 Green 3 ``` **The Question** What I need to do is form a query that will select only the CATEGORIES that are private = 1 and that also have cases assigned to their category id (ie. cannot have 0 cases such as the Oranges category). So in this case, my query would result in only 1 result: Apple
``` select c.* from categories c, (select distinct category from items) i where c.id = i.category and c.private = 1 ```
What you need is `INNER JOIN` and `DISTINCT` ``` SELECT DISTINCT name FROM CATEGORIES INNER JOIN ITEMS ON CATEGORIES.id=ITEMS.category WHERE CATEGORIES.private=1 ```
How to form a query across two tables
[ "", "mysql", "sql", "" ]
``` BREAK ON DEPTNO SKIP 1 compute sum of sal on deptno SELECT deptno, empno, ename,sal FROM (SELECT deptno, empno, ename, sal FROM emp ) WHERE EXISTS (SELECT deptno FROM dept) order by 1,2 , sal desc ; ``` How can I get two highest sal from emp, and what is wrong with my code?
It is not entirely clear what you want. In the title you say "two highest salary", but in the comment you mention something about a sum. The following will show the two highest salaries. If there are multiple "highest" salaries, all will be shown ``` select deptno, empno, ename, sal from ( SELECT deptno, empno, ename, sal, dense_rank() over (order by sal desc) as rnk FROM emp ) where rnk <= 2 order by sal desc; ``` To get this per department, you can use this: ``` select deptno, dept_salary from ( select deptno, dept_salary, dense_rank() over (order by dept_salary desc) as rnk from ( SELECT deptno, sum(sal) as dept_salary FROM emp group by deptno ) t1 ) t2 where rnk <= 2 order by dept_salary desc ```
If you want all rows with the two highest distinct salaries in each department, then use `dense_rank()` as follows: ``` select deptno, empno, ename, sal from (select e.*, dense_rank() over (partition by deptno, order by sal desc) as seqnum from emp e ) e where seqnum <= 2 order by deptno, sal desc; ``` It looks like the question will be deleted, but it might as well have a correct answer.
How to get two highest salary in emp table?
[ "", "sql", "oracle", "rownum", "" ]
I have 10 columns (items) in one table, 10 in another table. I want to identify cases where any one of the columns/items in the one table match any one of the columns/items in the other table. My code below is for three items and is inefficient - how can it be modified for 10 columns/items? ``` CASE WHEN (t1.[item1] = t2.[item1] OR t1.[item1] = t2.[item2] OR t1.[item1] = t2.[item3]) OR (t1.[item2] = t2.[item2] OR t1.[item2] = t2.[item1] OR t1.[item2] = t2.[item3]) OR (t1.[item3] = t2.[item3] OR t1.[item3] = t2.[item1] OR t1.[item3] = t2.[item2]) THEN 1 ELSE 0 END AS [match] ```
I don't know if this is more efficient, but it's simpler to write: Use IN instead of equality test. e.g., ``` WHEN t1.item1 IN (T2.item1, T2.item2, T2.item3) ``` For a more efficient way, (but more involved process) use UNPIVOT to convert each column value into a separate row for each table. If the new tables are only used for this process, put a clustered index on the current key-column, as well as the item column. From here, you could do a JOIN to find matching values.
If you do not want to normalize the table you can use "IN" syntax and generate case condition in program: ``` SELECT tbl1.id, tbl1.a, tbl1.b, tbl1.c, MAX(CASE WHEN tbl1.a IN (tbl2.a,tbl2.b,tbl2.c) OR tbl1.b IN (tbl2.a,tbl2.b,tbl2.c) OR tbl1.c IN (tbl2.a,tbl2.b,tbl2.c) THEN 1 ELSE 0 END) AS match FROM (VALUES (1,1,2,3),(2,2,3,4),(3,3,4,5)) AS tbl1(id,a,b,c), (VALUES (1,6,7,8),(2,7,8,9),(3,1,5,8)) AS tbl2(id,a,b,c) GROUP BY tbl1.id,tbl1.a,tbl1.b,tbl1.c ```
SQL Server : CASE when any of 10 columns agree
[ "", "sql", "sql-server-2008", "case", "" ]
I have a workers table and an associated workerGeofence table. ``` CREATE TABLE IF NOT EXISTS `workergeofences` ( `ID` int(11) NOT NULL, `WorkerID` varchar(20) NOT NULL, `GeofenceID` int(11) NOT NULL, `isActive` tinyint(4) NOT NULL ) ENGINE=InnoDB AUTO_INCREMENT=107 DEFAULT CHARSET=latin1; ``` I need to return only workers who have at least one entry in the workerGeofences table with an isActive of 1. I'm able to get the desired outcome withe following: ``` SELECT distinct w.ID, Title, FName, SName, Email, Birthday, Address, Phone, description, companyID FROM Workers w WHERE companyID = ? and w.ID IN (SELECT WorkerID FROM WorkerGeofences WHERE isActive <> 0) limit ?,10 ``` but the in subquery is exhaustive as when I run the explain, I can see it is scanning the entire table. How would I get around this?
First of all , your join is wrong! you are not comparing any common column on both table, you should add where workerGeofences.workerID = w.id like this: ``` SELECT w.ID, Title, FName, SName, Email, Birthday, Address, Phone, description, companyID FROM Workers w join workerGeofences WHERE workerGeofences.workerID = w.ID companyID = ? and w.ID IN ( SELECT WorkerID FROM WorkerGeofences s WHERE isActive <> 0 and s.workerID = w.id ) limit 0,10 ``` And second, you are not selecting anything from the second table, so the join is unessesary and in your IN statement, you are not comparing the right ID's so your query should be: ``` SELECT w.ID, Title, FName, SName, Email, Birthday, Address, Phone, description, companyID FROM Workers w WHERE companyID = ? and w.ID IN ( SELECT WorkerID FROM WorkerGeofences s WHERE isActive <> 0 and s.workerID = w.ID ) limit 0,10 ``` Also, you can use EXISTS() for that. ``` SELECT w.ID, Title, FName, SName, Email, Birthday, Address, Phone, description, companyID FROM Workers w WHERE companyID = ? and exists ( SELECT 1 FROM WorkerGeofences s WHERE isActive = 1 and s.workerID = w.ID ) limit 0,10 ```
You are on the right track, but you shouldn't need `select distinct`. This slows down queries, unless you know there are duplicates -- and that is unlikely because you are selecting `WOrkers.Id`. ``` SELECT w.* FROM Workers w WHERE w.companyID = ? AND EXISTS (SELECT 1 FROM workerGeofences wg WHERE w.ID = wg.WorkerID AND wg.isActive <> 0 ) LIMIT ?, 10; ``` Then, for this query, you want indexes on `Workers(CompanyId, Id)` and `workerGeofences(WorkerId, isActive)`. Note: I just put in `select *` for convenience. I assume all the columns are coming from the `Workers` table.
optimise IN subquery
[ "", "mysql", "sql", "join", "query-optimization", "in-subquery", "" ]
I have a field in our SQL Server database that is basically two fields concatenated together. It has a descriptor and a number. I want to build a view with just the number so that I can relate it to other tables. The number is actually typed as a nvarchar on the other tables. So from data like this I want to query for just the number portion: ProgramField with values: ``` tst_desc:1 tst_desc:124 tst_desc:1495 tst_desc:20483 ``` So I'd like my query to return a result of: ProgramNumField ``` 1 124 1495 20483 ``` The number is variable in length growing over time and needs to be nvarchar so I can relate it to the other tables in the database. [![enter image description here](https://i.stack.imgur.com/ND4Cl.jpg)](https://i.stack.imgur.com/ND4Cl.jpg)
If your prefix is always `tst_desc:` then you can simply strip it off with `replace`: ``` select replace(ProgramField, 'tst_desc:', '') as ProgramNum from yourTable ``` If prefix can be different but always separated from value with colon, you can use something like: ``` select right(ProgramField, len(ProgramField) - charindex(':', ProgramField)) as ProgramNum from yourTable ```
Easiest way is using STUFF ``` SELECT STUFF(ProgramField, 1, charindex(':', ProgramField), '') FROM yourtable ``` If you have dirty data with more than one colon in some row or colon is missing, you can search from right to left for the first none numeric character, you could use this method, This can handle all sorts of funny data: ``` SELECT STUFF(RIGHT('@'+ProgramField, PATINDEX('%[^0-9]%',REVERSE(ProgramField)+'@')),1,1,'') ```
How to De-Concatenate a Field
[ "", "sql", "sql-server", "t-sql", "" ]
I tried to write a SQL function in OrientDB to do inserts on a table `Feature`. The function body looks like this and it accepts two parameters `w` and `t`. `insert into Feature (weight, title) values (w, t) return @rid` When I execute this function with the parameters specified as `12` and `some title` the response I get is this. `[ { "@type": "d", "@rid": "#21:24135", "@version": 1, "@class": "Feature", "weight": null, "title": null } ]` It looks as if the parameters have not been set, hence the values `12` and `some title` have no visibility within the function. I also tried using `$` like special characters preceding the parameter name in the function body (refering to the parameter values as `$t` insteaed of `t`) and still didn't get lucky.
You can use this query ``` insert into Feature (weight, title) values (:w, :t) return @rid ```
I think you executed the command `insert into Feature (weight, title) values (w, t) return @rid` but you forgot to insert the quotes when you pass the parameters w and t. Try this code: ``` var g=orient.getGraph(); var ins = g.command('sql','insert into Feature (weight, title) values ('+w+', "'+t+'")'); return ins; ``` **Output**: ``` [ { "@type": "d", "@rid": "#12:-2", "@version": 0, "@class": "Feature", "weight": 14, "title": "sometitle", "@fieldTypes": "weight=d" } ] ``` Hope it helps
OrientDB SQL function parameter values are not accessible inside the function body
[ "", "sql", "function", "orientdb", "" ]
I have a data that looks like this: ![table1](https://s23.postimg.org/3laz5cit7/row_num.png) What I want is to have a row number that is group by GroupCode,Group Description,SubGroup and subgroup class and I want to retain the ordering by account code that will look like this: ![table2](https://s17.postimg.org/cjie8gvlb/row_num2.png) What's the proper way of seting a row number at the same time grouping them?
You are looking for `dense_rank()`: ``` select dense_rank() over (order by GroupCode, GroupDescription, SubGroup) . . . ``` However, this doesn't guarantee the ordering by `accountCode`. That will require more work. First, determine the minimum account code for each grouping, then use `dense_rank()` on that: ``` select t.*, dense_rank() over (order by minac) from (select t.*, min(accountCode) over (partition by GroupCode, GroupDescription, SubGroup) as minac from t ) t ```
You're looking for the `DENSE_RANK` window function: ``` SELECT rn = DENSE_RANK() OVER(ORDER BY GroupCode, GroupDescription, SubgroupClass), * FROM tbl ORDER BY rn, AccountCode ```
ROW_NUMBER and Grouping of data
[ "", "sql", "sql-server", "" ]
How to Split SQL string with (# and ;)? Billing Column where string is saved in below format ``` 548784545S#15/01/2016;84854555545#13/01/2016;45454554545#21/01/2016 ``` Split # and ; I want below format ``` 548784545S 15/01/2016 84854555545 13/01/2016 45454554545 21/01/2016 ``` So Far what i did ``` SELECT SUBSTRING(CNT_SHIPPING_BILLNO, 1, CASE CHARINDEX('#', CNT_SHIPPING_BILLNO) WHEN 0 THEN LEN(CNT_SHIPPING_BILLNO) ELSE CHARINDEX('#', CNT_SHIPPING_BILLNO) - 1 END) AS Billno, SUBSTRING(CNT_SHIPPING_BILLNO, CASE CHARINDEX('#', CNT_SHIPPING_BILLNO) WHEN 0 THEN LEN(CNT_SHIPPING_BILLNO) + 1 ELSE CHARINDEX(';', CNT_SHIPPING_BILLNO) + 1 END, 1000) AS BillDate FROM SHIPPINGBILL ``` Here I am getting only initial Value using above Query ``` 548784545S 15/01/2016;84854555545#13/01/2016;45454554545#21/01/2016 ```
``` DECLARE @t VARCHAR(MAX) = '548784545S#15/01/2016;84854555545#13/01/2016;45454554545#21/01/2016' ;WITH cte AS ( SELECT x = CONVERT(XML,'<p>' + REPLACE(@t, ';', '</p><p>') + '</p>') ) SELECT PARSENAME(val, 2), PARSENAME(val, 1) FROM ( SELECT val = REPLACE(t.c.value('.', 'VARCHAR(MAX)'), '#', '.') FROM cte CROSS APPLY x.nodes('p') t(c) ) t ``` Output - ``` --------------- ------------ 548784545S 15/01/2016 84854555545 13/01/2016 45454554545 21/01/2016 ```
First create a `Table valued` function to split the string with a delimiter of `;`. **Function : fn\_Split** ``` CREATE FUNCTION [dbo].[fn_Split](@text varchar(8000), @delimiter varchar(20) = ' ') RETURNS @Strings TABLE ( position int IDENTITY PRIMARY KEY, value varchar(8000) ) AS BEGIN DECLARE @index int SET @index = -1 WHILE (LEN(@text) > 0) BEGIN SET @index = CHARINDEX(@delimiter , @text) IF (@index = 0) AND (LEN(@text) > 0) BEGIN INSERT INTO @Strings VALUES (@text) BREAK END IF (@index > 1) BEGIN INSERT INTO @Strings VALUES (LEFT(@text, @index - 1)) SET @text = RIGHT(@text, (LEN(@text) - @index)) END ELSE SET @text = RIGHT(@text, (LEN(@text) - @index)) END RETURN END ``` Then use this function. **Query** ``` DECLARE @str AS VARCHAR(MAX); SET @str = '548784545S#15/01/2016;84854555545#13/01/2016;45454554545#21/01/2016'; SELECT LEFT(value, CHARINDEX('#', value, 1) - 1) AS Billno, RIGHT(value, CHARINDEX('#', REVERSE(value), 1) - 1) AS BillDate FROM dbo.fn_Split(@str, ';'); ``` **Result** ``` +-------------+------------+ | Billno | BillDate | +-------------+------------+ | 548784545S | 15/01/2016 | | 84854555545 | 13/01/2016 | | 45454554545 | 21/01/2016 | +-------------+------------+ ```
Split Multipile delimiter from SQL String
[ "", "sql", "sql-server", "" ]
I am currently having to use two different queries to get back the results that I'm looking for. I have tried combining the two queries together, but that ends with me getting a large amount of extra(duplicates) data that I do not need. Below I have the working query listed. ``` SELECT p1.note as Itemcode, order.ID as OrderNo, piece1.ID As Piece1, piece2.ID As Piece2, i1.count as Unit, unit.count as TotalUnits, i1.rack as RackNo, p1.EndDate as Piece1Finish, p2.EndDate as Piece2Finish, unit.group as BatchNo FROM db.dbo.unit JOIN db.dbo.order on order.entry_ID = unit.entry_ID JOIN db.dbo.piece piece1 on piece1.ID_piece = unit.ID_piece_1 JOIN db.dbo.piece piece2 on piece2.ID_piece = unit.ID_piece_3 JOIN db.dbo.items i1 on i1.ID_unit = unit.ID_unit JOIN db.dbo.items i2 on i2.ID_unit = unit.ID_unit JOIN db.dbo.items i3 on i3.ID_unit = unit.ID_unit JOIN db.dbo.items i4 on i4.ID_unit = unit.ID_unit JOIN db.dbo.process p1 on p1.ID_process = i1.ID_process JOIN db.dbo.process p2 on p2.ID_process = i2.ID_process JOIN db.dbo.process p3 on p3.ID_process = i3.ID_process JOIN db.dbo.process p4 on p4.ID_process = i4.ID_process WHERE p1.note like '12A%' and p1.ID_pieceorder = '1' and p1.ID_job = '150' and p2.ID_pieceorder = '3' and p2.ID_job = '150' and i1.count = i2.count and i1.count = i3.count and i1.count = i4.count and i1.rack = i2.rack and p1.note = p2.note and i1.status = '1' and i2.status = '1' and p3.ID_pieceorder = '0' and p4.ID_pieceorder = '2' and p3.ID_job = '153' and p4.ID_job = '151' and i3.status = '0' and i4.status = '0' and order.status <> '4' ORDER BY OrderNo ``` This query works fine. The second set of data I query to find adds the following information ``` SELECT ...(same as above) FROM ...(same as above plus the following) JOIN db.dbo.items i5 on i5.ID_unit = unit.ID_unit JOIN db.dbo.items i6 on i6.ID_unit = unit.ID_unit JOIN db.dbo.process p5 on p5.ID_process = i5.ID_process JOIN db.dbo.process p6 on p6.ID_process = i6.ID_process WHERE p1.note like '12B%' and ... (same as above plus the following)... and p5.ID_pieceorder = '1' and p5.ID_job = '152' and p6.ID_pieceorder = '3' and p6.ID_job = '152' and i5.status = '1' and i6.status = '1' and i1.count = i5.count and i1.count = i6.count ``` When I try to make a combined query, the table joins of `i5`, `i6`, `p5`, and `p6` produce a massive amount of duplicated results for `p1.note like '12A%'` due to it not needing the fields. Is there a method where I can initiate a join in the `WHERE` statement so that it will only use those two tables when `p1.note like '12B%'`? Something along the lines of ``` SELECT .... FROM .... WHERE (p1.note like '12A%' or (p1.note like '12B%' and p5.ID_pieceorder = '1' and p5.ID_job = '152' and p6.ID_pieceorder = '3' and p6.ID_job = '152' and i5.status = '1' and i6.status = '1' and i1.count = i5.count and i1.count = i6.count (JOIN db.dbo.items i5 on i5.ID_unit = unit.ID_unit JOIN db.dbo.items i6 on i6.ID_unit = unit.ID_unit JOIN db.dbo.process p5 on p5.ID_process = i5.ID_process JOIN db.dbo.process p6 on p6.ID_process = i6.ID_process))) ``` I know that the syntax above will not work, but I am looking for a method similar to that. \*\*EDIT FOR TABLE STRUCTURE requested by DRapp I'm going to fill out the results I get with ID\_unit 782327 ``` -db.dbo.items- -ID_item- -ID_process- -count- -status- -rack- -ID_unit- 628335 782328 1 0 25 782327 628336 782330 1 1 25 782327 628337 782331 1 1 25 782327 628338 782333 1 0 25 782327 628339 782335 1 1 25 782327 628340 782336 1 1 25 782327 628341 782337 1 0 25 782327 -db.dbo.process- -ID_process- -ID_unit- -ID_pieceorder- -ID_job- -sequence- 782328 782327 0 50 1 782329 782327 1 5305 1 782330 782327 1 150 1 782331 782327 1 152 2 782332 782327 2 5408 2 782333 782327 2 151 1 782334 782327 3 5308 3 782335 782327 3 150 1 782336 782327 3 152 2 782337 782327 0 153 4 -db.dbo.unit- -ID_unit- -status- -ID_piece_1- -ID_piece_2--ID_piece_3--ID_product-count 782327 2 5305 5408 5308 50 1 ``` db.dbo.items contains a unique key for each "item" that goes into a part, it contains the corresponding process code, the "count" field, the "status" of each "item", the rack for each "item", and the unit ID of each "ITEM". db.dbo.process contains the unique key for each "process", the ID\_unit, `ID_pieceorder`, `ID_job`, and the `sequence`. Each unit starts with `ID_pieceorder` 0 which is establishing the product type, then `ID_piece` 1 for the first piece, etc.. db.dbo.unit contains the unique key for each "unit", the codes for all its pieces, the status of the unit, and `count` which is the number of units. ``` db.dbo.piece is a table with the master list of codes on it. -ID_Piece- -Desc- 5305 14black 5408 14blue 150 Cut ``` The point of needing 6 processes is as follows p1 & i1: Making sure the Piece is Piece 1, is process 150, and that process 150 is complete p2 & i2: Making sure the piece is piece 2, is process 150, and that process 150 is complete p3 & i3: Making sure it the completion process, 153, is not completed. p4 & i4: making sure the connector process, 152, isn't completed(hasn't connected the two pieces). p5 & i5: some pieces require extra work done, this checks and makes sure that process, 152, is completed on piece 1. P6 & i6: The same as p5 & i5 but for piece 2.
if you don't want duplicates use a Union in between them, move you order by clause to the bottom of the second one. Make sure you select columns are in the same order in both statements. ``` select (your columns) From (tables and joins) where (your where) UNION select (your columns) From (tables and joins) where (your where) order by ```
Per chatroom clarification, I have come up with the the following for you to start with and minimally adjust as needed. ``` SELECT u.id_unit, u.count as TotalUnits, u.group as BatchNo, p1.ID_PieceOrder as Piece1Order, p1Cut.ID_Process as Piece1CutProcessID, i1Cut.Count as Item1CutCount, i1Cut.Status as Item1CutStatus, p1Tamp.ID_PieceOrder as Piece1OrderTampered, p1Tamp.ID_Process as Piece1TamperedProcessID, i1Tamp.Count as Item1TamperedCount, i1Tamp.Status as Item1TamperedStatus, p2.ID_PieceOrder as Piece2Order, p2Cut.ID_Process as Piece2CutProcessID, i2Cut.Count as Item2CutCount, i2Cut.Status as Item2CutStatus, p2Tamp.ID_PieceOrder as Piece2OrderTampered, p2Tamp.ID_Process as Piece2TamperedProcessID, i2Tamp.Count as Item2TamperedCount, i2Tamp.Status as Item2TamperedStatus, pSpacer.ID_PieceOrder as SpacerPieceOrder, iSpacer.Count as SpacerCount, iSpacer.Status as SpacerStatus, pSealed.ID_PieceOrder as SealedPieceOrder, iSealed.Count as SealedCount, iSealed.Status as SealedStatus FROM db.dbo.unit u JOIN Process p1 ON u.ID_Unit = p1.ID_Unit AND u.ID_Piece_1 = p1.ID_Job JOIN Process p1Cut -- ALL PIECES MUST BE CUT first ON p1.ID_Unit = p1Cut.ID_Unit AND p1.ID_PieceOrder = p1Cut.ID_PieceOrder AND p1Cut.ID_Job = 150 JOIN Items i1Cut ON p1Cut.id_process = i1Cut.id_process LEFT JOIN Process p1Tamp -- NOT ALL PIECES MUST BE TEMPERED ON p1Cut.ID_Unit = p1Tamp.ID_Unit AND p1Cut.ID_PieceOrder = p1Tamp.ID_PieceOrder AND p1Tamp.ID_Job = 152 LEFT JOIN Items i1Tamp ON p1Tamp.id_process = i1Tamp.id_process JOIN Process p2 ON u.ID_Unit = p2.ID_Unit AND u.ID_Piece_3 = p2.ID_Job JOIN Process p2Cut -- ALL PIECES MUST BE CUT first ON p2.ID_Unit = p2Cut.ID_Unit AND p2.ID_PieceOrder = p2Cut.ID_PieceOrder AND p2Cut.ID_Job = 150 JOIN Items i2Cut ON p2Cut.id_process = i2Cut.id_process LEFT JOIN Process p2Tamp -- NOT ALL PIECES MUST BE TEMPERED ON p2Cut.ID_Unit = p2Tamp.ID_Unit AND p2Cut.ID_PieceOrder = p2Tamp.ID_PieceOrder AND p2Tamp.ID_Job = 152 LEFT JOIN Items i2Tamp ON p2Tamp.id_process = i2Tamp.id_process LEFT JOIN Process PSpacer ON u.ID_Unit = PSpacer.ID_Unit AND u.ID_Job = 151 LEFT JOIN Items ISpacer ON PSpacer.ID_Process = ISpacer.ID_Process LEFT JOIN Process PSealed ON u.ID_Unit = PSealed.ID_Unit AND u.ID_Job = 153 LEFT JOIN Items ISealed ON PSealed.ID_Process = ISealed.ID_Process WHERE u.ID_Unit IN (782327, 782328, 782329 ) ``` -- just a sample of 3 unit IDs to test concept of the revised query structure I started at the unit table. From that, I am taking each piece down its respective path... First to get the piece, then from the piece, its CUT which will always be required. From the CUT to its ITEM status record which will always exist. From the CUT, I am then looking for a TAMPERED status for the same piece order. Since not all glass needs to be tampered, I have this as a LEFT-JOIN. I then LEFT-JOIN to the tampered's item record by the process id. I do the same for the SECOND piece. Then, a LEFT-JOIN to see IF there is a spacing required (can change to JOIN if spacer is ALWAYS required) Finally, a LEFT-JOIN on the job completed entry. Again, don't know if all the entries are pre-filled in up-front for ALL stages of the process vs not. Notice my alias name reference from P1 for the process for piece 1, then the p1Cut to i1Cut, to p1Tamp to i1Tamp. So now, instead of generic p1-p6, I have context to the process or item. Additionally, I just grabbed all the columns from the respective piece, cut, tampered, spacer and sealed steps with clear column names. If you don't want something you can always remove it, but with this, you SHOULD never get any duplicates per a single UNIT. I did not add the join to the order or piece tables respectively. I don't know if piece is really needed unless you had other columns such as the P1.Name, but your table structure dump showed the description. Since I did not add any other qualifiers, you should get the entire status of a single UNIT in one row. I would adjust the WHERE clause to pick a range of a few units you can test and confirm the context. THEN, adjust the where clause to look for your other criteria such as an item count = another or not, or piece within your '%a%' context or not. However, I THINK this will get you well on your way.
SQL join in the where statement
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have a Table With the below data ``` Table X Seq_no A Claim Payment Balance (to be calculated) 1 abc 100 10 90 (100-10) 2 abc 50 40 (90-50) 3 abc 20 20 (40-20) 1 xyz 150 10 140 (150-10) 1 qwe 200 10 190 (200-10) ``` I need to calculate the column Balance. I am trying with the below Query SQL > ``` Select Seq_no, a, Claim, Payment, CASE When Seq_no =1 then (claim-Payment) Else ( lag(Balance)- Payment over (order by Balance)) END as Balance from table X ``` However i am getting a error ``` ORA-00904: "Balance": invalid identifier 00904. 00000 - "%s: invalid identifier" ``` I believe this is because Balance is not an existing column name. Is there a correct way to achieve the results.? ***Update:*** **\*I missed an important part. \*** The data i have is in the below format: ``` Table X Seq_no A Claim Payment 1 abc 100 10 2 abc 100 50 3 abc 100 20 1 xyz 150 10 1 qwe 200 10 ``` I need the results in the below format. ``` Table X Seq_no A Claim Payment Balance (to be calculated) 1 abc 100 10 90 (100-10) 2 abc 50 40 (90-50) 3 abc 20 20 (40-20) 1 xyz 150 10 140 (150-10) 1 qwe 200 10 190 (200-10) ``` The Seq\_no calculation has been done to make the claim column null for the cases of duplicate claims, which i had figured out already.
You cannot reference balance before create it. You can use a "running sum" for what you want to achieve. Notice that I partitioned by A because you want another balance for every A. ``` Select Seq_no, a, Claim, Payment, sum(nvl(claim,0) - payment) over (partition by A order by seq_no) as Balance from X; ``` Result: ``` SEQ_NO A CLAIM PAYMENT BALANCE 1 abc 100 10 90 2 abc 50 40 3 abc 20 20 1 qwe 200 10 190 1 xyz 150 10 140 ``` **EDIT:** With newer dataset you just need to replace nvl function with a case when seq=1: ``` Select Seq_no, a, case when seq_no=1 then claim else 0 end as Claim, Payment, sum(case when seq_no=1 then claim else 0 end - payment) over (partition by A order by seq_no) as Balance from X; ```
In order to use `balance` as a column identifier, you must go one level deep, i.e. make your existing query as a **sub-query**. **Working demo:** ``` SQL> WITH sample_data AS( 2 SELECT 1 Seq_no, 'abc' A, 100 Claim, 10 Payment FROM dual UNION ALL 3 SELECT 2 Seq_no, 'abc' A, NULL Claim, 50 FROM dual UNION ALL 4 SELECT 3 Seq_no, 'abc' A, NULL Claim, 20 FROM dual UNION ALL 5 SELECT 1 Seq_no, 'xyz' A, 150 Claim, 10 FROM dual UNION ALL 6 SELECT 1 Seq_no, 'qwe' A, 200 Claim, 10 FROM dual 7 ) 8 -- end of sample_data mimicking real table 9 SELECT seq_no, A, claim, payment, 10 CASE 11 WHEN lag(balance) OVER(PARTITION BY A ORDER BY seq_no) IS NULL 12 THEN balance 13 ELSE lag(balance) OVER(PARTITION BY A ORDER BY seq_no) - payment 14 END balance 15 FROM 16 (SELECT seq_no, A, claim, payment, 17 CASE 18 WHEN seq_no = 1 19 THEN claim - payment 20 ELSE lag(claim - payment) OVER(PARTITION BY A ORDER BY seq_no) - payment 21 END balance 22 FROM sample_data 23 ); SEQ_NO A CLAIM PAYMENT BALANCE ---------- --- ---------- ---------- ---------- 1 abc 100 10 90 2 abc 50 40 3 abc 20 20 1 qwe 200 10 190 1 xyz 150 10 140 SQL> ``` **UPDATE** : If you not-null values always for `claim`, then you could do a running diff: ``` SQL> WITH sample_data AS( 2 SELECT 1 Seq_no, 'abc' A, 100 Claim, 10 Payment FROM dual UNION ALL 3 SELECT 2 Seq_no, 'abc' A, 100 Claim, 50 FROM dual UNION ALL 4 SELECT 3 Seq_no, 'abc' A, 100 Claim, 20 FROM dual UNION ALL 5 SELECT 4 Seq_no, 'abc' A, 100 Claim, 10 FROM dual UNION ALL 6 SELECT 5 Seq_no, 'abc' A, 100 Claim, 10 FROM dual UNION ALL 7 SELECT 1 Seq_no, 'xyz' A, 150 Claim, 10 FROM dual UNION ALL 8 SELECT 1 Seq_no, 'qwe' A, 200 Claim, 10 FROM dual 9 ) 10 -- end of sample_data mimicking real table 11 SELECT Seq_no, 12 a, 13 Claim, 14 Payment, 15 CASE 16 WHEN seq_no = 1 17 THEN claim - payment 18 ELSE SUM(claim - payment) over (partition BY A order by seq_no) - claim*(seq_no-1) 19 END balance 20 FROM sample_data; SEQ_NO A CLAIM PAYMENT BALANCE ---------- --- ---------- ---------- ---------- 1 abc 100 10 90 2 abc 100 50 40 3 abc 100 20 20 4 abc 100 10 10 5 abc 100 10 0 1 qwe 200 10 190 1 xyz 150 10 140 7 rows selected. SQL> ```
SQL: Difference in amounts between rows and columns
[ "", "sql", "database", "oracle", "" ]
I was going through the different commands in SQL and I came across `ALTER` command which is referred as DDL (Data Definition Language). We can alter the column and values in it, so we can manipulate the data with this command so why does alter command is not referred as DML (Data Manipulation Language). I have googled and I cannot come across some good explanation, so please help me with this.
`ALTER` command is used to alter the *structure* of the database. And this is what DDL does, i.e. DDL statements are used to define the *database structure* or *schema*. Whereas DML statement is used to manage *data within schema* objects.
DDL - alter the *schema*. This including creating tables, renaming columns, dropping views, etc. Such statements are DDL even though such might create (default value), alter (by conversion), or even lose (removed column) data as part of the process. Basically, any `CREATE/DROP/ALTER` command is DDL. DML - alter the *information/data* within the schema; *without* updating the schema. This includes `DELETE` and `UPDATE` statements. Sometimes DDL and DML must be used together to correctly migrate a schema; but they are two distinct categories of SQL commands, and DML never causes the schema to be changed.
Why ALTER command is referred as DDL and not DML?
[ "", "sql", "ddl", "alter", "dml", "" ]
I want to get some data from my database for the past 31 days. When column 'finalized' is NULL I want to get data from column 'today', but if column 'finalized' isn't NULL I want to get data from column 'finalized'. I always want to get the last entered row. TABLE MyEarnings ``` id INT(11) AI date datetime today decimal(4,2) NULL finalized decimal(4,2) NULL id date today finalized ----------------------------------------------- 6 2016-02-04 04:52:00 0.39 NULL 5 2016-02-03 12:34:00 NULL 19.74 4 2016-02-03 12:33:00 15.96 NULL 3 2016-02-03 12:32:00 12.32 NULL 2 2016-02-02 15:16:00 NULL 9.16 1 2016-02-02 14:29:00 2.20 NULL ``` SQL ``` SELECT date, CASE WHEN finalized=NULL THEN today WHEN finalized!=NULL THEN finalized END AS earn FROM MyEarnings GROUP BY DATE_FORMAT(date, '%Y-%m-%d') ORDER BY date ASC LIMIT 0 , 31 ``` This is what I end up with ``` date earn --------------------------- 2016-02-02 00:00:00 NULL 2016-02-03 00:00:00 NULL 2016-02-04 00:00:00 NULL ``` What I'm looking to get ``` date earn ---------------------------- 2016-02-02 00:00:00 9.16 2016-02-03 00:00:00 19.74 2016-02-04 00:00:00 0.39 ``` # EDIT I also want to get a summary for each month of all values in 'finalized'-column with the max 'id' for each day.
You need conditional aggregation, but it is a bit tricky. I think this does what you want: ``` SELECT DATE_FORMAT(date, '%Y-%m-%d'), COALESCE(MAX(finalized), MAX(today)) as earn FROM MyEarnings GROUP BY DATE_FORMAT(date, '%Y-%m-%d') ORDER BY date ASC LIMIT 0 , 31; ``` This returns the maximum value of `today`. You may want the most recent value. If so, the simplest method is probably the `GROUP_CONCAT()`/`SUBSTRING_INDEX()` method: ``` SELECT DATE_FORMAT(date, '%Y-%m-%d'), COALESCE(MAX(finalized), SUBSTRING_INDEX(GROUP_CONCAT(today ORDER BY date DESC), ',', 1) + 0 ) as earn FROM MyEarnings GROUP BY DATE_FORMAT(date, '%Y-%m-%d') ORDER BY date ASC LIMIT 0 , 31; ``` It is a bit yucky to convert numbers to strings and back for this purpose. Alternative methods require additional joins or using variables.
use `CASE` `WHEN` like this: ``` CASE WHEN finalized IS NULL THEN today ELSE finalized END AS earn ``` Replace query: ``` SELECT date, (CASE WHEN finalized IS NULL THEN today ELSE finalized END; ) AS earn FROM MyEarnings GROUP BY DATE_FORMAT(date, '%Y-%m-%d') ORDER BY date ASC LIMIT 0,31 ```
Trouble with SELECT CASE
[ "", "mysql", "sql", "select", "case", "" ]
I am trying to setup an Offset and Limit, I have tried the following: ``` SELECT [Job_No] , ROW_NUMBER() OVER (ORDER BY [Job_No]) AS [rownumber] , [BaselineStart] , [BaselineFinish] , [ExpectedStart] , [ExpectedFinish] , [ScheduledStart] , [ScheduledFinish] , [ActualStart] , [ActualFinish] FROM [Schedule] WHERE [rownumber] BETWEEN 10 AND 20; ``` but I get this error: > Invalid column name 'rownumber'. Please help.
You should add another step to your query. `ROW_NUMBER()` can't be used in the same query in which you define it, so you have to wrap it all in another select like below. The where clause relates to all the fields in the `FROM` table/query, so rownumber doesn't exist yet. ``` select * from (SELECT Job_No, ROW_NUMBER() OVER(ORDER BY Job_No) AS rownumber, BaselineStart, BaselineFinish, ExpectedStart, ExpectedFinish, ScheduledStart, ScheduledFinish, ActualStart, ActualFinish FROM Schedule) WHERE rownumber BETWEEN 10 AND 20 ```
In SQL 2012+ you can use the new [OFFSET and FETCH NEXT](http://sqlmag.com/sql-server-2012/using-new-offset-and-fetch-next-options) arguments in the `ORDER BY` clause: ``` SELECT Columns FROM MyTable ORDER BY SomeColumn OFFSET 10 ROWS --this means start with row 11 FETCH NEXT 10 ROWS ONLY --this means limit the results to the next 10 rows. ``` So the above query will return rows 11 - 20, ordered by `SomeColumn`
SQL Server OFFSET and LIMIT
[ "", "sql", "sql-server", "" ]
My table in SQL is like:- ``` RN Name value1 value2 Timestamp 1 Mark 110 210 20160119 1 Mark 106 205 20160115 1 Mark 103 201 20160112 2 Steve 120 220 20151218 2 Steve 111 210 20151210 2 Steve 104 206 20151203 ``` Desired Output:- ``` RN Name value1Lag1 value1lag2 value2lag1 value2lag2 1 Mark 4 3 5 4 2 Steve 9 7 10 4 ``` The difference is calculated from the most recent to the second recent and then from second recent to the third recent for RN 1 value1lag1 = 110-106 =4 value1lag2 = 106-103 = 3 value2lag1 = 210-205 = 5 value2lag2 = 205-201 = 4 similarly for other RN's also. Note: For each RN there are 3 and only 3 rows. I have tried in several ways by taking help from similar posts but no luck.
I've assumed that RN and Name are linked here. It's a bit messy, but if each RN always has 3 values and you always want to check them in this order, then something like this should work. ``` SELECT t1.Name , AVG(CASE WHEN table_ranked.Rank = 1 THEN table_ranked.value1 ELSE NULL END) - AVG(CASE WHEN table_ranked.Rank = 2 THEN table_ranked.value1 ELSE NULL END) value1Lag1 , AVG(CASE WHEN table_ranked.Rank = 2 THEN table_ranked.value1 ELSE NULL END) - AVG(CASE WHEN table_ranked.Rank = 3 THEN table_ranked.value1 ELSE NULL END) value1Lag2 , AVG(CASE WHEN table_ranked.Rank = 1 THEN table_ranked.value2 ELSE NULL END) - AVG(CASE WHEN table_ranked.Rank = 2 THEN table_ranked.value2 ELSE NULL END) value2Lag1 , AVG(CASE WHEN table_ranked.Rank = 2 THEN table_ranked.value2 ELSE NULL END) - AVG(CASE WHEN table_ranked.Rank = 3 THEN table_ranked.value2 ELSE NULL END) value2Lag2 FROM table t1 INNER JOIN ( SELECT t1.Name , t1.value1 , t1.value2 , COUNT(t2.TimeStamp) Rank FROM table t1 INNER JOIN table t2 ON t2.name = t1.name AND t1.TimeStamp <= t2.TimeStamp GROUP BY t1.Name, t1.value1, t1.value2 ) table_ranked ON table_ranked.Name = t1.Name GROUP BY t1.Name ```
There are other answers here, but I think your problem is calling for [analytic functions](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm#SQLRF06174), specifically [LAG()](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions082.htm#SQLRF00652): ``` select rn, name, -- calculate the differences value1 - v1l1 value1lag1, v1l1 - v1l2 value1lag2, value2 - v2l1 value2lag1, v2l1 - v2l2 value2lag2 from ( select rn, name, value1, value2, timestamp, -- these two are the values from the row before this one ordered by timestamp (ascending) lag(value1) over(partition by rn, name order by timestamp asc) v1l1, lag(value2) over(partition by rn, name order by timestamp asc) v2l1 -- these two are the values from two rows before this one ordered by timestamp (ascending) lag(value1, 2) over(partition by rn, name order by timestamp asc) v1l2, lag(value2, 2) over(partition by rn, name order by timestamp asc) v2l2 from ( select 1 rn, 'Mark' name, 110 value1, 210 value2, '20160119' timestamp from dual union all select 1 rn, 'Mark' name, 106 value1, 205 value2, '20160115' timestamp from dual union all select 1 rn, 'Mark' name, 103 value1, 201 value2, '20160112' timestamp from dual union all select 2 rn, 'Steve' name, 120 value1, 220 value2, '20151218' timestamp from dual union all select 2 rn, 'Steve' name, 111 value1, 210 value2, '20151210' timestamp from dual union all select 2 rn, 'Steve' name, 104 value1, 206 value2, '20151203' timestamp from dual ) data ) where -- return only the rows that have defined values v1l1 is not null and v1l2 is not null and v2l1 is not null and v2l1 is not null ``` This approach has the benefit that Oracle does all the necessary buffering internally, avoiding self-joins and the like. For big data sets this can be important from a performance viewpoint. As an example, the explain plan for that query would be something like ``` ------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 6 | 150 | 13 (8)| 00:00:01 | |* 1 | VIEW | | 6 | 150 | 13 (8)| 00:00:01 | | 2 | WINDOW SORT | | 6 | 138 | 13 (8)| 00:00:01 | | 3 | VIEW | | 6 | 138 | 12 (0)| 00:00:01 | | 4 | UNION-ALL | | | | | | | 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | | 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | | 7 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | | 8 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | | 9 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | | 10 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | ------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("V1L1" IS NOT NULL AND "V1L2" IS NOT NULL AND "V2L1" IS ``` Note that there are no joins, just a WINDOW SORT that buffers the necessary data from the "data source" (in our case, the VIEW 3 that is the UNION ALL of our SELECT ... FROM DUAL) to partition and calculate the different lags.
Difference between the values of multiple rows in SQL
[ "", "sql", "oracle", "difference", "window-functions", "oracle-analytics", "" ]
I have **ONLY 1** table called `Meeting` that stores all meeting requests. This table can be **EMPTY**. It has several columns including **requestType** (which can only be "MT") **meetingStatus** (can only be either `pending`, `approved`, `denied` or `canceled`) and **meetingCreatedTime** I want to count how many requests of each status's type (in other words how many requests are pending, how many are approved, denied and canceled) for the last 30 days Problem is that if there is no request then nothing display but I want to display `0`, how do I do it? Here is my query now: ``` SELECT [requestType], ( SELECT COUNT ([requestType]) FROM [Meeting] WHERE CAST([meetingCreatedTime] AS DATE) >= CAST(DateAdd(DAY,-30,Getdate()) AS DATE) AND [meetingStatus] = 'Approved') As 'Approved', ( SELECT COUNT ([requestType]) FROM [Meeting] WHERE CAST([meetingCreatedTime] AS DATE) >= CAST(DateAdd(DAY,-30,Getdate()) AS DATE) AND [meetingStatus] = 'Pending') As 'Pending', ( SELECT COUNT ([requestType]) FROM [Meeting] WHERE CAST([meetingCreatedTime] AS DATE) >= CAST(DateAdd(DAY,-30,Getdate()) AS DATE) AND [meetingStatus] = 'Canceled') As 'Canceled', ( SELECT COUNT ([requestType]) FROM [Meeting] WHERE CAST([meetingCreatedTime] AS DATE) >= CAST(DateAdd(DAY,-30,Getdate()) AS DATE) AND [meetingStatus] = 'Denied') As 'Denied' FROM [Meeting] WHERE CAST([meetingCreatedTime] AS DATE) >= CAST(DateAdd(DAY,-30,Getdate()) AS DATE) GROUP BY [requestType] ``` Result: [![enter image description here](https://i.stack.imgur.com/WXEjM.png)](https://i.stack.imgur.com/WXEjM.png) What I want is: [![enter image description here](https://i.stack.imgur.com/CbifW.png)](https://i.stack.imgur.com/CbifW.png)
``` SELECT RT.requestType, SUM(CASE WHEN M.meetingStatus = 'Approved' THEN 1 ELSE 0 END) AS Approved, SUM(CASE WHEN M.meetingStatus = 'Pending' THEN 1 ELSE 0 END) AS Pending, SUM(CASE WHEN M.meetingStatus = 'Canceled' THEN 1 ELSE 0 END) AS Canceled, SUM(CASE WHEN M.meetingStatus = 'Denied' THEN 1 ELSE 0 END) AS Denied, FROM (SELECT DISTINCT requestType FROM Meeting) RT LEFT OUTER JOIN Meeting M ON M.requestType = RT.requestType AND M.meetingCreatedTime >= DATEADD(DAY, -30, GETDATE()) GROUP BY RT.requestType ``` The `SUM`s are a much clearer (IMO) and much more efficient way of getting the counts that you need. Using the requestType table (assuming that you have one) lets you get results for every request type even if there are no meetings of that type in the date range. The `LEFT OUTER JOIN` to the meeting table allows the request type to still show up even if there are no meetings for that time period. All of your `CAST`s between date values seem unnecessary.
This is one possible approach to force one line to be visible in any case. Adapt this to your needs... Copy it into an empty query window and execute... play around with the WHERE part... ``` DECLARE @Test TABLE (ID INT IDENTITY, GroupingKey VARCHAR(100)); INSERT INTO @Test VALUES ('a'),('a'),('b'); SELECT TOP 1 tbl.CountOfA ,tbl.CountOfB ,tbl.CountOfC FROM ( SELECT 1 AS Marker ,(SELECT COUNT(*) FROM @Test WHERE GroupingKey='a') AS CountOfA ,(SELECT COUNT(*) FROM @Test WHERE GroupingKey='b') AS CountOfB ,(SELECT COUNT(*) FROM @Test WHERE GroupingKey='c') AS CountOfC WHERE (1=1) --play here with (1=0) and (1=1) UNION ALL SELECT 2,0,0,0 ) AS tbl ORDER BY Marker ```
How to show 0 value using COUNT and SELECTon a SQL query
[ "", "sql", "sql-server", "count", "" ]
I have a table like this: ``` id unit 1 mm 2 cm 3 kg ``` When I perform pivot operation on this, I am getting result as follows: ``` 1 2 3 mm cm kg ``` Is it possible to get custom column names here, something like this: ``` d1 d2 d3 mm cm kg ``` I am using Pivot for this as: ``` IF OBJECT_ID('tempdb..#t') IS NOT NULL DROP TABLE #t GO CREATE table #t (id varchar(max),unit varchar(max)) insert into #t (id,unit)values (1,'kg'), (2,'cm'), (3,'mm'), (4,'m') DECLARE @statement NVARCHAR(max) ,@columns NVARCHAR(max) SELECT @columns = ISNULL(@columns + ',', '') + N'[' + cast(tbl.id as varchar(max)) + ']' FROM ( SELECT DISTINCT id FROM #t ) AS tbl SELECT @statement = 'select * INTO ##temp from ( SELECT id,[unit] FROM #t ) as s PIVOT (max(unit) FOR id in(' + @columns + ')) as pvt ' EXEC sp_executesql @statement = @statement SELECT * FROM ##temp DROP TABLE #t DROP TABLE ##temp ``` Is it possible? Thanks
``` IF OBJECT_ID('tempdb..#t') IS NOT NULL DROP TABLE #t GO CREATE TABLE #t ( id VARCHAR(10), unit VARCHAR(100) ) INSERT INTO #t (id, unit) VALUES ('1', 'kg'), ('2', 'cm'), ('3', 'mm'), ('4', 'mm') DECLARE @SQL NVARCHAR(MAX), @columns NVARCHAR(MAX) SELECT @columns = STUFF(( SELECT ',[D' + id + ']' FROM #t FOR XML PATH('')), 1, 1, '') SELECT @SQL = ' SELECT * FROM ( SELECT [unit], col = N''D'' + id FROM #t ) s PIVOT (MAX(unit) FOR col IN (' + @columns + ')) p' EXEC sys.sp_executesql @SQL ```
Just add a prefix to your ID. Example ``` SELECT @statement = 'select * INTO ##temp from ( SELECT [id] = ''d''+id,[unit] FROM #t ) as s PIVOT (max(unit) FOR id in(' + @columns + ')) as pvt ' ``` Also it's terrible practice to use global temp tables! Especially one named ##temp
How to provide custom name to column in pivoting
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "pivot", "" ]
I have two tables, say `Table1` and `Table2` **Table1** ``` ╔════╦════╗ β•‘ ID β•‘ RN β•‘ ╠════╬════╣ β•‘ 11 β•‘ 1 β•‘ β•‘ 12 β•‘ 2 β•‘ β•‘ 13 β•‘ 3 β•‘ β•‘ 14 β•‘ 4 β•‘ β•‘ 15 β•‘ 5 β•‘ β•‘ 16 β•‘ 6 β•‘ β•‘ 17 β•‘ 7 β•‘ β•‘ 18 β•‘ 8 β•‘ β•‘ 19 β•‘ 9 β•‘ β•‘ 10 β•‘ 10 β•‘ β•šβ•β•β•β•β•©β•β•β•β•β• ``` **Table2** ``` ╔════╦════════╦══════╗ β•‘ ID β•‘ FromRN β•‘ ToRN β•‘ ╠════╬════════╬══════╣ β•‘ 1 β•‘ 1 β•‘ 3 β•‘ β•‘ 2 β•‘ 6 β•‘ 8 β•‘ β•‘ 3 β•‘ 10 β•‘ 10 β•‘ β•šβ•β•β•β•β•©β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β• ``` I want all those records from `Table1` whose `RN` lies between any range between `FromRN` and `ToRN` in `Table2` So my expected output is: ``` ╔════╦════╗ β•‘ ID β•‘ RN β•‘ ╠════╬════╣ β•‘ 11 β•‘ 1 β•‘ β•‘ 12 β•‘ 2 β•‘ β•‘ 13 β•‘ 3 β•‘ β•‘ 16 β•‘ 6 β•‘ β•‘ 17 β•‘ 7 β•‘ β•‘ 18 β•‘ 8 β•‘ β•‘ 10 β•‘ 10 β•‘ β•šβ•β•β•β•β•©β•β•β•β•β• ``` SQLFiddle to create schema can be found here: <http://sqlfiddle.com/#!3/90d50>
You can do an `INNER JOIN` of the two tables to filter out those records from `Table1` whose `RN` values do not fall into any range in `Table2`: ``` SELECT t1.ID, t1.RN FROM Table1 t1 INNER JOIN Table2 t2 ON t1.RN >= t2.FromRN AND t1.RN <= t2.ToRN ``` Follow the link below for a running demo (courtesy of the OP): [**SQLFiddle**](http://sqlfiddle.com/#!3/90d50/9)
You can try this: ``` select t1.Id, t1.RN from table1 t1 inner join table2 t2 on t1.Rn between t2.FromRn and t2.ToRn ```
Select Records between Range from another table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm exectuting command on large table. It has around 7 millons rows. Command is like this: ``` select * from mytable; ``` Now I'm restriting the number of rows to around 3 millons. I'm using this command: ``` select * from mytable where timest > add_months( sysdate, -12*4 ) ``` I have an index on timest column. But the costs are almost same. I would expect they will decrease. What am I doing wrong? Any clue? Thank you in advance! Here explain plans: [![enter image description here](https://i.stack.imgur.com/tCGDY.gif)](https://i.stack.imgur.com/tCGDY.gif) [![enter image description here](https://i.stack.imgur.com/gZnke.gif)](https://i.stack.imgur.com/gZnke.gif)
using an index for 3 out of 7 mio. of rows would most probably be even more expensive, so oracle makes a full table scan for both queries, which is IMO correct. You may try to do parallel FTS (Full Table Scan) - it should be faster, BUT it will put your Oracle server under higher load, so don't do it on heavy loaded multiuser DBs. Here is an example: ``` select /*+full(t) parallel(t,4)*/ * from mytable t where timest > add_months( sysdate, -12*4 ); ```
To select a *very small number of records from a table use index*. To select a **non-trivial part use partitioning**. In your case an effective acces would be enabled with range partitioning on `timest` column. The big advantage is that only relevant partitions are accessed. Here an exammple ``` create table test(ts date, s varchar2(4000)) PARTITION BY RANGE (ts) (PARTITION t1p1 VALUES LESS THAN (TO_DATE('2010-01-01', 'YYYY-MM-DD')), PARTITION t1p2 VALUES LESS THAN (TO_DATE('2015-01-01', 'YYYY-MM-DD')), PARTITION t1p4 VALUES LESS THAN (MAXVALUE) ); ``` Query ``` select * from test where ts < to_date('2009-01-01','yyyy-mm-dd'); ``` will access only the paartition 1, i.e. only before '2010-01-01'. See pstart an dpstop in execution plan ``` ----------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | ----------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 5 | 10055 | 9 (0)| 00:00:01 | | | | 1 | PARTITION RANGE SINGLE| | 5 | 10055 | 9 (0)| 00:00:01 | 1 | 1 | |* 2 | TABLE ACCESS FULL | TEST | 5 | 10055 | 9 (0)| 00:00:01 | 1 | 1 | ----------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter("TS"<TO_DATE(' 2009-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) ```
Performance of a sql query
[ "", "sql", "oracle", "performance", "oracle11g", "" ]
I have this 2 tables ``` Items ID Type ClientID ======================== 1 0 123 2 0 123 Texts ItemID Language Text ================================ 1 Eng Hi there! 1 Spa Holla! 2 Eng bye! 2 Spa bye in Spanish! ``` In my final result I wat the SQL to return me this table ``` ID Type Eng Spa ================================ 1 0 Hi there! Holla! 2 0 Bye! bye in Spanish! ``` I tried to create this statement: ``` SELECT DISTINCT I.ID ,I.Type, (SELECT T.Text WHERE D.Language='Eng') AS 'Eng', (SELECT T.Text WHERE D.Language='Spa') AS 'Spa' FROM Items I INNER JOIN Texts T ON I.ID=T.ItemID ``` but i get this result: ``` ID Type Eng Spa ================================ 1 0 Hi there! NULL 1 0 NULL Holla! 2 0 Bye! NULL 2 0 NULL bye in Spanish! ```
I don't see why a join is necessary. You can just use conditional aggregation: ``` select t.itemid, max(case when t.language = 'Eng' then t.text end) as Eng, max(case when t.language = 'Spa' then t.text end) as Spa from texts t group by t.itemid; ```
Use nesting to make two joins on same table with a filter (where clause) . Below i have tested on MySQL ``` SELECT i.id, eng_table.text AS eng, spa_table.text AS spa FROM i LEFT OUTER JOIN (SELECT ItemID AS ID, Text FROM t WHERE Language = 'ENG') AS eng_table ON i.id = eng_table.id LEFT OUTER JOIN (SELECT ItemID AS ID, Text FROM t WHERE Language = 'SPA') AS spa_table ON i.id = spa_table.id ``` Regards, Bikxs
SQL combine relationship table as separate columns
[ "", "sql", "sql-server", "" ]
Like the title says, I am trying to do a SELECT statement where I am selecting 1 specific record (the one the person searched for) but I also want to get 5 records (the 5 that come before and after the specific one) to the right and left of the specific searched record, while keeping the current/focused record on the searched one. The records ID's (where they search for) consist out of date values combined with numbers. So for example in this month the first created record looks like this `1602030001` `<-- Year(16), month (02), day (03) and first record (0001)`. So the second one they create this month will be `1602030002` and so on. So if they search for record 16020050 (50th record in February) I want to show records 16020044 to 16020055. I already did something like this but it has a downside: Get the searched record (16020050) and in VBA set a max and min variable (int) to do -5 and + 5 to the searched record number. Then use these variables to do a SELECT like: `SELECT * FROM SearchRecord WHERE ID > MinVariable (16020044) AND ID < MaxVariable (16020055)` However, the downside to this is that when the month February for example has only 500 records so 16020500 and then march started (16030001) and the person is searching for the 500th record on February it would also search for records that don't exist (up to 505 while that 5 are already in March --> 16030005). I don't know a more effective/smarter way to get those 5 extra record to each side and also maintain logic that if you search for the last record in a specific month that the 5 records to the right will start in the next month. I hope it makes sense;)
For a table named [MyTable] ``` ID -------- 16020493 16020494 16020495 16020496 16020497 16020498 16020499 16020500 16030001 16030002 16030003 16030004 16030005 16030006 16030007 ``` we can get the starting ID of the range centred around ID=16020500 by using the query ``` SELECT MIN(ID) AS StartID FROM ( SELECT TOP 6 ID FROM MyTable WHERE ID <= 16020500 ORDER BY ID DESC ) ``` which returns ``` StartID -------- 16020495 ``` and we can simply use that query as a subquery in the WHERE clause of a query to return the 11 rows (5 before + the main one + 5 after): ``` SELECT TOP 11 * FROM MyTable WHERE ID >= ( SELECT MIN(ID) AS StartID FROM ( SELECT TOP 6 ID FROM MyTable WHERE ID <= 16020500 ORDER BY ID DESC ) ) ORDER BY ID ``` returning ``` ID -------- 16020495 16020496 16020497 16020498 16020499 16020500 16030001 16030002 16030003 16030004 16030005 ```
I have to imagine this is won't be fast though it is a fairly elegant way to think about the problem. This was more common when `top` didn't exist. The initial filter should help restrict the search to a window of ids around the previous two months (assuming there are always at least five records in a month). ``` select * from SearchRecord sr1 where id between [Id:] / 10000 * 10000 - 9999 and [Id:] / 10000 * 10000 + 10005 and (select count(*) from SearchRecord sr2 where sr2.id between sr1.id and [Id:]) <= 6 and (select count(*) from SearchRecord sr3 where sr3.id between [Id:] and sr1.id) <= 6 ```
Is there a (effective) way to get 5 records to the left and right of a specific record with Access SQL?
[ "", "sql", "vba", "ms-access", "" ]
I have a table that contains many rows such as: ``` id file ----------------------------------- 1 7-Zip Portable 4.42 2 7-Zip Portable 4.42 Rev 2 3 7-Zip Portable 4.57 4 7-Zip Portable 4.62 5 7-Zip Portable 4.64 6 SeaMonkey Portable 2.35 7 SeaMonkey Portable 3 8 SeaMonkey Portable 4.11 ``` I want related rows when search for "7-Zip Portable 4.42" ``` id file ----------------------------------- 1 7-Zip Portable 4.42 2 7-Zip Portable 4.42 Rev 2 3 7-Zip Portable 4.57 4 7-Zip Portable 4.62 5 7-Zip Portable 4.64 ``` How can I do it in sql ?
I got an answer ``` SELECT * FROM `post` WHERE `title` LIKE '%7-Zip%' OR '%Portable%' OR '%4.42%' ```
You could use `LIKE` ``` SELECT * FROM table_name WHERE file LIKE '7-Zip Portable%' ```
How to find related rows in sql
[ "", "sql", "" ]
[![enter image description here](https://i.stack.imgur.com/q6UiH.png)](https://i.stack.imgur.com/q6UiH.png) I want all item\_nos not present in location 100, THat is 1003,1005,1006,1007. What would be the query? WHat I was trying ``` Select distinct item_no from location where location_no not in (100) ``` But this returns 1002 as well since it has another location 200.
You can use `group by` and `having` for this: ``` select l.item_no from location l group by l.item_no having sum(case when l.location = 100 then 1 else 0 end) = 0; ``` The `having` clause counts the number of times each item is in location 100. The `= 0` specifies that it is never in that location. Assuming you have an `items` table, you can also do: ``` select i.* from items i where not exists (select 1 from location l where l.item_no = i.item_no and l.location = 100 ); ``` With an index on `location(item_no, location)` this should be more efficient.
You can also use NOT IN statement to check what are the item\_no that have location = 100 and select everyone besides them ``` SELECT distinct t.item_no from Location t WHERE t.item_no NOT IN(select distinct s.item_no from Location s where s.location_no = 100) ```
Oracle Unique items not in a particular location
[ "", "sql", "oracle", "oracle11g", "" ]
The relevant part of my database schema looks like this (Ruby on Rails migration code, but should be easy to read): ``` create_table "team_memberships" do |t| t.integer "team_id" t.integer "user_id" end create_table "users" do |t| t.integer "id" t.string "slug" end create_table "performance_points" do |t| t.integer "id" t.integer "user_id", t.date "date", t.integer "points", t.integer "team_id" end ``` I want a query that returns a list of users sorted by the total amount of performance points they have received since a certain date. Note that one "performance\_points" row does not equal one point, we need to sum the "points" The query I have so far looks like this: ``` SELECT u.id, u.slug, SUM(pp.points) AS total FROM users u JOIN performance_points pp ON pp.user_id = u.id JOIN team_memberships tm ON tm.team_id = pp.team_id AND tm.user_id = pp.user_id WHERE (pp.date > '2015-08-02 13:57:14.042221') GROUP BY pp.id, u.id ORDER BY total DESC LIMIT 50 ``` The first three results are: ``` "id","slug","total" 32369,"andreas-jensen-9de10dec-0f88-427f-b135-62cebea611c8",245 23752,"kenneth-kjaerstad",95 34179,"marius-mork-rydal",93 ``` To check that results are correct I count the points for each user. However the second one seems to be wrong. I run this query with Kenneth's id: ``` SELECT SUM(performance_points.points) FROM performance_points WHERE performance_points.user_id = 23752 AND (date > '2015-08-02 13:57:14.042221') ``` I get: `84`. Looking at all Kenneth's performance points with: ``` SELECT performance_points.points FROM performance_points WHERE performance_points.user_id = 23752 AND (date > '2015-08-02 13:57:14.042221') ``` We get: ``` "points" -10 1 -2 95 ``` -10 + 1 - 2 + 95 is indeed 84 so I dunno whats going on with the first query. Why is the total 95? I'm running PostgreSQL version 9.3.5
I have discovered that there actually wasn't a problem with the query, but with the data. There were some users who were on multiple teams more than once and that have issues.
If `slug` is unique per user: ``` SELECT u.id, u.slug, SUM(pp.points) AS total FROM users u JOIN performance_points pp ON u.id = pp.user_id WHERE pp.date > '2015-08-02 13:57:14.042221' GROUP BY u.id, u.slug ORDER BY total DESC LIMIT 50 ``` Otherwise you can't `SELECT` `slug` because it's not a grouping column, so there are multiple values of it in each group. You want to `GROUP BY user_id` in `performance_points` to get `total` per `user_id` then `JOIN` with `users` to get `slug`s. ``` SELECT id, slug, total FROM users JOIN ( SELECT user_id, SUM(points) AS total FROM performance_points WHERE date > '2015-08-02 13:57:14.042221' GROUP BY user_id) t ON id = user_id ORDER BY total DESC LIMIT 50 ``` (It's not clear why you are `JOIN`ing with `team_membership`. Presumably `performance_points (user_id,team_id)` is a foreign key into it, ie all such pairs are already in it.)
Issues with SQL query using SUM
[ "", "sql", "postgresql", "" ]
I have table like this below where it save all data with reference id :- ``` ID Name RefID A AAAA null B BBBB A C CCCC B D DDDD C E EEEE D F FFFF E ``` Firstly I need some query where I can scan which are the top three reference id level only and show up. --- Example: When D added below C --- ``` GLevel ID RefID No Start A (3) Level(1) B (A) (2) Level(2) C (B) (1) Level(3) D (C) ``` Result (Down To Top): A,B and C are top three reference id for D Result (Top to Down): A have B,C and D as child based refid When E has enter below D, then A are not valid reference id for E. I need to show top three reference id for E. --- Example: When E added below D --- ``` GLevel Id RefID No Start A Level(1) B (A) (3) Level(2) C (B) (2) Level(3) D (C) (1) Level(4) E (D) ``` Result (Down To Top): D,C and B are top three reference id for E Result (Top to Down): B have C,D and E as child based refid As a final result I need to rearrange and store my compute view for each ID till Group Level 3 only like below:- ``` id GroupLevel RefID A Level(1) B A Level(2) C A Level(3) D B Level(1) C B Level(2) D B Level(3) E C Level(1) D C Level(2) E D Level(1) E E null ``` Who have cross this problem please help me how to solve it. Anyway thanks on advance for reading this question.
Postgreql version (|| instead of '+' for concatenation and quoted level alias): ``` select ID, 'Level(' || cast(level as varchar(255)) || ')', RefID from ( select t0.ID, case level.level when 1 then t1.ID when 2 then t2.ID when 3 then t3.ID end RefID, level.level from Test t0 left outer join Test t1 on t1.RefID = t0.ID left outer join Test t2 on t2.RefID = t1.ID left outer join Test t3 on t3.RefID = t2.ID cross join ( select 1 "level" union all select 2 union all select 3 ) level ) t where t.RefID is not null ```
``` create table Test( ID varchar(10), Name varchar(10), RefID varchar(10) ) insert into Test select 'A','AAAA',null union all select 'B','BBBB','A' union all select 'C','CCCC','B' union all select 'D','DDDD','C' union all select 'E','EEEE','D' --union all --select 'F','FFFF','E' select ID, 'Level(' + cast(level as varchar(255)) + ')' GroupLevel, RefID from ( select t0.ID, case level.level when 1 then t1.ID when 2 then t2.ID when 3 then t3.ID end RefID, level.level from Test t0 left outer join Test t1 on t1.RefID = t0.ID left outer join Test t2 on t2.RefID = t1.ID left outer join Test t3 on t3.RefID = t2.ID cross join ( select 1 level union all select 2 union all select 3 ) level ) t where t.RefID is not null ``` Output: ``` ID | GroupLevel | RefID | ---+------------+-------+ A | Level(1) | B | A | Level(2) | C | A | Level(3) | D | B | Level(1) | C | B | Level(2) | D | B | Level(3) | E | C | Level(1) | D | C | Level(2) | E | D | Level(1) | E | ``` Not sure that I understand what should happen with F. Also, can't see any logic behind 'E null' row in your desired output
How To Query Top Three Reference ID only and How I can query until 3 Level Group below only Based Reference ID
[ "", "sql", "postgresql", "relationship", "" ]
The original format: ``` 123-456-7890 ``` My goal format: ``` (123)456-7890 ``` I wanted to go the route of concatenating between substrings but I continually get flagged for errors. I am unsure of a better method to go about implementing a way to format. My query: ``` select || '(' || substr(telephone,0, 3)|| ')' || substr(telephone,4, 3)|| ' '|| substr(telephone,7, 4)) as telephone, from book; ``` My current error: ``` "missing expression" ```
You have an extra dangling parenthesis at the end of your `SELECT`, and you also have a dangling concatenation operator `||` in the front. Try this: ``` SELECT '(' || SUBSTR(telephone, 0, 3) || ')' || SUBSTR(telephone, 4, 3) || ' ' || SUBSTR(telephone, 7, 4) AS telephone FROM book ``` **Update:** You should really use this query, because it turns out you also had a problem with forming your desired output as well: ``` SELECT '(' || SUBSTR(telephone, 1, 3) || ')' || SUBSTR(telephone, 5, 8) AS telephone FROM book ```
Using **SUBSTR**: ``` SQL> WITH sample_data AS( 2 SELECT '123-456-7890' num FROM dual 3 ) 4 -- end of sample_data mimicking real table 5 SELECT num, 6 '(' 7 || SUBSTR(num, 1, 3) 8 || 9 ')' 10 || SUBSTR(num, 5, 8) AS my_num 11 FROM sample_data; NUM MY_NUM ------------ --------------- 123-456-7890 (123)456-7890 SQL> ``` Remember, the **index** for **SUBSTR** starts from `1`. It is bad practice to use `0` as starting index. You could also do it using **REGEXP\_REPLACE**. Pattern: `(\d{3})(-)(\d{3})(-)(\d{4})` Expression: `regexp_replace(num, '(\d{3})(-)(\d{3})(-)(\d{4})', '(\1)\3\4\5')` For example, ``` SQL> WITH sample_data AS( 2 SELECT '123-456-7890' num FROM dual 3 ) 4 -- end of sample_data mimicking real table 5 SELECT num, 6 regexp_replace(num, '(\d{3})(-)(\d{3})(-)(\d{4})', '(\1)\3\4\5') my_num 7 FROM sample_data; NUM MY_NUM ------------ --------------- 123-456-7890 (123)456-7890 SQL> ```
Oracle SQL inserting parenthesis into phone number
[ "", "sql", "oracle", "formatting", "substring", "concatenation", "" ]
I am looking to reconcile data from 2 different tables where I need to carry out a concatenation and substr to create columns that I can use to carry out a match against.The following separate queries reflect the select statements from each table that produces the matching values that reflect `sitenn.zonenn` (e.g. `site12.zone20`) as nodename. ``` SELECT distinct(REGEXP_SUBSTR(B.NODE_NAME,'*site*.*')) as nodename FROM OPC_ACT_MESSAGES A,OPC_NODE_NAMES B WHERE A.MESSAGE_GROUP = 'Ebts_Status_Alarms' AND A.SEVERITY <> 2 AND A.NODE_ID = B.NODE_ID; SELECT 'site'||site_id||'.zone'||zone_id as nodename FROM aw_active_alarms GROUP BY site_id,zone_id; ``` I need to write a query that select all nodenames from one table that do not exist in the other.
Use `left join` to find it. It is faster than `minus`,`not in`,`not exists` etc. ``` SELECT a.nodename FROM (SELECT DISTINCT( regexp_substr(B.node_name, '*site*.*') ) AS nodename FROM opc_act_messages A, opc_node_names B WHERE A.message_group = 'Ebts_Status_Alarms' AND A.severity <> 2 AND A.node_id = B.node_id ) a LEFT JOIN (SELECT 'site' || site_id || '.zone' || zone_id AS nodename FROM aw_active_alarms GROUP BY site_id, zone_id ) b ON a.nodename = b.nodename WHERE b.nodename IS NULL ```
One simple way: use `MINUS` ``` SELECT distinct(REGEXP_SUBSTR(B.NODE_NAME,'*site*.*')) as nodename FROM OPC_ACT_MESSAGES A,OPC_NODE_NAMES B WHERE A.MESSAGE_GROUP = 'Ebts_Status_Alarms' AND A.SEVERITY <> 2 AND A.NODE_ID = B.NODE_ID MINUS SELECT 'site'||site_id||'.zone'||zone_id as nodename FROM aw_active_alarms GROUP BY site_id,zone_id; ```
Oracle sql join against extracted values
[ "", "sql", "oracle", "" ]
Using PIVOT/UNPIVOT I was able to get from here... ``` Period Store1 Store2 -------------------------------------- Jan15 123 456 Feb15 789 101 Mar15 112 131 Apr15 415 161 ``` ...to here, dynamically (no matter how many stores my current script successfully PIVOTs to the below set... ``` Store Jan15 Feb15 Mar15 Apr15 --------------------------------------------------------------- Store1 123 789 112 415 Store2 456 101 131 161 ``` using this script: ``` DECLARE @colsUnpivot AS NVARCHAR(MAX), @colsPivot as NVARCHAR(MAX), @query AS NVARCHAR(MAX) /* @colsUnpivot gets list of Stores */ select @colsUnpivot = COALESCE(@colsUnpivot +', ', '') + QUOTENAME(A.name) from (select name from sys.columns where object_id = object_id('mytable') and name <> 'Period') A /* @colsPivot gets list of Periods */ select @colsPivot = COALESCE(@colsPivot +', ', '') + QUOTENAME(B.Period) from (select distinct period from StoreMetrics) B set @query = 'select store, '+@colsPivot+' from ( select period, store, value from mytable unpivot ( value for store in ('+@colsUnpivot+') ) unpiv ) src pivot ( max(value) for period in ('+@colsPivot+') ) piv' exec(@query) ``` ...however, this is the actual expected result set: ``` Store Period Value -------------------------------------- Store1 Jan15 123 Store1 Feb15 789 Store1 Mar15 112 Store1 Apr15 415 Store2 Jan15 456 Store2 Feb15 101 Store2 Mar15 131 Store2 Apr15 161 ``` Either from the original dataset or my first pivot result, how can I dynamically collapse (has to be dynamic since period entries will keep changing) all columns into row entries for each combination of element/period?
One possible solution, without pivot/unpivot: ``` create table #tab( Store nvarchar(50), period nvarchar(50), value int ) declare @ColumnName table(columnName nvarchar(50)) insert into @ColumnName (columnName) select A.name from ( select name from sys.columns where object_id = object_id('mytable') and name <> 'Period' ) A declare @Column nvarchar(50), @sql nvarchar(4000) while (select count(*) from @ColumnName) > 0 begin set @Column = (select top 1 columnName from @ColumnName); set @sql = ' insert into #tab select ''' + @Column + ''',Period , sum(' + @Column + ') from mytable group by ' + @Column + ',Period' exec(@sql); delete from @ColumnName where columnName = @Column end select * from #tab drop table #tab ```
Easy way of doing it would just be to union your original: ``` DECLARE @t AS TABLE(period VARCHAR(15), store1 INT, store2 INT) INSERT INTO @t ( [period], [store1], [store2] ) VALUES ('Jan15',123,456), ('Feb15',789,101), ('Mar15',112,131) SELECT T1.[period], 'Store1' AS Store, [T1].[store1] AS Value FROM @t AS T1 UNION SELECT T1.[period], 'Store2', [T1].[store2] FROM @t AS T1 ORDER BY Store ``` Gives: ``` period Store Value Feb15 Store1 789 Jan15 Store1 123 Mar15 Store1 112 Feb15 Store2 101 Jan15 Store2 456 Mar15 Store2 131 ``` And then it's just a matter of sorting your result which you can do by adding an integer sortkey based on Period which you can sort by.
TSQL Pivot - Collapsing Columns to Rows
[ "", "sql", "sql-server", "t-sql", "pivot", "unpivot", "" ]
I have a table with effective start date and end date that is history data for employees. This table should contain all the details of th employee. I need to find out through a query wherever there are breaks between dates. For example : ``` Table ABC : EMP_NO EFF_START_DATE EFF_END_DATE ORG NAME DOB STATUS 1 01-JAN-2010 28-MAR-2010 XYZ SMITH 10-JAN-1990 SINGLE 1 29-MAR-2010 29-AUG-2010 XYZ SMITH 10-JAN-1990 MARRIED 1 20-OCT-2010 31-DEC-4712 XYZ SMITH 10-JAN-1990 DIVORCEE 2 04-FEB-2010 28-MAR-2010 XYZ JOHN 10-JAN-1991 SINGLE 2 29-MAR-2010 31-DEC-4712 XYZ JOHN 10-JAN-1991 MARRIED 3 02-FEB-2010 21-MAR-2010 XYZ GEETA 10-JAN-1991 SINGLE 3 29-MAR-2010 31-DEC-4712 XYZ GEETA 10-JAN-1991 MARRIED ``` Now for EMP NO 1 and 3 there is a gap. For example for emp no 1 after 29-aug-2010 and before 20-oct-2010 there should have been a record. similarly in emp no. 3 there should have been a records betwwen 21-mar-2010 to 29-mar-2010. what query can i write for this
You can also find the solution by comparing, for each row, the start\_date with the previous end\_date for the same employee: ``` select src.*, src.start_date - src.prev_end_date gap_days from ( select out_tab.emp_no, (select max(in_tab.end_date) from ABC in_tab where in_tab.emp_no = out_tab.emp_no and in_tab.end_date < out_tab.start_date) prev_end_date, out_tab.start_date from ABC out_tab ) src where start_date - prev_end_date > 1; EMP_NO PREV_END_DATE START_DATE GAP_DAYS ---------- ------------- ---------- ---------- 1 29-AUG-10 20-OCT-10 52 3 21-MAR-10 29-MAR-10 8 ```
It is a typical **[Gaps and Islands](https://lalitkumarb.com/category/gaps-and-islands/)** problem. You need to find out the **missing values in the sequence of dates**. For example, ``` SQL> WITH sample_data(dates) AS( 2 SELECT DATE '2015-01-01' FROM dual UNION 3 SELECT DATE '2015-01-02' FROM dual UNION 4 SELECT DATE '2015-01-03' FROM dual UNION 5 SELECT DATE '2015-01-05' FROM dual UNION 6 SELECT DATE '2015-01-06' FROM dual UNION 7 SELECT DATE '2015-01-07' FROM dual UNION 8 SELECT DATE '2015-01-10' FROM dual UNION 9 SELECT DATE '2015-01-11' FROM dual UNION 10 SELECT DATE '2015-01-12' FROM dual UNION 11 SELECT DATE '2015-01-13' FROM dual UNION 12 SELECT DATE '2015-01-20' FROM dual 13 ) 14 -- end of sample_data mimicking real table 15 SELECT MIN(missing_dates), 16 MAX(missing_dates) 17 FROM 18 (SELECT missing_dates, 19 missing_dates - row_number() OVER(ORDER BY missing_dates) rn 20 FROM 21 (SELECT min_date - 1 + LEVEL missing_dates 22 FROM 23 ( SELECT MIN(dates) min_date , MAX(dates) max_date FROM sample_data 24 ) 25 CONNECT BY level <= max_date - min_date + 1 26 MINUS 27 SELECT dates FROM sample_data 28 ) ) 29 GROUP BY rn ORDER BY rn; MIN(MISSING_DATES) MAX(MISSING_DATES) ------------------ ------------------ 2015-01-04 2015-01-04 2015-01-08 2015-01-09 2015-01-14 2015-01-19 ``` --- **NOTE** The **WITH** clause is only for building sample data for demonstration **because you did not provide the create and insert statements**. In reality you need to use your own **table\_name** instead of **sample\_data**.
QUERY to find gap in dates in a table
[ "", "sql", "oracle", "gaps-and-islands", "" ]
I have a table `dictionary` which contains a list of words Like: ``` ID|word --------- 1|hello 2|google 3|similar ... ``` so i want if somebody writes a text like "helo iam looking for simlar engines for gogle". Now I want to check every word if it exists in the database, if not it should get me the similar word for the word. For example: `helo = hello, simlar = similar, gogle = google`. Well, i want to fix the spelling errors. In my database i have a full dictionary of all english words. I coudn't find any mysql function which helps me. `LIKE` isn't helpfull in my situation.
There is a function that does roughly want you want, but it's intensive and will slow queries down. You might be able to use in your circumstances, I have used it before. It's called Levenshtein. You can get it here [How to add levenshtein function in mysql?](https://stackoverflow.com/questions/13909885/how-to-add-levenshtein-function-in-mysql)
you can use [soundex()](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_soundex) function for comparing phonetically your query should be something like: ``` select * from table where soundex(word) like soundex('helo'); ``` and this will return you the `hello` row
MYSQL search for right words | fixing spelling errors
[ "", "mysql", "sql", "spelling", "" ]