" The article Daughter Duo is Dancing in The Same Company was written on December 11 2017, in the domain of entertainment, which states: There's a surprising twist to Regina Willoughby's last season with Columbia City Ballet: It's also her 18-year-old daughter Melina's first season with the company. Regina, 40, will retire from the stage in March, just as her daughter starts her own career as a trainee. But for this one season, they're sharing the stage together. Performing Side-By-Side In The Nutcracker Regina and Melina are not only dancing in the same Nutcracker this month, they're onstage at the same time: Regina is doing Snow Queen, while Melina is in the snow corps, and they're both in the Arabian divertissement. "It's very surreal to be dancing it together," says Regina. "I don't know that I ever thought Melina would take ballet this far." Left: Regina and Melina with another company member post-snow scene in 2003. Right: The pair post-snow scene in 2017 (in the same theater) Keep reading at dancemagazine.com. "
" The article New York City Ballet Announces Interim Leadership Team was written on December 11 2017, in the domain of entertainment, which states: The New York City Ballet Board of Directors announced on Saturday the interim team that has been appointed to run the artistic side of the company during ballet master in chief Peter Martins' leave of absence. Martins requested a temporary leave from both NYCB and the School of American Ballet last Thursday while the company undergoes an internal investigation into the sexual harassment accusations aimed at him. The four-person group is made up of members of the company's current artistic staff, led by ballet master and former principal dancer Jonathan Stafford. Joining Stafford are NYCB resident choreographer and soloist Justin Peck and ballet masters Craig Hall and Rebecca Krohn, both former dancers with the company. While the members of this group haven't had much leadership experience, their close familiarity with the company (Krohn left the stage for her new role just two months ago) should help to ease the dancers' transition. The team will be responsible for the day-to-day artistic needs of the company including scheduling, casting and conducting rehearsals. While there's no word yet on the length of their tenure, we'll continue to keep you updated as the story surrounding Martins unfolds. "
" The article Watch Pennsylvania Ballet & Boston Ballet Face Off for the Super Bowl was written on February 2 2018, in the domain of entertainment, which states: The Philadelphia Eagles and the New England Patriots aren't the only teams bringing Super Bowl entertainment this week. To celebrate game day (and cheer on their region's respective teams), the dancers of Pennsylvania Ballet and Boston Ballet took a break from their usual rehearsals to perform some Super Bowl-themed choreography. Dressed in their Eagles green, the PAB dancers performed a fast-paced routine full of fouetté turns, sky-high jumps and some swan arms (because they're known as the birds, get it?). But Boston Ballet also decided to get in on the fun—with five Super Bowl wins, they're used to seeing their team in the big game. Sharing their own video on Facebook, which stars principal Paul Craig and soloist Derek Dunn, Boston Ballet threw in a few Balanchine tricks thanks to some props from Prodigal Son. This is officially our new favorite way to get in on the football fun. "
" The article dance shoes was written on April 24 2018, in the domain of entertainment, which states: Looking for your next audition shoe? Shot at and in collaboration with Broadway Dance Center, Só Dança has launched a new collection of shoes working with some pretty famous faces of the musical theater world! Offered in two different styles and either 2.5" or 3" heels, top industry professionals are loving how versatile and supportive these shoes are! Pro tip: The heel is centered under the body so you can feel confident and stable! "
" The article Rebecca Krohn on Her Retirement from New York City Ballet was written on October 6 2017, in the domain of entertainment, which states: New York City Ballet principal dancer Rebecca Krohn will take her final bow with the company this Saturday night. Krohn joined NYCB as an apprentice in the fall of 1998 and slowly rose through the ranks, becoming a principal in 2012. Though Krohn is best known for her flawless execution of classic Balanchine leotard ballets, her repertoire is vast, spanning Jerome Robbins to Justin Peck. After dancing Stravinsky Violin Concerto with Amar Ramasar on Saturday, Krohn will return to the NYCB studios on Monday in a new role: ballet master. We had the chance to talk to the thoughtful and eloquent dancer about her time with the company and goals for the future. Was New York City Ballet always your dream company? As soon as I knew I wanted to be a professional dancer, I knew that I wanted to be in New York City Ballet. I moved to New York when I was 14 to train at the School of American Ballet, and I got my apprenticeship with the company when I was 17, so it was really a dream come true. Krohn and Adrian Danchig-Waring in Balanchine's Stravinsky Violin Concerto. Video Courtesy NYCB. What have been your favorite ballets or roles to dance? Balanchine's Stravinsky Violin Concerto, which I'll dance for my final show, has always been a favorite, as well as Balanchine's Movements for Piano and Orchestra and Agon. Also Robbins' Dances at a Gathering... there are so many, it's hard to choose! I've always really loved the Balanchine black and white ballets, and there are some Robbins ballets that are always so fulfilling. Can you think of a favorite moment with the company? After almost 20 years there are countless things. In general I would say the time that I've had onstage with some of my friends and dancing partners has been so special. It's one thing to be a friend with someone and another to also share the stage with them. There's just an amazing sense of trust and spontaneity; I feel so connected when I'm out there. That's something I'll never forget. What's the main way that your experience in the company has changed over the years? As I was getting older the company all of a sudden started to seem younger and younger. When I became a soloist and especially a principal my relationship with the corps de ballet dancers shifted. I wanted to be someone that the young dancers could look up to; I wanted to reach out and connect to them more, and to offer support and advice. Krohn and Amara Ramasar in Balanchine's "Movements for Piano and Orchestra." Photo by Paul Kolnik, Courtesy NYCB. Did you always know that you wanted to stay on with the company? It had been in the back of my mind for a number of years, but I didn't really address it formally until a year ago. I spoke to Peter (Martins), just to kind of let him know what I had been thinking. I wanted to hear how he felt about it, which was actually a little nerve-wracking, but he thought it was a great idea. What are you most looking forward to in your new role? I'd like to nurture them and their talents; I'm always amazed to see how talented everyone is. The ballets that we have in our repertoire are so amazing—it's a great honor to be able to carry them on with the new dancers for the future. Krohn with Robert Fairchild in Justin Peck's Everywhere We Go. Video Courtesy NYCB. Is there someone who's teaching style or mentorship style you'd most like to emulate? 
There are a couple of ballet masters that I've connected to. I'm very close to Karin von Aroldingen. Her undying passion for these pieces is incredibly inspiring. Susan Hendl has also been an inspiration. She has a wonderful talent of drawing out everyone's unique qualities and femininity. What parts of your life outside of ballet do you most look forward to cultivating now that you'll have more time on your hands? I'm looking forward to having more time to enjoy museums in the city. While I was dancing I didn't want to be up on my legs all day on my days off. I won't have to worry about that so much now, and I can spend my day off roaming around and being inspired. I also love to cook, so I'll get to cook a lot more and hopefully host more dinner parties. Krohn and Company in Balanchine's "Serenade." Photo by Paul Kolnik, Courtesy NYCB. Do you have a piece of advice for young dancers who are just starting out? What's so special about ballet is the discipline that it instills. It's important for young dancers to really understand that that is what's taught to you in ballet class every day. It's an invaluable quality for a person to have, whether they continue to dance or end up doing other things. My other piece of advice is that you have to treat each day as a new start. Some days you might not feel good about yourself, or things in your body might not be working well—every day is different. But you have to start fresh, be positive and move forward. "
" The article Roy Kaiser to Become Nevada Ballet Theatre's New Artistic Director was written on October 6 2017, in the domain of entertainment, which states: In 2014 the dance world was surprised when longtime Pennsylvania Ballet artistic director Roy Kaiser stepped down. It was announced yesterday that Kaiser will be rising to the helm again as the Las Vegas-based Nevada Ballet Theatre's new artistic director, replacing James Canfield. Kaiser will be the fourth artistic director in NBT's 46 year history. The company will be gaining a highly experienced leader. Following his rise through the ranks to principal dancer at Pennsylvania Ballet, Kaiser worked as a ballet master and eventually took the reigns as the company's artistic director in 1995. Pennsylvania Ballet added 90 new ballets and 35 world premieres to their repertoire under his leadership. Roy Kaiser with Pennsylvania Ballet Dancers. Photo by Alexander Iziliaev, Courtesy Nevada Ballet Theatre. NDT is the largest professional ballet company and dance academy in the state, with 35 company dancers and a vibrant school. Kaiser will hit the ground running with the company's 10th Anniversary Celebration of A Choreographer's Showcase, its collaboration with Cirque du Soleil, opening this weekend. In November the company will reference Kaiser's Balanchine roots with a program titled Classic Americana featuring Serenade and Western Symphony, as well Paul Taylor's Company B. "
" The article Nutcracker Secrets and Surprises was written on December 11 2017, in the domain of entertainment, which states: Literary Roots E.T.A. Hoffmann, a German writer, penned the eerie and dark tale "Nutcracker and Mouse King" in 1816. About 30 years later, the French writer Alexandre Dumas took the Nutcracker story into his own hands, lightening things up and softening the character descriptions. Dumas even cheered up the name of the protagonist. "Marie Stahlbaum" (meaning "steel tree," representing the repressive family Marie found herself in, which led her imagination to run wild) became "Clara Silberhaus" (translated to "silver house," a magnificent home filled with shiny magic.) Snowflakes of the original cast, "The Nutcracker" at the Mariinsky Theatre, 1892. Photo by Walter E. Owen, Courtesy Dance Magazine Archives. From Page to Stage In 1892 St. Petersburg, choreographer Marius Petipa and composer Pyotr Ilyich Tchaikovsky pulled the story off the page and onto the stage of the Mariinsky Theatre. But Petipa fell ill while choreographing The Nutcracker and handed his duties over to his assistant, Lev Ivanov. Critics at the 1892 premiere were not pleased. Balletomanes felt the work to be uneven, and lamented the lack of a main ballerina in the first act. Many thought that the story was too light compared to historically based stories. Out of Russia Despite its initial reception, the ballet survived, partially due to the success of Tchaikovsky's score. Performances were scarce, though, as the Russian Revolution scattered its original dancers. The Nutcracker's first major exposure outside of Russia took place in London in 1934. Former Mariinsky ballet master Nikolas Sergeyev was tasked with staging Petipa's story ballets on the Vic-Wells Ballet (today The Royal Ballet) from the original notation. The notes were incomplete and difficult to read, yet Sergeyev persisted, and The Nutcracker made it to the stage. Dancers from ballet Russe de Monte Carlo in "The Nutcracker" pas de deux. Photo Courtesy Dance Magazine Archives. An American Premiere The Ballet Russe de Monte Carlo brought an abridged version of The Nutcracker to the U.S. in 1940. Over the next decade, the company toured the ballet extensively, exposing it to audiences nationwide. Willam Christensen (center) with his brothers Lew and Harold. Photo Courtesy San Francisco Ballet. Across the Country… In 1944, San Francisco Ballet founding artistic director Willam Christensen choreographed the U.S.'s first full-length Nutcracker. Christensen later founded Ballet West, which continues to perform his version of The Nutcracker each year. Balanchine rehearsing the snow scene with NYCB. Photo by Frederick Melton, Courtesy Dance Magazine Archives. A Christmas Staple Though the ballet's popularity was already growing, some historians suggest that George Balanchine was the first to irretrievably link the work to the holidays. As dance critic Robert Greskovic puts it, Balanchine was "responsible for making the ballet a fixture of the Christmas season and of a ballet company's repertory." New York City Ballet first presented Balanchine's Nutcracker in February of 1954 but quickly recognized its holiday appeal and moved the ballet to December for the following year. Nutcracker All Over As regional ballet companies sprouted around the country, The Nutcracker became a staple.Today it's a holiday tradition that keeps families coming back year after year; its mass appeal keeps ballet in mainstream culture. 
Many companies attract audiences by infusing the classic with their own regional heritage: Christopher Wheeldon's Nutcracker for the Joffrey Ballet is set at Chicago's 1893 world's fair and The Washington Ballet serves a dose of American history with characters such as George Washington and King George III. George Washington in The Washington Ballet's Nutcracker." Photo by Carol Pratt, Courtesy The Washington Ballet. The Nutcracker also serves as the financial backbone of companies nationwide. Last year San Francisco Ballet sold a total of 87,926 tickets to the holiday ballet and Boston Ballet sold a total of 92,907. Despite its humble roots, The Nutcracker is now the show that companies rely on to put on inventive and cutting-edge works throughout the rest of the year. More secrets and surprises… According to dance historian Doug Fullington, in the original 1892 scenario the Nutcracker has two sisters who graciously welcome Clara to the Land of Sweets with warm hugs. Pennsylvania Ballet's Craig Wasserman in the Candy Cane variation. Photo by Alexander Iziliaev, Courtesy Pennsylvania Ballet. The Candy Cane variation (danced to the Russian Trepak music) was choreographed by its original 1892 dancer, Alexandre Shiryaev. Dance critic Mindy Aloff says that Shiryaev was "possibly the first practitioner of hand-drawn animation; he notated his choreography in sequential drawings that could be projected to show the dance in movement." Balanchine included Shiryaev's original choreography in his Nutcracker. The ethereal twinkling sound in the Sugar Plum Fairy's solo comes from the celesta, a rare instrument Tchaikovsky heard in France. "He had one sent to him essentially in secret," says Fullington. Balanchine was given a budget of $40,000 for his 1954 premiere and, according to Aloff, he spent $25,000 on the Christmas tree alone. When asked if he could do without the tree Balanchine responded, "[The ballet] is the tree." Today, New York City Ballet's tree weighs one ton and can reach a full height of 41 feet. 1892 "Nutcracker" costume sketch by Ivan Vsevolozhsky of the Sugar Plum Fairy's retinue. Courtesy Peter Koppers. Choreographic notations suggest that the Cavalier's variation was originally danced by a retinue of eight female fairies representing things like fruit, flowers and dreams. According to Fullington, Pavel Gerdt, the dancer who created the role, was likely too old to dance the variation himself. NYCB's Brittany Pollack and Chase Finlay in the grand pas de deux toe slide. Photo by Paul Kolnik, Courtesy New York City Ballet. In Balanchine's grand pas de deux, the lead ballerina holds an arabesque while gliding across the stage on pointe, pulled by her gallant prince. According to Fullington, Balanchine took this slide from Ivanov's original choreography. The Sugar Plum Fairy's prince's original name was "Prince Coqueluche." Meaning "whooping cough" in French, it likely referred to a lozenge candy. NYCB's Unity Phelan and Silas Farley in Karinska's Hot Chocolate costumes. Photo by Paul Kolnik, Courtesy New York City Ballet. "
" The article Inside the Beijing Dance Academy was written on March 19 2018, in the domain of entertainment, which states: In one of 60 spacious dance studios at the Beijing Dance Academy, Pei Yu Meng practices a tricky step from Jorma Elo's Over Glow. She's standing among other students, but they all work alone, with the help of teachers calling out corrections from the front of the room. On top of her strong classical foundation and clean balletic lines, Pei Yu's slithery coordination and laser-sharp focus give her dancing a polished gleam. Once she's mastered the pirouette she's been struggling with, she repeats the step over and over until the clock reaches 12 pm for lunch. Here, every moment is a chance to approach perfection. Pei Yu came to the school at age 10 from Hebei, a province near Beijing. Now 20, and in her third year of BDA's professional program, she is an example of a new kind of Chinese ballet student. Founded in 1954 by the country's communist government, BDA is a fully state-funded professional training school with close to 3,000 students and 275 full-time teachers over four departments (ballet, classical Chinese dance, social dance and musical theater). It offers degrees in performance, choreography and more. BDA's ballet program has long been known for fostering pristine Russian-style talent. But since 2011, the school has made major efforts to broaden ballet students' knowledge of Chinese dance traditions and the works of Western contemporary ballet choreographers. Pointe went inside this prestigious academy to see how BDA trains its dancers. Getting In BDA's admission process is extremely competitive, despite the school's large numbers. The ballet program is made up of a lower division, lasting seven years, and a four-year professional bachelor program. The professional division's admission procedure is extensive. Every year, hundreds of students ages 16 to 18 audition in Beijing over the course of two days, presenting classical and contemporary variations and improvisational work, and taking an academic exam. "We are looking to produce artists with the technical skills to excel in professional companies and the knowledge to work in all jobs in the field of dance," says the ballet department's executive and artistic director and former National Ballet of China principal Zhirui (Regina) Zou. Nearly 100 are currently enrolled in the professional ballet program. Though the school does admit foreign applicants, it does not host international students very often because the academic entrance exam measures Chinese language proficiency (most classes are taught in Chinese). BDA does participate in exchange programs with ballet schools around the world. A Typical Day Students begin their days with an early 8 am technique class. Following the Vaganova method, classes are strict and focus on precise positions and placement. Upper levels are split to keep class size small—around eight students per class. Teachers correct individual students—usually only the best ones, positioned front and center—using the terms "not good" (bù haˇo) and "better" (gèng haˇo), but rarely awarding praise. The day continues with classical Chinese dance, character, contemporary, repertoire and pas de deux, as well as dance history, anatomy, music appreciation and injury prevention. "Classical Chinese dance is a large part of our identity as Chinese ballet dancers," explains Zou. 
She points out an example from a girls' ballet class, where students circle their heads as if in a reverse renversé during an attitude promenade. "Chinese dance focuses on circular upper-body movements, a unique coordination that complements ballet technique." Rehearsals and classes can end as late as 9 pm. Students live on campus in dormitories; with little free time and all focus placed on their futures, they consider BDA home until graduation. BDA's ballet department in a performance of "La Bayadère." Photo Courtesy BDA. Stage Time Performance is the most important aspect of BDA students' professional development, with annual productions featuring classical ballets, contemporary works and student choreography. Since dancers don't usually audition internationally, these performances are their chance to be discovered—directors from surrounding Chinese companies, including the National Ballet of China, attend in order to scout new talent. As a result, preparation is intense. In a studio rehearsal for La Bayadère, Act II, no understudies are present, and any imperfection is pointed out by one of four coaches at the front of the room. All lines, heads, arms and feet are perfectly placed. Although Pei Yu sparkles in her variation, the other dancers are similarly strong and dedicated. Students run the piece twice for stamina. Between run-throughs, each fastidiously practices difficult sections, never satisfied with the results. Dancers approach more contemporary movement with a mature coordination mirroring many professional dancers. Recent performances have included works by Paul Taylor, Jorma Elo and Christopher Wheeldon; students often get to work with the choreographers directly. Pei Yu learned Over Glow from Elo himself. "He showed us how to handle rhythm with the whole body," she says. "Ballet has so many rules, but contemporary ballet makes me feel excited and free." Sun Jie, a coach and men's teacher at BDA since 2008, explains how introducing works from Western choreographers has broadened the overall abilities of Chinese ballet dancers. "When we started to teach new works at BDA in 2011, students struggled to move freely or adapt to new movement," he says. "But learning these styles over time has opened dancers' eyes to new possibilities." Life After Graduation BDA students enter professional life somewhat older than in the West, with graduates ranging from 20 to 22 years old. Only the most promising students receive company contracts, but others accept teaching and other dance-related posts at BDA and surrounding dance schools and institutions. Although many have won awards at international competitions, the school does not actively focus on competing. "To prepare competitors, so much attention must be placed on individual students, whereas performances encourage the entire student body," says Zou. Even so, competitions have given these students international exposure, though only a small percentage of graduates accept jobs abroad. BDA alumni in American companies include San Francisco Ballet soloists Wei Wang and Wanting Zhao and ABT corps members Zhiyao Zhang and Xuelan Lu. With graduation in sight, Pei Yu shares the same dream as many of her classmates: a spot with the National Ballet of China. P A men's classical Chinese dance class. Photo by Lucy Van Cleef. Beijing's Bournonville Connection Exposure to the Danish Bournonville style is a special component of the diverse ballet education that BDA offers. 
Former Royal Danish Ballet artistic director Frank Andersen has been a guest teacher at the school since 2002, and was awarded a professorship in 2012. So far, BDA students have performed in Bournonville ballets including Napoli's Act III, La Ventana and Conservatory, and some danced in the National Ballet of China's 2015 production of La Sylphide. Thanks to almost 23 years of Andersen's work in Beijing, Bournonville has found a second home in China. Though there are Bournonville technique classes when time allows, Andersen imparts those lessons through the repertoire and Danish mime. "The most important part is making the mime believable," Andersen explains. "Young dancers often have the urge to overact. If I can't describe what I want with words, I have to show them." He holds his hands towards his chest, indicating the sign for "I." "Showing can be more effective than telling. That's the beauty of Bournonville's work. It's so honest." "
" The article dance shoes was written on April 24 2018, in the domain of entertainment, which states: Looking for your next audition shoe? Shot at and in collaboration with Broadway Dance Center, Só Dança has launched a new collection of shoes working with some pretty famous faces of the musical theater world! Offered in two different styles and either 2.5" or 3" heels, top industry professionals are loving how versatile and supportive these shoes are! Pro tip: The heel is centered under the body so you can feel confident and stable! "
" The article Isabella Boylston and James Whiteside Get Hilariously Candid was written on February 2 2018, in the domain of tech, which states: Though American Ballet Theatre principals James Whiteside and Isabella Boylston have long displayed their envy-worthy friendship on Instagram, this week the Cindies (their nickname for each other) offered viewers an even deeper glimpse into their world. While on tour with ABT at the Kennedy Center, the duo sat down in front of the camera to answer some questions from their fans via Facebook Live. Starbucks in hand, they discuss their mutual love of food (particularly pasta and Japanese curry), the story behind the Cindy nickname and what it's like picking up contemporary choreography versus classical. Boylston also delves into her experience guesting with the Paris Opéra Ballet, her dream of choreographing an avant-garde ballet on Whiteside to a Carly Rae Jepsen song and best and worst Kennedy Center memories (like the time she fell onstage while doing fouettés at the end of La Bayadère's first act). Whiteside, on the other hand, imitates a unicorn, talks about preparing for roles and creates a new middle name for Boylston. The twosome also offer heartfelt advice for aspiring professional dancers. Check out the highlights in this video below; for the full 24-minute version, click here. "
" The article Ballet Performances This Week was written on March 19 2018, in the domain of entertainment, which states: What's going on in ballet this week? We've pulled together some highlights. The Bolshoi Premiere of John Neumeier's Anna Karenina Last July Hamburg Ballet presented the world premiere of John Neumeier's Anna Karenina, a modern adaptation on Leo Tolstoy's famous novel. Hamburg Ballet coproduced the full-length ballet with the National Ballet of Canada and the Bolshoi, the latter of which will premiere the work March 23 (NBoC will have its premiere in November). The production will feature Bolshoi star Svetlana Zakharova in the title role. This is especially fitting as Neumeier's initial inspiration for the ballet came from Zakharova while they were working together on his Lady of the Camellias. The following video delves into what makes this production stand out. World Premieres at Richmond Ballet and Ballet Arizona Richmond Ballet's New Works Festival March 20-25 features pieces by four choreographers who have never worked with Richmond Ballet before: Francesca Harper, Tom Mattingly, Mariana Oliveira and Bradley Shelver. But there's a twist: each choreographer had only 25 hours with the dancers to create a 10-15 minute ballet. Meanwhile the Phoenix-based company's spring season opens with Today's Masters 2018, March 22-25. The program includes a company premiere by Alejandro Cerrudo and world premieres by Nayon Iovino and artistic director Ib Andersen. Andersen's Pelvis features dance moves and costumes from the 1950s and references to Elvis Presley (pElvis, anyone?) San Francisco Ballet Honors Robbins Mysterious; romantic; witty; electrifying. That's how SFB describes their upcoming tribute to Jerome Robbins, March 20-25. The company is one of dozens of others honoring Robbins this year; last week we covered Cincinnati Ballet and New York Theatre Ballet. SFB is presenting four works celebrating the famed choreographer's career in ballet and Broadway: Fancy Free, Opus 19/The Dreamer, The Cage and Other Dances. Reid and Harriet Designs at the Guggenheim March 25-27, costume design duo Reid Bartelme and Harriet Jung take the stage as part of Guggenheim Works & Process. The partnership is known for creatively intersecting design and dance; last summer they created a swimwear line based on Justin Peck costumes, and in November they presented their design-driven Nutcracker. For this week's show they collaborated with a long list of choreographers including Lar Lubovitch and Pam Tanowitz to create short works featuring their costumes. A number of dancers including New York City Ballet principal Russell Janzen will be acting as moving models. "
" The article Guillaume Côté on NBoC's "Frame by Frame" was written on May 30 2018, in the domain of entertainment, which states: This week marks the world premiere of Frame by Frame, The National Ballet of Canada's new full length ballet based on the life and work of innovative filmmaker Norman McLaren. While those outside of the cinephile community might not be familiar with McLaren's work, he is commonly credited with advancing film techniques including animation and pixilation in the 20th century—he died in 1987. The Canadian artist's many accolades include a 1952 Oscar for Best Documentary for his abstract short film Neighbours (watch the whole thing here). Later in life, McLaren became interested in ballet, and made a number of dance films including his renowned 1968 Pas de deux. NBoC's new work, titled Frame by Frame, will run June 1-10 in Toronto. The ballet combines vignettes of McLaren's life with movement quotes from his films and real time recreations of his technological advances. It was created in collaboration by NBoC principal dancer and choreographic associate Guillaume Côté and film and stage director Robert Lepage, who is making his NBoC debut. Pointe touched base with Côté on how this interdisciplinary project came together. Where did the initial spark for this idea come from? Robert and I have been wanting to work together for years. I approached him about doing a different project a long time ago, and he said "well, maybe that's not the correct project." He came to me about four years after that and said "I think I finally have something I'd like to work on with you," so then I approached the National Ballet. Were you familiar with McLaren's work before this project? I wasn't. Robert had just worked on a big McLaren documentary, and he got me to come see it and I realized that it was all about movement, and that this animator was basically a choreographer himself, a choreographer of space and time. There was all this material to work with and he'd made a number of iconic dance films, so it seemed like a no brainer. So I started my research and kept finding out more surprising fun facts about McLaren's passion for dance, like that he met Guy Glover, the man that he was in a relationship with for 50 years or so, in the audience at Covent Garden while watching a ballet. Guy was a curator of a dance festival in Canada. Robert Lepage and Guillaume Côté in rehearsal for "Frame By Frame." Photo by Elias Djemil-Matassov, Courtesy NBoC. What was the research process like? Robert had just finished his documentary and has a really deep understanding of this kind of Canadian culture. He'd already gotten incredible footage from the National Film Board of Canada of McLaren behind-the-scenes. I watched a tremendous amount of film, and I read a lot about him and his collaborators, and even met a few who are still alive. The research was truly enriching, because I realized how wonderful of a person he was as well, which dictates how we share his personal life onstage. What was the timeline of the project? We had our first workshop four years ago. Since then we've been doing five-day workshops once a year in Robert's studio, Ex Machina, where he has a multi media team. They would put together projections and props for us to experiment with. Robert would give me some homework, and I'd take it on myself to create some impressionist sections. Sometimes we decided that they were great, and sometimes we decided that they weren't. 
It was this really collaborative way of getting things started, because I was able to present dance first and then we were able to add technology to it, as opposed to the technology taking over and stealing from the dance. Artists of the Ballet in rehearsal for "Frame by Frame." Photo by David Leclerc, Courtesy NBoC. How does the piece balance a narrative retelling of McLaren's life with reproductions of his works? I would say that the vignettes of his life are one third; the second third is the technologies that he pioneered, with abstract dance interpretations based on those technologies, like stop motion or body painting and body projection. And then the last third is basically just direct quotes from his films, bringing them back to life. Like with Pas de deux, the effects in that film took him months to make, but now thanks to technology we can duplicate them live. It's not a story ballet per se, but there is a story from beginning to end. What was the process like of taking movement quotes from McLaren's films? "
" The article Broadway's "Carousel" Stars Some Familiar Ballet Faces was written on April 24 2018, in the domain of entertainment, which states: The Broadway revival of Richard Rogers and Oscar Hammerstein's Carousel opened last week, and while it stars luminaries from the worlds of musical theater (Joshua Henry, Jessie Mueller) and opera (soprano Renée Fleming), it also stars choreography by one of ballet's own heavy hitters: New York City Ballet soloist and resident choreographer Justin Peck, who shares top billing with the musical's director, Jack O'Brien. There are more than a few familiar faces onstage, too. NYCB principal Amar Ramasar is cast as ne'er-do-well sailor Jigger Craigin, while NYCB soloist Brittany Pollack plays Louise, who dances Act II's famous "dream ballet." American Ballet Theatre soloist Craig Salstein took a leave of absence from the company to serve as the show's dance captain and to perform in the ensemble, where he's joined by recent Miami City Ballet transplants Adriana Pierce and Andrei Chagas (a Pointe 2015 Star of the Corps). Several other veteran Broadway ballet dancers round out the cast, including An American in Paris alumni Leigh-Ann Esty (Miami City Ballet), David Prottas (NYCB) and Laura Feig (Atlanta Ballet, BalletX), and Come Fly Away's Amy Ruggiero (American Repertory Ballet, Ballet Austin, Twyla Tharp). "CBS Sunday Morning" recently ran a lengthy profile on Peck, who at age 30 has already established himself as one of the world's most in-demand choreographers. In addition to shedding light on his efforts to make ballet more accessible to modern audiences ("I don't want ballet to feel like an elitist art form"), Peck answers the question on everyone's mind in the post Peter Martins era: whether he's interested in becoming NYCB's next director. The profile also includes fun behind-the-scenes Carousel footage—check it out above. "
" The article The Joffrey Presents Ekman's "Midsummer Night's Dream" was written on April 24 2018, in the domain of entertainment, which states: This spring, The Joffrey Ballet will present the North American premiere of Alexander Ekman's Midsummer Night's Dream. The Swedish choreographer is best known for his absurdist and cutting-edge productions. "This is not Shakespeare's Midsummer," says Joffrey Ballet artistic director Ashley Wheater. The title of Ekman's version, which premiered with the Royal Swedish Ballet in 2015, refers not to Shakespeare but to Midsummer, the traditional Scandinavian summer solstice festival. The piece follows a young man through a day of revelry followed by a nightmare, blurring the line with reality. "It's a kind of otherworldly dream," says Wheater. Bringing Ekman's production to life is no small feat; the piece utilizes the entire Joffrey company. "I can't think of another performance that has so many props," says Wheater, listing giant bales of hay, long banquet tables, umbrellas, beach chairs and more. The piece features a commissioned score by Swedish composer Mikael Karlsson, which will be performed onstage by singer Anna von Hausswolff. "She is very much a part of the performance; she's kind of the narrator," says Wheater. Dancers also contribute to the narration with spoken text, including imagery of young love and a dose of humor. The Royal Swedish Ballet in Alexander Ekman's "Midsummer Night's Dream." Photo by Hans Nilsson, Courtesy Joffrey Ballet. This will be the fourth work by Ekman that The Joffrey has performed. "I think it says something that Alex trusts us to bring the work to its full realization," says Wheater. "It's not just a few ballet steps here and there; he asks you to fully engage with yourself, not only as a dancer but as an actor and a person." Ekman's Midsummer will run April 25–May 6 at the Auditorium Theatre in Chicago. "
" The article Bay Community News was written on October 6 2017, in the domain of entertainment, which states: The Bay County Sheriff’s Office has arrested four men from Atlanta after they allegedly hired local transients to cash fraudulent checks for them using stolen personal information. Three transients have also been arrested for their participation. BCSO Criminal Investigations opened an investigation after deputies initially responded to a call from a local transient who claimed he had been robbed by two men. Investigators now believe that four black males came from Atlanta to the Destin area on October 1, 2017, and rented two vehicles. On October 3, 2017, they came to Bay County and two of them─Johnathan Johnson and Germarco Johnson─ picked up transient Scott Allen McNeight, age 44. McNeight was recently released from prison on October 1, 2017. They first asked McNeight if he had a photo ID, which he did. The men took a photo of the ID and sent it to the two other men from Atlanta. A fraudulent check was created using McNeight’s information and stolen routing numbers and checking account numbers. The other two suspects from Atlanta─Clarence Suggs and Reginald Hughes─ picked up transient James Dean Riles, age 58, for the same purpose. The two transients were first taken to a thrift store in Springfield. The Atlanta men purchased better clothing for the transients to wear when they entered banks to cash the fraudulent checks. Once dressed, McNeight was taken to a bank on Panama City Beach and was able to cash a 1,600 check. Although the agreement was for the transients to get about $80 for their part in the scam, McNeight took half of the money and fled on foot. Demarko Johnson and Johnathan Johnson were able to find McNeight at a gas station at Magnolia Beach Road and Thomas Drive. The two men entered the gas station after McNeight, one of them carrying a tire iron. Johnathon Johnson put McNeight in a choke hold, and they threatened McNeight with the tire iron, and took the money, his wallet, and his cell phone. That was when McNeight called the BCSO to report the robbery. Although initially not forthcoming about the true circumstances surrounding the robbery, McNeight eventually told investigators how he got the money. James Dean Riles, the other transient, was unable to cash his fraudulent check at the first bank, and was taken by Clarence Suggs and Reginald Hughes to a second bank where he was successful. He was paid $80 and was taken by the two men to Millville and left. Riles then called the Panama City Police Department to file a complaint about what he had done. Riles had important tag information on one of the vehicles. Using the tag information, investigators were able to learn the two vehicles were rentals and eventually identified the four men from Atlanta. A BOLO was put out to local law enforcement on the vehicles. One was located at a business on East Avenue and contact was made with Clarence Suggs, age 27, and Reginald Hughes, age 26. They were arrested. Suggs and Hughes were charged with Counterfeiting of an Instrument with Intent to Defraud (two counts), Principal/Accessory to Uttering a False Bank Bill (two counts), and Larceny $300 or more but less than $5000 (two counts). A BCSO investigator spotted the second vehicle in a grocery store parking lot in Lynn Haven. He watched as a man left a bank adjacent to the parking lot, and got into the vehicle with two black males. 
A traffic stop was done and Johnathan Johnson, age 22, and Germarco Johnson, age 27, cousins, were arrested. Johnathan Johnson was charged with Robbery with a Weapon, Counterfeiting of an Instrument with Intent to Defraud (two counts), Principal/Accessory to Uttering a False Bank Bill (two counts), Larceny $300 or more but less than $5000 (two counts), and Violation of Probation for Financial Identity Fraud. Germarco Johnson was charged with Robbery with a Firearm, Counterfeiting of an Instrument with Intent to Defraud (two counts), Principal/Accessory to Uttering a False Bank Bill (two counts), and Larceny $300 or more but less than $5000 (two counts). The white male with them was identified as Charles Edward Sinard, age 39, a transient. He was also arrested and charged with Uttering a False Bank Bill, Larceny $300 or more but less than $5000, and Criminal Mischief, $1000 or more. The other two transients involved in this case, Riles and McNeight, were also arrested and charged with Uttering a False Bank Bill and Larceny $300 or more but less than $5000. During a search warrant on the two vehicles two printers, blank checks, cash, and a computer with check-making software were located and seized. All seven men were taken to the Bay County Jail and booked. "
" The article 25 Year Old Molests Child Under 12 was written on October 6 2017, in the domain of entertainment, which states: The Bay County Sheriff’s Office announced the arrest of a local man on charges he committed sexual battery on a child. The victim confided in a family member that Shanard Cameron had molested her. Cameron, age 25, allegedly took the victim without parental consent to a festival and had sex with her. The victim was interviewed at Gulf Coast Children’s Advocacy Center. Contact was made with Shanard Cameron and he was subsequently arrested and charged with Sexual Battery on a Child Under the Age of Twelve. 36 total views, 36 views today Share Us "
" The article How to install the Trac project management tool on Ubuntu 16.04 was written on May 30 2018, in the domain of tech, which states: One of the most challenging tasks on an admin's list is the management of projects and tickets. This can be especially overwhelming when you have a larger IT department and a staff working on numerous projects at once. But the management of projects and ticket issues doesn't just fall on the heads of large companies. Even if you're a one-person shop consultancy, it can be easy to drown in a quagmire of projects. Thankfully, there are a lot of tools available to help you with that. If you happen to be a fan of open source (and who isn't?), those tools are not only readily available, they are free and (generally speaking) easy to set up. I want to walk you through the installation of one such tool: Trac. Trac can be used as a wiki, a project management system, and for tracking bugs in software development. I'll be demonstrating the installation on a Ubuntu Server 16.04. And so, let's get to it. Installation The installation isn't terribly challenging, but does require a bit of typing. Log into your Ubuntu server and let's take care of the dependencies. If your Ubuntu Server platform is without Apache, install it with the command: sudo apt-get install apache2 -y Once that completes, install Trac with the command: sudo apt-get install trac libapache2-mod-wsgi -y Next, the auth_digest module must be enabled with the command: sudo a2enmod auth_digest Now we create the necessary document root for Trac (and give it the correct permissions) with the following commands: sudo mkdir /var/lib/trac sudo mkdir -p /var/www/html/trac sudo chown www-data:www-data /var/www/html/trac For our next trick, we create a Trac project directory (we'll call it test) with the command: sudo trac-admin /var/lib/trac/test initenv test sqlite:db/trac.db Time to give that new directory the proper permissions. This is done by issuing the following commands: sudo trac-admin /var/lib/trac/test deploy /var/www/html/trac/test sudo chown -R www-data:www-data /var/lib/trac/test sudo chown -R www-data:www-data /var/www/html/trac/test Finally, we create both an admin user and a standard user with the commands: sudo htdigest -c /var/lib/trac/test/.htdigest "test" admin sudo htdigest /var/lib/trac/test/.htdigest "test" USER Where USER is the name of the user you prefere. After both of the above commands, you'll be prompted to type (and confirm) a password. Remember these passwords. Configure Apache Create an Apache .conf file for Trac with the following command: sudo nano /etc/apache2/sites-available/trac.conf Add the following content to the new file: WSGIScriptAlias /trac/test /var/www/html/trac/test/cgi-bin/trac.wsgi AuthType Digest AuthName "test" AuthUserFile /var/lib/trac/test/.htdigest Require valid-user Save and close that file. Enable our new site (and restart Apache) with the following commands: sudo a2ensite trac sudo systemctl restart apache2 Accessing Trac Open a web browser and point it to http://SERVER_IP/trac/test (where SERVER_IP is the IP address of your server). You will be prompted for login credentials. Log in with the user admin and the password you set when you created admin user earlier. You can also login with the non-admin user you created. There is actually no difference between the users (which could be a deal-breaker for some). One thing to note: If you need more users, you'll create them from the command line (in similar fashion as you did above). 
For every user you need to add to Trac, issue the command: sudo htdigest /var/lib/trac/test/.htdigest "test" USER Where USER is the username. The above command will add the user to the project test. If you need to create more projects, you must go back through the process of creating a new project directory and then add users to it. Once you've successfully authenticated, you'll be presented with the Trac web interface, where you can begin working (Figure A). Figure A You can now go to Preferences and configure your installation of Trac. Once you've completed that, you can begin creating new tickets. If you get the (please configure the [header_logo] section in trac.ini) error, you can configure this with the command: sudo nano /var/lib/trac/test/conf/trac.ini In that file, you'll see the section: [header_logo] alt = (please configure the [header_logo] section in trac.ini) height = -1 link = src = site/your_project_logo.png width = -1 That is where you'll configure a header logo to suit your company. SEE: IT project management: 10 ways to stay under budget (free PDF) (TechRepublic) Congratulations You now have your Trac system up and running. Although you might find systems with more features (and a more powerful configuration system), Trac is very simple to set up and use. If you're looking for a basic ticketing system that can serve as a project management tool, Trac just might fit the bill. Automatically sign up for TechRepublic's Open Source Weekly Newsletter for more hot tips and tricks. "
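A minimal sketch of the Apache configuration the Trac article above describes, assuming the "test" project it creates. The directives are quoted in the article as a single run of text; the <Location> container shown here is an assumption (Apache needs the digest-auth directives inside a container for them to protect the /trac/test path), not something spelled out in the article.

    # /etc/apache2/sites-available/trac.conf -- sketch only; adjust paths and the
    # realm name ("test") to match your own project.
    WSGIScriptAlias /trac/test /var/www/html/trac/test/cgi-bin/trac.wsgi

    <Location "/trac/test">
        AuthType Digest
        AuthName "test"
        AuthUserFile /var/lib/trac/test/.htdigest
        Require valid-user
    </Location>

The article notes that additional projects require repeating the setup. Under the same assumptions, a second, hypothetical project named demo (the name is illustrative, not from the article) would reuse the article's own commands like so:

    # Create and deploy a second Trac environment, then give it an admin user.
    sudo trac-admin /var/lib/trac/demo initenv demo sqlite:db/trac.db
    sudo trac-admin /var/lib/trac/demo deploy /var/www/html/trac/demo
    sudo chown -R www-data:www-data /var/lib/trac/demo /var/www/html/trac/demo
    sudo htdigest -c /var/lib/trac/demo/.htdigest "demo" admin
    # Add a matching WSGIScriptAlias and <Location "/trac/demo"> stanza to
    # trac.conf, then reload Apache:
    sudo systemctl reload apache2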
" The article New Amazon class certifies cloud pros in securing data on AWS was written on April 24 2018, in the domain of politics, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: A new class from Amazon, the AWS Certified Security - Specialty Exam, will validate a cloud pro's ability to secure the AWS platform. Cloud skills are in high demand, but added security expertise could help set job seekers apart. A new professional exam from Amazon Web Services (AWS) will help cloud experts validate their ability to secure data on the platform, according to a Monday blog post. The AWS Certified Security - Specialty Exam is now available to those who hold either an Associate or Cloud Practitioner certification from AWS. As noted in the post, AWS recommends that those taking the exam have at least five years experience working in IT security and two years experience working on AWS workloads. The exam will deal with such topics as "incident response, logging and monitoring, infrastructure security, identity and access management, and data protection," the post said. The exam consists of 65 multiple choice questions and will likely take 170 minutes to complete. The registration fee is $300. SEE: Cloud computing policy (Tech Pro Research) According to the post, once the exam is complete, the test taker will have a working knowledge or understanding of the following: Specialized data classifications on AWS AWS data protection mechanisms Data encryption methods and AWS mechanisms to implement them Secure Internet protocols and how to implement them on AWS AWS security services and features Additionally, the post noted that those who pass the exam will have a competency in working with AWS security services in production, an understanding of security operations, and the "ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements." For those looking to prepare for the exam, the post recommends going to the AWS Training website and working on the Advanced Architecting on AWS and Security Operations on AWS trainings. Additional security trainings on AWS Security Fundamentals, Authentication and Authorization with AWS Identity and Access Management, AWS Shared Responsibility Model, and AWS Well-Architected Training are also helpful. Additionally, compliance and security whitepapers will also help prepare would-be test takers. Stay informed, click here to subscribe to the TechRepublic Cloud Insights newsletter. Subscribe Also see "
" The article Qualcomm XR1 chip could bring faster, cheaper AR/VR to the enterprise was written on May 30 2018, in the domain of politics, which states: Qualcomm's new Snapdragon XR1 chip, announced via a Tuesday press release, aims to break down the barrier for high-quality virtual reality (VR) and augmented reality (AR) and bring the technologies to lower-end devices. If successful, the XR1 chip could improve technologies found in modern smart glasses, and make VR and AR more affordable to get into for smaller companies. The chip could also help bring more artificial intelligence (AI) functionality into AR as well, the release noted. In its release, Qualcomm called the XR1 an Extended Reality (XR) platform, noting that it will help bring higher quality experiences to mass-produced devices. And the addition of the AI capabilities will provide "better interactivity, power consumption and thermal efficiency," the release said. SEE: Virtual and augmented reality policy (Tech Pro Research) The XR1 features an ARM-based multi-core CPU, a vector processor, a GPU, and a dedicated AI engine for on-board processing. A software layer with dedicated machine learning, connectivity, and security is also part of the platform, the release said. The chip can handle up to 4K definition at 60 frames per second, according to the release. It also supports OpenGL, OpenCL, and Vulkan, and its AI capabilities contribute to computer vision features. Other hallmarks of the XR1 are high-fidelity audio and six-degrees of freedom (6DoF) head tracking and controller capabilities, making it easier to get around in the virtual world. "As technology evolves and consumer demand grows, we envision XR devices playing a wider variety of roles in consumers' and workers' daily lives," Alex Katouzian, senior vice president and general manager of Qualcomm's Mobile Business Unit, said in the release. OEMs like Meta, VIVE, Vuzix, and Picoare are already building on the XR1 platform, the release said. The big takeaways for tech leaders: Qualcomm has unveiled the Snapdragon XR1 chip, which could bring high-quality AR and VR experiences to more users, at a lower cost. The Qualcomm Snapdragon XR1 features an on-board AI engine to boost computer vision capabilities in AR applications. Stay informed, click here to subscribe to the TechRepublic Next Big Thing newsletter. Subscribe Also see "
" The article Why are companies moving to the cloud? 81% simply fear 'missing out' was written on August 14 2017, in the domain of tech, which states: If you've ever wondered why so many companies are making moves toward the cloud, the answer may surprise you: It's fear of missing out. According to recent report from Commvault and CITO Research, 81% of business leaders are embracing the cloud because they're concerned about missing out on cloud advancements. So, just how many executives are making that move? According to the report, 93% of respondents said that at least some of their processes were being moved to the cloud. Additionally, 56% said that they had already moved, or intended to move, all of their processes to the cloud. "The survey unequivocally confirms that Cloud FOMO is real and on the mind of C-level and other IT leaders who are grappling with bringing the value of this new frontier to their organizations, from increasing IT outcomes to being a strategic driver for increased business agility," Dan Woods, CTO of CITO Research, said in the release. "The research indicates the migration toward the cloud is underway in full force, even as companies struggle to understand cloud capabilities. Data protection and recovery was highlighted as a fundamental area where the cloud is having significant business impact." SEE: Special report: The cloud v. data center decision (free PDF) Don Foster, senior director of solutions marketing for Commvault, said in the report that cloud technologies are still seen as a key driver for digital transformation. As such, it makes sense that these business leaders would be concerned about getting to the cloud quickly. The most important cloud projects cited by the respondents were data protection and backup, and data recovery, as noted by 75% of the respondents and 63% respectively. However, there are some challenges holding these companies back from realizing their cloudy dreams. The sheer volume of data was cited by 68% of those surveyed as a key barrier, while 65% pointed to a struggle with skills and talent, and 55% said policies were the biggest barrier. These business leaders are putting their money on the line, too. Some 87% plan on putting more money in their budget for cloud investments. The reasons for why these respondents wanted to move to the cloud were varied. Of those surveyed, 33% noted that "customer focus through business agility" was their primary reason for moving to the cloud. Cost savings were the primary reason for 22% of respondents and 20% said "innovation and development of new apps, products and services" was the driving force behind their cloud journey. The 3 big takeaways for TechRepublic readers Some 81% of business leaders said that they're moving to the cloud simply out of a fear of missing out on the tech advances provided by the technology, according to a report from Commvault. Of those questioned, 93% said they would be moving at least some of their processes to the cloud, while increasing their budget for it. Increasing agility and customer focus was the main reason for 33% of business leaders to make a move to the cloud. Stay informed, click here to subscribe to the TechRepublic Cloud Insights newsletter. Subscribe Also see "
" The article How to request your personal data under GDPR was written on April 24 2018, in the domain of tech, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: Individuals can get access to all of their data from a given firm, including their employer, by filing a subject access request. The GDPR will eliminate the cost for subject access requests and shorten the required response time from 40 days to 30. The May 25 deadline for the EU's General Data Protection Regulation (GDPR) is fast-approaching, and the coming changes will greatly shift the ability of companies to interact with customer data. Many people know the GDPR for its hard-line regulation around the "right to be forgotten," where an individual can request a company to erase the personal data it holds on them. However, it also contains the right to access any information that may be held by a company, including your employer. The process for data access under GDPR will be mostly the same as it was under the Data Protection Act of 1998, but with a few slight differences. For starters, a person will need to file a subject access request (SAR) that, as noted by the Guardian, is simply "an email, fax or letter asking for their personal data." SEE: EU General Data Protection Regulation (GDPR) policy (Tech Pro Research) For clear guidelines on submitting an SAR, see the Subject access code of practice from the Information Commissioner's Office (ICO). There is no particular format required, as long as the request is made in writing. There are two key differences between SAR requests made under the Data Privacy Act and those made under GDPR: The cost and time frame. Before GDPR, the maximum fee that could be charged for access to your data was £10, or about $14. Under GDPR, however, that fee is being removed for standard requests. Although, the ICO also notes that a firm may charge a "reasonable fee" when "a request is manifestly unfounded or excessive, particularly if it is repetitive." According to SAR guidelines from the ICO, an individual should have the personal data held on them described, be told whether their personal data is being processes, be told why it's being processed, be told if that data is being sent anywhere else, and be given a copy the data and details of its sourcing. The other detail that will change with personal data access under GDPR is how long companies have to respond to your request. Under the Data Privacy Act, companies had 40 calendar days to respond once they received a request. Now, however, they will have to provide the data within one month of receiving the request. The company can file for an extension of an extra two months if the "requests are complex or numerous," according to the ICO's right of access page. If the request is made electronically, the firm will provide the data in an accessible electronic format. However, the ICO's page notes that GDPR best practices recommend companies establish a secure self-service portal system for easy access. Stay informed, click here to subscribe to the TechRepublic Cybersecurity Insider newsletter. Subscribe Also see "
" The article 5 data protection policies your employees must know in the post-GDPR era was written on July 3 2018, in the domain of tech, which states: The EU's General Data Protection Regulation (GDPR) went into effect in May, requiring all organizations that handle the data of EU citizens to comply with its provisions regarding collecting and using personal data. However, a majority of companies likely missed the compliance deadline, and many employees remain unaware of the policies needed to keep data safe. "Data privacy is a hot topic with GDPR going into effect," said Dave Rickard, technical director at CIPHER Security. "An awful lot of companies may not think they have exposure to it, but there are lots of variables in that." For example, one online retailer Rickard works with has many customers from the EU, but can't geolocate them from the website. Others don't work with EU citizens, but have data processing and storage facilities there, which are also subject to GDPR. SEE: EU General Data Protection Regulation (GDPR) policy (Tech Pro Research) GDPR will likely influence data privacy policies in other countries, Rickard said. However, cultural differences, particularly between the EU and US, may make this difficult. "In the EU people are very centered on the perspective that 'My name, my social security number, my passport information, everything that is PII about me, belongs to me. It's part of my individuality,'" he said. "Whereas in North America, people have long since taken the perspective instead that data is currency. There are so many business models that are built on it. Data is money." The majority of companies that need to be compliant with GDPR are not yet, Rickard said. "I'd say compliance right now is only at about 35% or 40% at the most," he said. "I think a lot of people are taking a wait and see approach." Some of the bigger players like Facebook, Google, and Amazon are going to be the canaries in the coal mine, Rickard said. "I think that they'll have actions taken on them first, and people are going to wait and see if the actual GDPR penalties play out the way that they've been published." Companies that fail to comply with GDPR will face a penalty of either 4% of their global revenue or €20 million, whichever is greater. Here are five types of policies that companies must ensure they have in place and have trained employees on in the age of GDPR, according to Rickard. 1. Encryption policies Most companies lack policies around data encryption, Rickard said. "Most people who are data owners are unaware of whether their data is encrypted at rest or not," he added. "GDPR is big on encryption at rest." SEE: Encryption policy (Tech Pro Research) 2. Acceptable use policies An acceptable use policy should covers things like what applications are allowed, what web searching and social media habits are appropriate for the business, and the potential threats to brand reputation, Rickard said. 3. Password policies Passwords remain a common digital entry point into an organization for hackers. Even if, in the best case scenario, employees use complex passwords that are changed often and not shared, human error and carelessness can still put a business at risk. "One of the easiest ways to breach a company is to put somebody on the janitorial staff and go looking at desks," Rickard said. "People often have Post-it notes on monitors with passwords on them." 4. 
Email policies IT should have an email policy in place that hardens systems and can detect spam and viruses, Rickard said. "The kind of information that can be disclosed via email should be spelled out very clearly," he added. 5. Data processing policies Companies need to do data process flow mapping to see what data is being collected, how it's being processed, and who is receiving processed copies, Rickard said. "GDPR closes all those gaps," he added. Employee training is paramount for ensuring these policies are enforced, Rickard said. Raising awareness of the threat landscape and common vulnerabilities can help counteract human error. "Security awareness and training is the cornerstone of any security program," he added. For tips on how to best train employees on cybersecurity practices, click here. Stay up to date on all the latest cybersecurity threats. Click here to subscribe to the TechRepublic Cybersecurity Insider newsletter. Subscribe Also see "
" The article Why a cloud-friendly Java could finally be possible with Jakarta EE was written on April 18 2018, in the domain of tech, which states: Over the past two decades, Java has arguably been the most successful programming language on the planet. Go is cool, Swift is nifty, but old-school Java keeps reinventing itself to power both yesterday's and tomorrow's applications. Depending on how you count, some 14 million Java programmers code today, with many of them paid well to maintain massive enterprise applications (an estimated 80% of enterprise workloads run on Java). Redmonk, in its latest Q1 2018 survey, says Java is the second-most popular language after JavaScript among developers. Not that Java lacks challenges. For example, Java is perhaps the most divisive technology in the industry—a morass of competing vendors with a constipated governance model that excludes much more than it includes. In this way, Java has left obvious gaps and frustration for developers who need a bridge to a cloud-native future. SEE: Job description: Java developer (Tech Pro Research) To ease that frustration, Tuesday the Eclipse Foundation unveiled new directions for Java EE under the recently-named-by-community-vote Jakarta EE Working Group, the successor to Java EE (which remains licensed by Oracle and maintained under the JCP). Java, cloud-friendly? It just might happen. An open Java The one thing everyone agrees about Java is that it's imperfect. And yet there's hope. No longer your grandparent's Cobol, what if a vibrant community embraced Jakarta EE and pushed it much faster than any Java EE before? Under the Eclipse Foundation's guidance, we may finally get the power of open source collaboration to build on the best of Java's two decades of work. Through this new Jakarta EE Working Group process, we should see big Java EE vendors like IBM, Red Hat, and Oracle working within the open processes of the Eclipse Foundation with smaller vendors like Tomitribe and Payara. In this world, there's no single vendor to impose its will on Java. Instead, we may finally get a true code meritocracy where Java communities and individuals function as peers. Instead of a divisive force, Jakarta EE could become a catalyst to join disparate Java communities behind a shared goal. In this case, my bet is on a race to some version of cloud-native implementation for Jakarata EE. You can read all the details on the new Eclipse Foundation governance model online but for me, it's much more interesting to see where the community wants to go. To the credit of the Eclipse Foundation, they surveyed more than 1,800 Java developers worldwide to take the pulse of the Java community. Under Oracle's (or Sun's) control, this sort of community outreach simply didn't happen (though, to its credit, Oracle made the decision to move Java to the Eclipse Foundation's stewardship). Java's cloudy future In the survey, the Eclipse Foundation learned that the three most critical areas that developers want Jakarta EE to prioritize are: Better support for microservices (60%) Native integration with Kubernetes (57%) Faster pace of innovation (47%) SEE: How to build a successful developer career (free PDF) (TechRepublic) Almost half (45%) of the Java developers surveyed are already building microservices, with more (21%) planning to do so within the next 12 months. 
Add to this the fact that half of these developers currently only run a fifth of their Java applications in the cloud but 30% say they'll run 60% or more of their applications in the cloud, and it's clear how much pent-up demand there is for a more cloud-friendly Java. To get there, roughly a third of the developers surveyed have embraced Kubernetes. This is a cloud-savvy crowd that needs their preferred programming model to keep pace with their ambitions. None of this was a surprise, of course. Java developers aren't living in a cloud-free world. Developers want a framework of tools that helps them be more successful using the Java skills they already have to build next-generation, cloud native applications. With the new Jakarta EE, they just might get their wish. Click here to subscribe to TechRepublic's Cloud Insights newsletter. Subscribe Also see "
" The article Report: 32% of IT pros plan to switch jobs in 2018, most for better pay and training was written on December 11 2017, in the domain of tech, which states: High demand and low supply of IT professionals may lead to turnover in the new year, a new report found. Some 32% of IT professionals said they plan to search for or take an IT job with a new employer in 2018, according to Spiceworks' 2018 IT Career Outlook. Among those planning to make a job move, 75% said they are seeking a better salary, 70% said they want to advance their skills, and 39% said they want to work for a company that prioritizes IT more than the one they currently work for. Of the 2,163 IT professionals from North America and Europe surveyed, 7% said they plan to start working as a consultant, while 5% said they plan to leave the IT industry altogether. Another 2% reported plans to retire in 2018. Some employees said they expect positive changes from their current employer in the new year: 51% of IT professionals said they expect a raise from their current employer next year, while 21% said they also expect a promotion. However, 24% said they don't expect any career changes or raises in the next year. SEE: IT jobs 2018: Hiring priorities, growth areas, and strategies to fill open roles (Tech Pro Research) Millennials in particular (36%) were more likely to say they were seeking new employment—more than Gen Xers (32%) and baby boomers (23%). Millennial IT professionals are also more likely to leave their current employer to find a better salary, advance their skills, work for a more talented team, and receive better employee perks than older employees. Meanwhile, Gen X IT professionals are more likely to leave their jobs to seek a better work-life balance, while baby boomers are more likely to leave due to burnout. Despite those who plan to leave their jobs, 70% of IT professionals say they are satisfied with their current jobs—though 63% say they believe they are underpaid, the report found. This number is even higher among millennials: 68% of millennial IT workers feel underpaid, compared to 60% of Gen X and 61% of baby boomers. In terms of salary, millennial IT professionals are paid a median income of $50,000 per year, while Gen X IT professionals are paid $65,000, and baby boomers are paid $70,000. These salaries also correlate to years of experience, the report noted. In terms of tech skills needed to be successful in any IT job in the coming year, 81% of IT professionals reported that cybersecurity expertise was critical. Despite understanding how critical this area is, only 19% of IT pros reported having advanced cybersecurity knowledge—potentially putting organizations at risk. This echoes previous research about the dearth of cybersecurity professionals currently available to companies, as well as the need to upskill employees to fill security gaps. SEE: Cheat sheet: How to become a cybersecurity pro About 75% of IT professionals also said that it was critical to have experience in networking, infrastructure hardware, end-user devices, and storage and backup. Of these, 41% said they have advanced networking skills, 50% said they have advanced infrastructure hardware skills, and 79% said they are advanced in supporting and troubleshooting end user devices, including laptops, desktops, and tablets. 
"Although the majority of IT professionals are satisfied with their jobs, many also believe they should be making more money, and will take the initiative to find an employer who is willing to pay them what they're worth in 2018," Peter Tsai, senior technology analyst at Spiceworks, said in a press release. "Many IT professionals are also motivated to change jobs to advance their skills, particularly in cybersecurity. As data breaches and ransomware outbreaks continue to haunt businesses, IT professionals recognize there is high demand for skilled security professionals now, and in the years to come." Want to use this data in your next business presentation? Feel free to copy and paste these top takeaways into your next slideshow. 32% of IT professionals said they plan to search for or take an IT job with a new employer in 2018. -Spiceworks, 2017 Among IT pros planning to make a job move, 75% said they are seeking a better salary, 70% said they want to advance their skills, and 39% said they want to work for a company that prioritizes IT more. -Spiceworks, 2017 81% of IT professionals reported that cybersecurity expertise was critical in the field, but only 19% said they had advanced cybersecurity skills. -Spiceworks, 2017 Image: iStockphoto/Rawpixel Keep up to date on all of the latest leadership news. Click here to subscribe to the TechRepublic Executive Briefing newsletter. Subscribe Also see "
" The article How to protect your company from tax season phishing scams was written on March 19 2018, in the domain of tech, which states: Unfortunately, it seems there's a phishing scheme to go along with virtually every event in life, whether a holiday, a tragedy, or an annual ritual. Tax time is not exempt, so to speak. Whether you work in finance or you support users who do, it's important to be on the lookout this tax season for phishing schemes geared towards obtaining confidential information from unsuspecting individuals. What should users look out for? A common phishing attempt involves compromised or spoofed emails which purport to be from an executive at your organization and are sent either to human resources or finance/payroll employees. The email requests a list of employees and their related W-2 forms. That's not all, however. Another common scam (which can occur throughout the year) involves receiving a phone call from an individual claiming to be from the IRS (caller ID can be spoofed to show this as well) who informs you that you owe money for back taxes and often threatens law enforcement retribution if payment (usually via credit card over the phone) isn't provided. The IRS will never call you on the phone to report you owe them money nor demand money over the phone; they utilize the postal service for such notifications. They also will not engage in threats and are supposed to provide an opportunity for you to work constructively with them or negotiate payment. SEE: IT leader's guide to cyberattack recovery (Tech Pro Research) What standard protection methods should be used? The typical safeguards against phishing can protect you and your employees; establish a policy against requesting confidential information through email, call people directly to verify such requests, arrange for secure transfer of data, and limit the number of employees who possess the authority to access or handle W-2 forms. The IRS also recommends contacting them about any malicious activity. Phishing attempts can be reported to phishing@irs.gov. If someone from your company has given out W-2 information, contact dataloss@irs.gov with a description of what happened and how many employees were affected. Also make sure not to attach any confidential information! If your company is contacted by scammers claiming you owe the IRS money, report it via the IRS Impersonation Scam Reporting webpage. You can also call 800-366-4484. You should also report this to the Federal Trade Commission via the FTC Complaint Assistant on FTC.gov. What else is available to help here? Education and establishing proper procedures can be helpful in minimizing risk, but I also highly recommend using technology to safeguard data as well. While both technology and humans may be prone to failure, technology is harder to fool or take advantage of. With that in mind, data loss prevention (DLP) can be a handy tool in combating phishing gimmicks of this nature. DLP systems examine traffic coming in and out of an organization: emails, instant messages, web access - anything that is sent over the network. These systems can sniff out confidential information such as Social Security numbers and block them from being transmitted. This comes with a potential cost, however; legitimate traffic may end up blocked, such as when employees email tax information to their tax preparers or their own personal accounts. This can pose a challenge for DLP systems (and those responsible for administering them) in separating the wheat from the chaff. 
The end result is undoubtedly a slew of false positives with frustrated and/or confused employees. SEE: Intrusion detection policy (Tech Pro Research) Another potential solution is user and entity behavior analytics (UEBA). UEBA can determine the likelihood the employee is sending tax information to themselves via their personal email address by analyzing behavioral patterns to determine the legitimacy of specific activities. For example, if an employee named Ray Donovan sends a W-2 form from his corporate email address (ray.donovan@company.com) to his Gmail address (ray.donovan72@gmail.com), UEBA can determine that it's highly likely this information is being sent to the same person and will not send a critical alert nor block the transmission. It helps if Ray has a history of sending himself emails of this nature so UEBA can mark that behavior as normal. However, in a genuine phishing scenario where Ray sends a W-2 form to SWRedLeader55@gmail.com, an email address he has not previously contacted, UEBA could determine that it's not the same person, analyze further using behavioral comparisons and send alerts or take action as necessary. What about a situation where an employee is emailing confidential information to themselves when they shouldn't (such as someone else's W-2 form, or their own despite company policies prohibiting this)? UEBA can still send alerts which can then result in investigational activity and appropriate discipline as needed, including termination. Making employees aware that this activity is analyzed and monitored can serve as a deterrent and ensure confidential information remains in appropriate hands. For more security tips and news, subscribe to our Cybersecurity Insider newsletter. Subscribe Also see: "
" The article HTC VIVE announces price for the VIVE Pro VR headset, opens preorders was written on March 19 2018, in the domain of tech, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: HTC has announced its latest VR headset, the VIVE Pro, and has also opened up preorders for the $799 unit. The HTC VIVE Pro offers 78% increase in resolution over the previous VIVE model and is also capable of wireless connectivity using WiGig technology. HTC's new flagship VR headset, the VIVE Pro, is now available for preorder for $799. Included with the new VR headset is a six-month subscription to VIVEPORT, a VR gaming subscription service where subscribers can choose five titles from the service's catalog to rent at any given time. After the trial expires a VIVEPORT subscription will cost $8.99 per month, though purchasing a subscription prior to March 22nd will lock in the current rate of $6.99 per month, which will increase to $8.99 at that point. Along with the release of the VIVE Pro, HTC is reducing the cost of the currently available VIVE headset to $499, a reduction of $100. Purchasing the currently available VIVE includes a two-month subscription to VIVEPORT and a free copy of Fallout 4 VR. The VIVE Pro's capabilities The VIVE Pro will begin shipping on April 5, 2018, and is a considerable upgrade over the previous VIVE model, all without needing much in the way of upgrades to the PC that powers the headset (VIVE units aren't standalone). The VIVE Pro has dual OLED screens with a resolution of 2880x1600, a 78% increase in resolution over the current generation VIVE. It has a 90 Hz refresh rate and a 110 degree field of view and can be used with the current generation of controllers and base stations. SEE: New equipment budget policy (Tech Pro Research) The VIVE Pro VR headset is also WiGig compatible, meaning that users won't need to tether it to a computer or base station, provided they're willing to pay for a separate wireless module, which hasn't been priced or given a release date yet. HTC VIVE US general manager Daniel O'Brien said that the VIVE Pro is designed to deliver "the best quality display and visual experience to the most discerning VR enthusiasts," as well as offering a premium product to drive adoption of VR technology and products. Developers interested in becoming a part of HTC's vision for the future of VR can learn more about building applications for the HTC VIVE Pro at the VIVE developer's portal. Like other VR development platforms, VIVE makes use of Unity and the Unreal Engine and a proprietary SDK for building apps. Learn more about the latest tech trends by subscribing to our Next Big Thing newsletter. Subscribe Also see "
" The article Are smart locks secure? AV-TEST has the answer was written on December 10 2017, in the domain of tech, which states: Smart locks began appearing on doors when building automation and the Internet of Things (IoT) went mainstream. However, the public's acceptance of smart locks has been less than stellar—initial cost vs. actual benefits are seemingly the primary reason why. Image: Amazon The low adoption of smart locks may soon change if the powers that be at Amazon have their way. The company recently introduced Amazon Key, a remotely-controlled building-access platform—consisting of Amazon's Cloud Cam, a compatible smart lock, and smartphone app (shown to the right)—that allows Amazon-approved delivery personnel to open locked doors and leave deliveries inside the customer's home or office. A slew of additional conveniences not related to package delivery may also help the acceptance of smart locks. That said, the public's interest in smart locks will only improve if the benefits outweigh the costs, and the technology is proven to be physically safe and electronically secure. Security issues have already been reported about Amazon Key. Liam Tung in his ZDNet article Amazon: We're fixing a flaw that leaves Key security camera open to Wi-Fi jamming writes, "A malicious courier could easily freeze the Key's Cloud Cam and roam a customer's house unmonitored." Concerns about smart locks and security were raised way back in 2013. My TechRepublic article High-tech home security products: Who are they really helping? quotes several experts who question the security of smart locks and the technology supporting them. SEE: Internet of Things Policy (Tech Pro Research) AV-TEST put six smart locks' data security through their paces Knowing what experts were saying about smart-lock systems four years ago and the likelihood of smart locks becoming popular, the people at AV-TEST, an independent IT-security testing lab, decided to see if things have improved. The lab's engineers developed a test program and put these six smart locks through their paces: August Smart Lock (USA) Burg-Wachter secuENTRY easy 501 (Germany) Danalock V3 (Denmark) eQ-3 Equiva Bluetooth Smart Lock (Germany) Noke Padlock (USA) Nuki Combo (Austria) Test environments Data security was the first thing considered by the engineers with special emphasis on acquisition, storage, and transmission of data; the following image depicts how they employed Wireshark to capture traffic between the smart lock being tested and the controlling smartphone application. Besides communications, the team examined each system's hardware and software, tested the software-update process, and determined whether the associated smart-lock application had any security issues. Image: AV-TEST The results It seems smart locks have improved considerably in the past four-plus years. From the AV-TEST report: "Convenience does not have to mean less security. This reassuring conclusion can be made following the surprisingly strong results of the smart-lock testing." Concerning the test results, the test engineers offer the following insights. Installation: Despite physical differences, all smart locks evaluated by AV-TEST installed easily—systems manufactured by eQ-3 and Nuki being the easiest. Local communications: All tested smart locks are locally activated via Bluetooth. "As a standard feature, the smart locks use encryption, mostly AES with at least 128 bits," mentions the report. 
"Three locks, August, Danalock, and Nuki can encrypt at a higher rate—AES with 256 bits." The AV-TEST engineers report that smart locks by August, Danalock, and Nuki can integrate with local Wi-Fi networks; this allows location-independent remote control using the mobile device's smart-lock app. According to the report, neither Bluetooth nor SSL-encrypted Wi-Fi connections introduce any detectable vulnerabilities. Data protection: AV-TEST's engineers measured each smart lock's privacy policy against European data-protection law. One concern centered on whether systems save more data than what is needed to operate properly. From the report: "For August, Danalock, and Noke, the testers see a need for improvement, e.g., in terms of information on stored data and its use by third parties. An adaptation to European data-protection law would easily remedy these defects." SEE: Cybersecurity in an IoT and mobile world (free PDF) (ZDNet/TechRepublic special report) Smartphone-app security: The report warns that apps are a potential target for attackers, in particular how each app manages access permissions and log files. All smart-lock systems but August and Danalock handled access and log files adequately. The engineers are concerned that August and Danalock generate comprehensive debug logs that provide clues to how the app functions. Additionally, August keeps debug logs in a protected area, whereas Danalock does not, making it possible to read the log files using tools like Android Logcat. The report suggests both August and Danalock need to improve security in this area. One serious misstep: The AV-TEST paper took issue with the smart lock from Burg-Wachter because the lock system does not require the user to change the default admin password. "A dangerous complacency, as IoT devices with unchanged default login details are easy prey for attackers," mentions the report. Overall results Each smart lock was rated on local communications, external communications, app security, and data protection, with three stars holding top honors. The following graph shows the overall results. Image: AV-TEST On a positive note, the AT-TEST report notes, "All in all, it appears the manufacturers of smart door locks, unlike many other manufacturers of smart home products, did their homework." The report concludes by saying, "The AV-TEST Institute rated five out of six of the locking systems evaluated in the quick test as having solid basic security with theoretical vulnerabilities at the most." Stay informed about IT security news, tips, and tutorials—subscribe to our Cybersecurity Insider newsletter. Subscribe Also see "
" The article Top 5: Things AI might actually be good for was written on February 2 2018, in the domain of tech, which states: Artificial intelligence (AI) is increasingly mocked as being used as a marketing term. But AI is also being used to create some legitimately useful tools. So, to beat back some of the less useful uses of the term, here are five things AI might actually be good for: 1. Farming FarmLogs is an example of complex data analysis that tracks weather, soil conditions, historical satellite imagery and helps farmers determine what kind of plant growth to expect and how to maximize crop yields. SEE: Farming for the future: How one company uses big data to maximize yields and minimize impact (TechRepublic) 2. Medical diagnosis Watson made this use of AI famous, and while you can debate its effectiveness, others like Intel are working on things like precision medicine. Machine learning can compare molecular tests with previous cases to customize treatments. Computer interpretation of medical images as an aid to diagnosis is also making rapid advances. SEE: Beware AI's magical promises, as seen in IBM Watson's underwhelming cancer play (TechRepublic) 3. Stopping predators The National Center for Missing and Exploited Children is experimenting with AI to help automate and speed up scanning websites for suspicious content. SEE: IT leader's guide to deep learning (Tech Pro Research) 4. Recruiting AI can help sort through resumes and rank candidates. Unilever used an AI called HireVue to analyze candidates' answers body language and tone, cutting down time to hire and increasing offers and acceptance rates. SEE: How to implement AI and machine learning (ZDNet) | Download the report as a PDF (TechRepublic) 5. Customer service AI assistants were made famous by smartphones, but where they really shine is providing assistance to human customer service agents. AI can be used to process natural language and route people to the right agent and even listen in and prompt agents with queries and responses. We didn't even include autonomous cars, which use all kinds of machine learning and types of AI to interpret sensors. And there are loads more. There's a lot of fog around the idea of AI these days, but if you look closely you can see some pretty good examples of the real thing. For more about artificial intelligence and other innovations, subscribe to our Next Big Thing newsletter. Subscribe Also see: "
" The article How 5G will power innovations in VR and artificial intelligence was written on April 19 2018, in the domain of tech, which states: Manish Vyas, president of business communication at Tech Mahindra, spoke with TechRepublic's Dan Patterson about innovations that will be enabled by the arrival of 5G. Watch the video, or read the full transcript of their conversation below: Patterson: Help us understand, Manish, 5G we hear a lot of hype about. What's the reality of this new wireless standard? Vyas: The reality is that it does promise us to transform. You very rightly use the word digital, but my translation of digital is it promises to change the way people would live, work, and play going forward in a more significant fashion than what you saw with the previous generations. If I could just expand on that a bit, 5G is not just about the throughput and the speed and the power and the latencies, but 5G is about exciting, exciting propositions that will come our way both in the enterprise space and in the consumer domain. Given that 5G also combined with some of the other later technological innovations that are happening as we speak, for example, artificial intelligence, will just enable a certain set of use cases. It will just change the paradigm of how people communicate, how people consume experiences, or how people transact business. All of that is going to change, so I guess that's the reason why everybody is so hyped up, if I may, about 5G. Patterson: So, how? We know that the capabilities of 2G, 3G, and 4G has iterated and created new technological capabilities. What specifically about 5G enables IoT, enables high-speed mobile devices, and enables artificial intelligence? Vyas: Yes. I think it is, and all of them are related, the convergence of other software technologies that are advancing at the speed of light right now. Let's take two of them, just to build a use case, right? Let's take VR, virtual reality. Let's take IoT, which is the ability to connect the devices and harness the power of data, right? Now, combine that with the wireless advancements that will happen with the 5G technology from an access, as well as from how the data is processed. It will create use cases that have hereto not been possible. SEE: Virtual and augmented reality policy (Tech Pro Research) One of my favorite examples that I often give is think of an NFL game, and think of the tailgate parties that happen outside any stadium. There are any number of thousands of people who are perennial tailgate party goers. They don't even enter the stadium game after game, year after year, but they like to spend a lot of money outside the stadium. The experience that they will now get, imagine with the VR and with the discoverable aspect of the network that 5G does, where if you, Dan, for example, as a big fan of a certain running back of a certain NFL team, as you're partying with your buddies outside a certain stadium, you will be notified that something dramatic happened inside the stadium and with whatever device that is available at the time, which is also by the way advancing, with 5G and with the fact that you are discoverable by the network and the latencies and the availability of the network is like never before, the throughout is like never before, so the IoT on a camera or a device, the VR experience, powered by 5G, you will there and then and you will be the only person who will be able to stand almost right next to the running back and experience as the things explode and happen at the site. 
Just the sheer power of that use case is phenomenal, and the money that different people in the ecosystem will make out of it, including the telecom service provider, I think can be quite an interesting paradigm in my view. Patterson: What are some of the challenges or roadblocks to the rollout of 5G? Vyas: I think there are plenty. One of the biggest ones is going to be, without even getting into the technology aspect at this point, I would say is still a major part of the industry will still struggle to find the justification to invest in the capital, from a business case ROI standpoint. Now, on that one, one is also hearing as I go around the world and meet different CTOs and other executives of service providers, one is hearing that there are different ways of skinning that cat, if I may. The overall cost of 5G deployment is likely to be atmospherically high at this point, there are all indicators that the likely cost is going to be cheaper than what was in 4G, and if that happens, that itself is probably a business case-justification. SEE: IT pro's guide to the evolution and impact of 5G technology (free PDF) (TechRepublic) Of course, there is a bigger underlying assumption that the prices in the marketplace at least hold up, and if they drop, they drop only marginally and not dramatically. There's no guarantee of that, but at least that's a possibility. That's one challenge, clearly that the market is going to face. The second is going to be a more technical and execution challenge, which is the availability of the technology, the trials that need to go through in a very satisfactory fashion worldwide so that people get enough confidence to go and deploy the technology at a very large scale, which I don't think is a question of if, it is more a question of when. I guess the challenge would be more for delay rather than really making it happen. Patterson: Manish, I think that that is a great point. I wonder if you could leave us with say, the next 18- to 36-months in the rollout of 5G. Where are we in a year, year and a half, and where are we in three or four years? Vyas: I would say in three to four years time, I would be surprised if the world is not entirely enabled by 5G. When I say that with a sense of responsibility that there would be still be a certain set of companies that may not adopt it, because of how they would want to position it, which would be challenging, and they would be under tremendous pressure, but I believe that in the next three to four years, as the other technologies also evolve, the other software technologies, I believe in the next three to four years, we will see a very large scale deployment worldwide. In the imminent short-term, the 18, 20 months, I think we will see the early adopters clearly making progress. This year alone, we might see some of the major tier-one service providers in the North American continent, we will see them doing about 12 to 15 trials. By that, I mean 12 to 15 locations or cities would be 5G-enabled by the end of the year. For more about the latest tech innovations, subscribe to our Next Big Thing newsletter. Subscribe Also see "
" The article Five programming languages with hidden flaws vulnerable to hackers was written on March 13 2017, in the domain of tech, which states: Writing bug-free software is practically impossible, due to the impracticality of predicting every way in which code might be executed. But even if developers go above and beyond to avoid flaws that can be exploited by hackers, attackers can often still take advantage of vulnerabilities in the design of the underlying programming language. At the recent Black Hat Europe conference, IOActive security services revealed it had identified flaws in five major, interpreted programming languages that could be used by hackers in crafting an attack. "With regards to the interpreted programming languages vulnerabilities, software developers may unknowingly include code in an application that can be used in a way that the designer did not foresee," it writes. SEE: Hiring kit: Python developer (Tech Pro Research) "Some of these behaviors pose a security risk to applications that were securely developed according to guidelines." These are the five programming languages and the flaws that were identified: 1. Python Currently enjoying a surge in usage, Python is regularly used by web and desktop developers, sysadmin/devops, and more recently by data scientists and machine-learning engineers. The IOActive paper found that Python contains undocumented methods and local environment variables that can be used to execute operating-system commands. Both Python's mimetools and pydoc libraries have undocumented methods that can be exploited in this way, which IOActive used to run Linux's id command. 2. Perl Popular for web server scripting, sysadmin jobs, network programming and automating various tasks, Perl has been in use since the late 1980s. IOActive highlights the fact that Perl contains a function that will attempt to execute one of the arguments passed to it as Perl code. It describes the practice as a "hidden feature" within a default Perl function for handling typemaps. 3. NodeJS NodeJS provides a server-side environment for executing JavaScript, the language commonly used for scripting in web browsers. IOActive found that NodeJS' built-in error messages for its require function could be exploited to determine whether a file name existed on the machine and to leak the first line of files on a system—potentially useful information for an attacker. 4. JRuby The Java implementation of the Ruby programming language was found to allow remote code execution in a way that isn't possible in Ruby as a base language. By calling executable Ruby code using a specific function in JRuby, IOActive was able to get the function to execute an operating system command, the Linux command id, by loading a file on a remote server. 5. PHP The venerable server scripting language was used to call an operating system command, again the Linux command id, using the shell_exec() function and exploiting the way PHP handles the names of constants. "Depending on how the PHP application has been developed, this may lead to remote command execution," say researchers. That said, many web admins have long known the potential risk posed by PHP's shell_exec() function, and how to disable it. Exploitable flaws in each programming language were identified using a tool called a differential fuzzer, which was designed to automatically find vulnerabilities. 
The fuzzer works by running through a large array of scenarios in each language, calling each of the languages' native functions with a wide variety of different arguments and observing the results. Keep up to date on all of the newest tech trends. Click here to subscribe to the TechRepublic Next Big Thing newsletter. Subscribe Also see "
" The article How the blockchain could help build a decentralized media economy was written on February 28 2018, in the domain of tech, which states: Jarrod Dicker, CEO of Po.et, talked with Dan Patterson about how his company uses the blockchain to document media assets. Watch the video, or read the full transcript of their conversation below: Patterson: The decentralized media economy is here. What the heck is a decentralized media economy? Well, surprise, surprise, it runs on the blockchain. ... You have worked at The Washington Post, as well as RebelMouse and other media companies. First, why a startup? Why a blockchain startup? And, why Po.et? Dicker: Three of the most frequent questions I get. So why a startup? I think my past three years at The Washington Post have been somewhat startup in a legacy media news organization, and the fact that we were run by Jeff Bezos, who owns the Post. It was an amazing time for news, right, and all news media. There was a lot of momentum and excitement behind all the work that we were doing. Especially when we took it to the tech side. And one thing that I've always focused on throughout my career was how do we build a better model for the business of media, which is what I found extremely hard to do. It's also extremely hard to do within just one media company. So over the past few years, I've been building something at The Washington Post called RED, Research Experimentation Development, where we were building new technologies and systems to use on the Post, off the Post, licensed white label to really help build a better media economy. Then I started really understanding not just the technologies behind blockchain, but the philosophies behind them. The idea of consensus and decentralization. Coincidentally, Po.et approached me in terms of going there, becoming the chief executive officer, and building the team. We could get into how that all went down, but that's how I ended up here now. Patterson: So the blockchain, this is possibly one of the most hyped technologies in recent memory. But it really is useful when you have to stamp a piece of asset, whatever that asset is. No matter whether it's gold and diamonds, or housing and real estate, or in this case, content. So a lot of us in media on the backend, we have content management systems and you can see a log of activity that is not necessarily for the public, but you can see what happens to a particular asset over time. Why is a blockchain good for public pieces of media? And when I say assets, in this case we're referring to a post, or a video, or an audio podcast. Why is the blockchain so good at documenting those types of assets? Dicker: Yeah, I think for one, a quick answer is the idea of immutability, right. The idea that something could be permanent and stored there. Mainly for attribution, being able to check to see who the owner of an idea is, whose authorship of a certain idea, and just make sure that that cannot be manipulated. I think that that's extremely important, but scary for a lot of media companies. I've had a couple conversations where folks in media have said, "Well, that's terrifying because what if I need to redact a statement?" Right, or what if I need to take something down for advertisements? And where I think we'll see the value of blockchain in media is also like philosophically how we change the way that we think and build content and creative ideas in the media landscape. 
What I mean by that is right now, we are used to a web, where we can delete, where we can change, and where we can alter. And what's really happened there is that people throw ideas out there, knowing that if they are wrong, they could change or delete, or things are not immutable, right, per se, which is what's happening on the blockchain. What I really hope beyond just the products of what it can do, is really that it'll change the way that we do things, right. Like in society today, there are consequences for things that you do. You can't just press the delete button or change, right. So you think about these sort of things before you actually act on them. And I think what we've seen in the media landscape is that people are very quick to send ideas or shoot ideas out there, knowing that they can hide behind something, or knowing that they could change something. And what does it really mean when you need to kind of account for those ideas, and when they are living on a blockchain forever? So one of which, when it comes to media on the blockchain, I think is this idea of security, immutability. SEE: What is blockchain? Understanding the technology and the revolution (free PDF) (TechRepublic) This new notion that you are now in control of your ideas and of your assets, both in a public and private, public key, private key manner. And being able to take control of that, right. We at Po.et, really want to focus more on the long tail creators, like we are accessible for everyone. We aim to be a library of the world's creative assets on the blockchain and are building different ways for access and people to do so. Whether that's through, as you mentioned, a content management system, or going directly to Po.et, or within your own domains. But once you do that, what can you actually do on the domain? And I think there's two major things. One is really being able to own and manage your ideas. So how do you issue a content license? What should that cost? How can you make sure that your ideas are being heard and transacted on? An example that I give often and we see it every single day on Twitter, is that if you put something up on Twitter, which is a piece of media that can go viral beyond just the at replies, there's also Huffington Post, and BuzzFeed, and others saying, "Hey, Dan. I love this. Do you mind if I use this?" And at that moment, you really have already put it on Twitter and you just assume, "Sure, right. It's already out there." Well, what if you had the opportunity to say, "Oh, there's clearly value in this content I'm creating. Sure, here's my private key with the smart contract and rules that you need to abide by if you want to license and leverage this content." So that is one thing that I think is extremely important that we'll learn both in terms of changing a behavior about applications of using blockchain for media. The other is this idea of transparency, as you mentioned, and reputation, right. In everything that we do in life, reputation matters. Like if you go to a hotel, you're going to look at the ratings. If you ride in an Uber, you look at ratings. Every single thing is based on that transparency and reputation. Even if you're going to eat something, right. Truth could be subjective, information could be subjective. Like the same way that you would look at maybe the back of a carton of milk and say, "Okay, there's a lot of fat, but I'm cool with it." Or, you may say, "No, like I don't want all that fat." 
But when it comes to information and transparency, that doesn't really exist. So I think this idea of building reputation for creators, not just to say, "That this creator tells the truth and this creator doesn't tell the truth." But just give transparency as to the way that they navigate it- Patterson: Yeah, ownership and the attribution- Dicker: Exactly. Patterson: Yeah. Dicker: Exactly, and be able to see, has it been fact checked? Right, who has seen it? Has this person published before? They've taken this picture in Sri Lanka, are they actually in Sri Lanka? Right, or are they in New Jersey? So I think that sort of information really is applicable, especially nowadays, when it comes to deep fakes and people putting information out there, to really just get more exposure and transparency to the end consumer, so that they can make decisions on their own based on what they're consuming. Patterson: Yeah, so the obvious use cases could be say, iStock photo and allowing creators to control the assets that are there. But also social media, as well as news content, or almost any piece of content where an exchange of trust has to happen. And really what you're talking about with transparency, is the exchange of trust equity. And being able to say, "This is the process by which a particular asset exists in an ecosystem, and you can vet and verify it using the blockchain." So you use the Ethereum blockchain. Dicker: We use the ERC-20. Patterson: Yeah, so explain to me how that applies to Po.et, not just in terms of content, but you are building a content library. Why did you elect to use Ethereum? What are some of the advantages of that? And, is this the next emerging hot blockchain? Dicker: So I want to back up and just make it clear. We stamp on the Bitcoin blockchain because it's the most secure blockchain. We leverage Ethereum for ERC-20 tokens because Po.et is a token dynamic-type model. In that, to build reputation, we need to build incentive systems, and that's why the Po.et token exists to help fuel and challenge the marketplace in order to get the best content up and the worst content down. I think with Ethereum, the most interesting thing that's happening is that they are opening up with these ERC-20 tokens and beyond, the opportunity for people to build their technologies on top of it. And that's where you've kind of seen a huge movement towards that blockchain and the opportunities of that blockchain. There's also companies like ConsenSys out in Bushwick, that are building right on top of this, companies for every single pocket of the industry that could leverage blockchain for proof, right, of value and really what it could really become. That is definitely happening to answer your question. Like that trend is moving and there's a lot of accessibility there. Now that being said, we're way off, right, from building scalable blockchain solutions when it comes to enterprise and conversations that we would have had three years ago when I was at The Washington Post in talking about technologies built there. So that is somewhat exciting because I don't believe that we are dead set on one blockchain for anything. SEE: Blockchain: An insider's guide (free PDF) (TechRepublic) Po.et's ambitions and the partnerships that we have and others that are building on top of the work that we're doing, are kind of opening up the same opportunity. Look, Po.et is build a protocol that is helping build reputation on the web. 
And if you want to help leverage that, to help build reputation within your applications, then you should build on Po.et. There's a company called Encrypt that's building on top of Po.et, which is really looking to circumvent censorship when it comes to information delivery. So if you are posting an article or have a website, and you are concerned that your IP could be blocked, how can you leverage Encrypt to make sure that that information is then spread out in a bunch of different devices and not just one central server? To then be able to come back and have that information delivered, and that's something that's being built on top of Po.et. There's other conversations of people in media that are looking for certain solutions that want to build on top of what we're doing, as well. So we leverage Bitcoin blockchain just because it's an easy marketing example to say, "Look. If you value your money and you trust Bitcoin blockchain because money is the most important thing to you at this moment, well, your ideas are equally important, right? And if your ideas are important, you want them to be as secure as your money." But there are opportunities for us to port to either. There's opportunities for us to mimic or build our own. I think it's still early days, but the biggest thing that we're looking to do now is influence a space that could benefit from new thinking, right, the business of media really trying to figure out how to strengthen media companies, how to allow them to evolve, how can get we give power back to creators, right? How do we give velocity to independent thought and a platform, where people could drive revenue and earn what they should earn, based off their ideas? An analogy ... Or, sorry. I hate sports analogies, but an analogy- Patterson: I love sports analogies- Dicker: And hopefully, the audience does, as well. But the way that free agency works in sports, right, does not really exist when it comes to media or content. So each writer is somewhat valued the same. I mean, we look at certain nuances like Twitter followers, or tenure, right, or work that they've completed. But in sports, you have this opportunity where every single quarterback is rated, right, in front of the public based on their skillset, based on their salaries, based on all these different set of criteria. In media, that really doesn't exist, right. So what if we can build these sort of valuations and services to set up, so that we can actually pinpoint the value of content and the value of media that everyone is creating, which personally, I believe is extremely undervalued right now. And I'm sure you agree, as well, and anyone who works in this space constantly is telling people that, "Content cost money. Content is valuable. It costs a lot to create great content. If you want good content, you need to pay for it." But I think what we need to do is start proving these things, and in order to prove these things, we need to set up dynamics and platforms in order to do so, and that's kind of what we're doing here. For more on blockchain and other world-changing technology, subscribe to our Next Big Thing newsletter. Subscribe Also see: "
" The article Windows 10 wishlist: Five gripes Microsoft needs to take seriously was written on October 24 2017, in the domain of tech, which states: Microsoft is constantly tinkering with Windows 10, dropping in new features and swapping out old ones, but there are a few annoyances it seems unable or unwilling to fix. What ties most of the following complaints together is Microsoft's reluctance to let users choose for themselves, preferring instead to try to coerce users and control how they use their computer. Here are the five ways Windows 10 is broken that Microsoft needs to sort out. 1. Sort out the Control Panel / Settings app confusion Windows 10 adopts a rather confused approach to managing settings—splitting the options between the legacy Control Panel and the Settings app. Microsoft appears to be in the process of gradually migrating these options to the new Settings app, with each big feature update further diminishing the role of the Control Panel. However, having to juggle between the two menus is not particularly user friendly, and the changes in where settings lie is particularly aggravating for some users, as can be seen by the large number of forum posts this issue generates. You can use the Search function to locate the Settings you need, but there are still clearly a large number of users who still struggle to locate what they're looking for. 2. Give all users control over updates All Windows 10 users should be given control over when updates are applied. Currently there is no simple option for Windows 10 Home users to defer updates in the same way there is for users of Windows 10 Pro and Enterprise editions. SEE: Windows 10: Streamline your work with these power tips (free TechRepublic PDF) Users of non-Home editions can toggle options in menus to put off updates for months at the very least. However, Home users have to engage in hacky workarounds, such as setting their connection to 'Metered', which can have unwanted side effects due to Windows no longer downloading most Windows updates or Windows Store app updates. Microsoft should just relent and give the Home edition the same level of control over updates as is available to Pro users. 3. Allow users to opt out of feature updates altogether Not everyone appreciates Microsoft's twice-yearly feature updates messing with their desktop, and, for some, the smattering of new features are, at best, unnecessary. Microsoft should give all users the option to completely opt out of feature updates—the most recent being the Windows 10 Fall Creators Update— and instead only receive essential patches and fixes. As it is, users of Pro and Enterprise can defer feature updates for more than a year, so why not go one step further and let everyone opt out altogether. Does it really make sense for people who don't have the slightest interest in virtual reality to suddenly find their computer has a Mixed Reality Portal? There's even already precedent for the change, with Microsoft recently revealing that PCs with unsupported Intel Atom CPUs would not receive any feature updates post last summer's Anniversary Update. 4. Stop trying to force Bing and the Edge browser on users While makes sense for Microsoft to build an ecosystem of linked services, from both a practical and commercial point of view, it would be nice if Microsoft let users choose their sniearch engine when using Windows built-in search feature. 
Microsoft says that locking the Search function to Bing and its Edge browser is necessary to ensure the best possible experience for Windows 10 users. But given the relatively limited market share of Bing and Edge, it's clear that many users prefer competing products and services, so again it would be good if Microsoft would allow users to use their search engine or browser of choice. 5. Stop pushing the Microsoft Store so hard until it's better stocked Microsoft is determined to get more people to use the Microsoft Store, whether by locking Windows 10 S to using Store apps, or by releasing Store exclusives. However, despite launching in 2012, the Store's selection of apps is still fairly lacklustre, especially compared to the unfettered selection of software available for the Windows desktop. Microsoft faces a classic chicken-and-egg problem: without the userbase, it won't get the apps, but without the apps, it can't attract the userbase. Trying to forcibly create an audience by creating an OS locked to the Store isn't the answer, however; all it does is highlight just how sparse the offerings in the Microsoft Store are. Be your company's Microsoft insider with the help of these Windows and Office tutorials and our experts' analyses of Microsoft's enterprise products. Subscribe to our Microsoft Weekly newsletter. Subscribe More on Windows 10 Fall Creators Update "
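On the update-control gripe above, Pro and Enterprise machines expose their deferral settings through Windows Update for Business policies that end up in the registry. The following read-only Python sketch lists whatever policy values are present; the registry path is the commonly used policy location but should be treated as an assumption, and an empty result simply means no deferral policy is configured.

```python
import winreg

# Assumed location of Windows Update for Business policy values
# (e.g., feature-update deferral periods) on Pro/Enterprise machines.
POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

def list_update_policies() -> dict:
    """Return any Windows Update policy values currently configured."""
    values = {}
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            index = 0
            while True:
                try:
                    name, data, _type = winreg.EnumValue(key, index)
                    values[name] = data
                    index += 1
                except OSError:
                    break  # no more values under this key
    except FileNotFoundError:
        pass  # key absent: no update policies are configured
    return values

if __name__ == "__main__":
    policies = list_update_policies()
    if policies:
        for name, data in policies.items():
            print(f"{name} = {data}")
    else:
        print("No Windows Update deferral policies found.")
```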
" The article The top 5 iPhone X gestures every user should know was written on November 12 2017, in the domain of tech, which states: The iPhone X is built with gestures in mind, taking MultiTouch to the next level as it's now the main way to interact with the iPhone. Doing things as simple as double-tapping the Home button to show the App Switcher, using Reachability for items at the top of the screen, and Force Quitting apps has changed. These are the top five gestures that you need to know to take full advantage of the iPhone X. SEE: Mobile device computing policy (Tech Pro Research) 1: How to access the App Switcher on the iPhone X On previous versions of iOS hardware, accessing the App Switcher to swap to another app or force quit an app was as simple as double-tapping the home button; however, with iPhone X, the home button is no more. To access the App Switcher—whether you're in an app or on the Home Screen—you'll use the Home gesture (swipe up from the bottom), except you'll stop halfway up the screen and pause. The view will change to the App Switcher you know and love (Figure A). Figure A 2. How to force quit apps on the iPhone X Inside of the App Switcher, you may be wondering how to force quit an app, because in this new switcher, swiping up does not quit the app. To force quit an app, launch the App Switcher, then tap and hold on an app. This will enter editing mode where you can either choose the "-" button that appears in the corner of each open app, or swipe up as you would on a non-iPhone X iOS device. As we've mentioned in a previous article, you only need to Force Quit unresponsive apps. There is no need to force quit apps on a regular basis. 3. How to quickly swipe to the previously used app on the iPhone X The Home Indicator at the bottom of the screen gives you many capabilities at a single tap or swipe. Swiping from left to right on the Home Indicator will launch the previously used app from the App Switcher without the need to open the App Switcher. This feature is very useful on the iPhone and greatly improves multitasking capabilities because it lets you jump between apps quickly and efficiently. SEE: Cybersecurity in an IoT and mobile world (ZDNet special report) | Download the report as a PDF (TechRepublic) 4. How to enable Reachability on the iPhone X Reachability is an accessibility feature that can be enabled on previous iPhone models by double-tapping on the Home Button to slide the top of the screen down by half the screen to make top items more reachable while holding the device with one hand. With the Home Button gone, this feature has changed slightly and is not enabled by default. To enable Reachability in iOS 11.1.1, follow these steps. Open the Settings app. Navigate to General | Accessibility. Enable the option for Reachability (Figure B). Figure B To use Reachability once it's enabled, swipe down on the Home Indicator at the bottom of the screen. You'll see the current app slide half way down the screen, giving you easier access to reach the items at the top of the screen. 5. How to manage the iPhone X's Home Screen On an iPhone X, you may be wondering how to exit out of Home Screen arranging mode (aka jiggly mode). First, enter this mode by tapping and holding on an icon or folder on the Home Screen. The icons will begin wiggling, and you can rearrange them. To exit this mode, swipe up from the bottom like you would to exit to the Home Screen from an app. 
In iOS 11.1.1, a Done button appears in the top right corner of the screen in the status bar area. Tapping this button will also exit editing mode. For more iPhone tips and tricks, subscribe to our Apple Weekly newsletter. Subscribe Also see "
" The article HPE and NASA experiment with 'Spaceborne Computer,' a supercomputer that could help us get to Mars was written on August 14 2017, in the domain of tech, which states: Image: HPE Hewlett Packard Enterprise (HPE) and NASA will partner to send a supercomputer to space, the companies announced in a blog post on Monday. The "Spaceborne Computer" will be sent up to the International Space Station (ISS), first by being launched on the SpaceX CRS-12 rocket, and then sent via the SpaceX Dragon Spacecraft. It will be a year-long experiment, with aims to eventually land on a mission to Mars—a trip of the same length. Why? Advanced computing, currently done on land, could help astronauts survive in gruelling conditions by allowing for processing of information in real-time in space. The Spaceborne Computer comes equipped with the HPE Apollo 40 class systems, the blog post stated, which includes a high-speed HPC interconnect running an open-source Linux operating system. Importantly, this system could eliminate communication latencies, which can take up to 40 minutes, and can "make any on-the-ground exploration challenging and potentially dangerous if astronauts are met with any mission critical scenarios that they're not able to solve themselves," Alain Andreoli, SVP and GM of HPE's data center infrastructure group, wrote in the blog post. A computer this advanced has never run in space before, since most computing systems are not developed to survive in brutal conditions that include factors such as radiation, solar flares, micrometeoroids, unstable electrical power and irregular cooling. However, the software on this computer was developed to withstand these types of conditions, and its water-cooled enclosure for the hardware was created to help keep the system safe. The project, Andreoli wrote, has implications beyond what it can do for a voyage to Mars. "The Spaceborne Computer experiment will not only show us what needs to be done to advance computing in space, it will also spark discoveries for how to improve high performance computing (HPC) on Earth and potentially have a ripple effect in other areas of technology innovation," Andreoli wrote. SEE: How Mark Shuttleworth became the first African in space and launched a software revolution (PDF download) The 3 big takeaways for TechRepublic readers: On Monday, a blog post by HPE announced a partnership with NASA that will send a "Spaceborne Computer"—a supercomputer—on a year-long experiment to the ISS. The project is intended to eventually end up on a voyage to Mars—which also takes a year—with the intention of helping astronauts perform high-level computation in space, which could eliminate the current lag-time in communication between space and Earth. The project is intended to have broader implications for the kind of advanced computing that can be done in space. Find out The Next Big Thing from TechRepublic. Subscribe Also see "
" The article Apple's FileVault 2 encryption program: A cheat sheet was written on March 19 2018, in the domain of tech, which states: Apple's FileVault encryption program was initially introduced with OS X 10.3 (Panther), and it allowed for the encryption of a user's home folder only. Beginning with OS X 10.7 (Lion), Apple redesigned the encryption scheme and released it as FileVault 2—the program offers whole-disk encryption alongside newer, stronger encryption standards. FileVault 2 has been available to each version of OS X/macOS since 10.7; the legacy FileVault is still available in earlier versions of OS X. This comprehensive guide about Apple's FileVault 2 covers features, system requirements, and more. We will update this article if there's new information about FileVault 2. SEE: Encryption Policy (Tech Pro Research) Executive summary What is FileVault 2, and how does it encrypt data? FileVault 2 is a whole-disk encryption program that encrypts data on a Mac to prevent unauthorized access from anyone that does not have the decryption key or user's account credentials. FileVault 2 is a whole-disk encryption program that encrypts data on a Mac to prevent unauthorized access from anyone that does not have the decryption key or user's account credentials. Why does FileVault 2 matter? Encryption of data at rest or stored on a disk is often the last resort to ensuring that data is protected against unauthorized access. The recent high-profile security breaches make it even more important to know about encryption programs such as FileVault 2. Encryption of data at rest or stored on a disk is often the last resort to ensuring that data is protected against unauthorized access. The recent high-profile security breaches make it even more important to know about encryption programs such as FileVault 2. Is FileVault 2 available to all macOS users? All macOS users can enable FileVault 2 to protect their data. Some users running more recent versions of OS X can also enable disk encryption, while others using older versions of OS X will only be able to utilize legacy FileVault, which encrypts just their home folder. All macOS users can enable FileVault 2 to protect their data. Some users running more recent versions of OS X can also enable disk encryption, while others using older versions of OS X will only be able to utilize legacy FileVault, which encrypts just their home folder. What are the pros and cons to using FileVault 2? Some of the pros include it supports legacy hardware, and deployment may be locally or centrally managed by users or the IT department. One con is enabling FileVault 2 can have a negative impact on I/O performance of approximately 20-30% of modern CPUs. More pros and cons are detailed in this article. Some of the pros include it supports legacy hardware, and deployment may be locally or centrally managed by users or the IT department. One con is enabling FileVault 2 can have a negative impact on I/O performance of approximately 20-30% of modern CPUs. More pros and cons are detailed in this article. What are alternatives to FileVault 2? The main competitors are VeraCrypt, BitLocker, GnuPG, LibreCrypt, and EncFS. The main competitors are VeraCrypt, BitLocker, GnuPG, LibreCrypt, and EncFS. How can I get FileVault 2? FileVault 2 is baked in to all versions of macOS and supported versions of OS X. The encryption program is turned off by default, though it's easy to enable. Additional resources What is FileVault 2, and how does it encrypt the startup disk on Macs? 
FileVault 2 is an encryption program created by Apple that provides full-disk encryption of the startup disk on a Mac computer. By utilizing the latest encryption algorithms and leveraging the power and efficiency of modern CPUs, the entire contents of the startup disk are encrypted, preventing all unauthorized access to the data stored on the disk; the only people that can access the data have the account credentials that enabled FileVault on the disk, or possess the master recovery key. By enabling FileVault 2's whole-disk encryption, data is secured from prying eyes and all attempts to access this data (physically or over the network) will be met with prompts to authenticate or error messages stating the data cannot be accessed—even when attempting to access data backups, which FileVault 2 encrypts as well. Additional resources Why does FileVault 2 matter? FileVault 2, in and of itself, cannot prevent users from attacking your system or otherwise exfiltrating the encrypted data. The encryption program is not a substitute for proper physical, logical, and data security standards, but rather a part of the overall puzzle that makes up your device's security. Data encryption is often seen as the last resort because, if all other security features in place are compromised, encrypted data will still be unreadable by everyone except people that have the decryption key, or those that can brute-force their way past the algorithm, which is easier said than done. SEE: All of TechRepublic's cheat sheets and smart person's guides If the encryption standard in place is properly implemented and uses a strong, modern algorithm, and the recovery keys are not accessible or consist of a long, random key space, the attackers will have their work cut out for them. If the attackers gain access to the data sitting on the disk, they may be able to copy it, take it off your network, and even attack it directly, but they'll still be at an impasse if they cannot crack the encryption. And if the attackers cannot crack the encryption, your data will remain unreadable, and subsequently, of little to no real use or value. Additional resources Is FileVault 2 available to all macOS users? Users running OS X 10.7 (Lion) or later, all the way through the current version of macOS 10.13 (High Sierra), may enable and fully utilize the full-disk encryption capabilities of FileVault 2 on their desktop or laptop Mac computers. By default, the feature is disabled; however, it only takes accessing the System Preferences and clicking the Turn On FileVault 2 button to enable the feature and encrypt your whole disk. Encryption may be enabled by the user or managed by the administrators for company-owned devices. Administrators have set policies via Profile Manager and/or scripts that will enable FileVault 2 during deployment and implement institutional recovery keys that the company manages in order to recover encrypted data per device, if needed. SEE: Essential reading for IT leaders: 10 books on cybersecurity (free PDF) (TechRepublic) Once FileVault 2 is enabled, only the user with administrative privileges that enabled FileVault 2 with their account may decrypt the drive's contents. Additionally, a master recovery key is created during the initial process; users with either of those keys may be the only ones to decrypt the volume and read the contents of the drive. Additional resources What are the pros and cons to using FileVault 2? 
The pros to using FileVault 2: It's a native Apple solution that is designed by Apple for Apple computers. FileVault 2 supports legacy hardware, even for devices that are no longer officially supported by Apple. Deployment of FileVault 2 may be locally or centrally managed by users or the IT department. Whole-disk encryption works to safeguard all data stored on disk now and in the future. Backup of encrypted data works seamlessly with Time Machine to create automated backup sets. Disks encrypted with FileVault 2 must first be unlocked by user accounts that are "unlock-enabled"; these are typically accounts with administrative privilege, preventing non-admin accounts from accessing the disk's contents, regardless of the ACL permissions configured. FileVault 2 uses a strong block-cipher mode, XTS, based on the AES algorithm using 128-bit blocks and a 256-bit key. The cons to using FileVault 2: Legacy FileVault (or FileVault 1) does not encrypt the whole disk—only the contents of a user's home folder. This affects legacy hardware that does not support the features in FileVault 2. Backing up encrypted data with Time Machine can only be done when a user is logged out of the session. For on-the-fly backups, the destination path must be a Time Machine Server, which requires macOS Server to perform online backups. The encryption passphrase used to encrypt the disk is the same as the end-user's password that enabled FileVault 2. If the password is compromised, the disk may be decrypted and the data may be compromised. Enabling FileVault 2 can have a negative I/O performance impact of approximately 20-30% on modern CPUs, and it noticeably worsens performance on older processor hardware. If the passphrase or recovery key must be changed, the entire volume will need to be decrypted and have the encryption process run again with the new key. Any device with FileVault 2 enabled must be unlocked by an admin-credentialed account prior to being accessed or used by a non-admin account. If the device is not unlocked, non-admin accounts will not be able to use the computer until it is first successfully unlocked. Individual files, folders, or any other kind of data cannot be encrypted on the fly. Only data that resides on the local disk or FileVault 2-encrypted volumes may be encrypted in its entirety. Additional resources What are some of the alternatives to FileVault 2? VeraCrypt is free, open source disk encryption software that provides cross-platform support for Windows, Linux, and macOS. It was derived from TrueCrypt, a full-disk encryption application whose creators discontinued support after a security audit revealed several vulnerabilities in the software. Having acquired the use of TrueCrypt, VeraCrypt forked the former app and corrected the vulnerabilities, while adding some changes to strengthen the way in which the files are stored. VeraCrypt creates a virtually encrypted disk within a file and mounts it as a disk that can be read by the OS. It can encrypt the entire disk, a partition, or storage devices, such as USB flash drives, and provides real-time, on-the-fly encryption, which can be hardware-accelerated for better performance. It also supports TrueCrypt's hidden volume and hidden operating system features. BitLocker is Microsoft's full-disk encryption feature included in supported versions of Windows Vista and later. 
Using default settings, BitLocker uses AES encryption with XTS mode in conjunction with 128-bit or 256-bit keys for maximum protection, especially when leveraged with a TPM module to ensure integrity of the trusted boot path, which prevents many physical attacks and boot-sector malware from compromising your data. When used on a computer in an Active Directory environment, BitLocker supports key escrow, which allows a copy of the recovery key to be stored with the Active Directory account. In the event that data needs to be recovered, administrators may retrieve the key. GnuPG is based on the PGP encryption program created by Phil Zimmermann, which was later bought by Symantec. Unlike Symantec's offering, GnuPG is completely free software and part of the GNU Project. The software is command-line based and offers hybrid encryption through the use of symmetric-key cryptography for performance, and public-key cryptography for the ease of exchanging secure keys. While the lack of a GUI may not be for everyone, the program's flexibility allows for signed communications, file encryption, and, with some configuration, disk encryption to protect data. Dubbed the universal crypto engine, GnuPG can run directly from the CLI, from shell scripts, or from other programs, often serving as a backend for other applications. LibreCrypt is a transparent full-disk encryption program that fully supports Windows and contains partial support for Linux distributions. It is open source and has an online community of users who are committed to resolving issues and introducing new features. Often cited as the easiest-to-use encryption program for Windows, it can create encrypted containers as well, mounting them as removable disks in Windows Explorer for easy access. In addition to the multitude of supported encryption and hashing standards and modes, it also supports smart cards and security tokens to authenticate users, and it can encrypt data at the file level, the partition level, or across the entire disk. EncFS is an encrypted filesystem that runs in user space, using the FUSE library. The FUSE library acts as an interface for filesystems in user space that allows users to mount and use filesystems not natively supported by the host OS. FUSE/EncFS are open source releases and support Linux, BSD, Windows, Android devices, and macOS. It is also available in a number of languages, as it has been translated by community members. With active community support on GitHub and regular updates, EncFS offers users the ability to create a filesystem that can be mounted and used to store secure data files, and then it may be unmounted to protect against offline attacks and unauthorized user access. Additional resources How can I get FileVault 2? FileVault 2 is in all versions of OS X from 10.7 through macOS 10.13—it just needs to be enabled, as the service is turned off by default to allow end users to perform the initial setup process, which allows them to create a master recovery key. This key will act as a backup in the event that they become locked out of their account and must recover data via an alternate path. Users of OS X prior to 10.7 may use Legacy FileVault, or FileVault 1 (the initial offering of the encryption application), which only encrypts a user's home folder and not the entire disk. This must be enabled per user on that device and will still leave any data not stored within an encrypted home folder available to unauthorized access. 
The good news is that as long as your Apple computer supports a recent version of OS X or the modern releases of macOS, you can upgrade your Mac's operating system at any time to a newer version to enjoy the benefits of FileVault 2's enhanced security. Additional resources For the latest IT security news, tips, and downloads, subscribe to our Cybersecurity Insider newsletter. Subscribe "
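To make the XTS-AES mode mentioned in the pros list a little more concrete, here is a small sketch using the third-party Python cryptography package. It illustrates the cipher mode only and is not FileVault 2's implementation: real full-disk encryption derives keys from the user's password, manages a recovery key, and ties the XTS tweak to the disk sector being encrypted.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# 256-bit key for AES-128 in XTS mode (internally split into two 128-bit halves).
key = os.urandom(32)

# XTS uses a 16-byte "tweak"; disk encryption typically derives it from the
# sector number so identical plaintext encrypts differently in each sector.
tweak = os.urandom(16)

# A fake 64-byte disk sector (XTS needs at least one 16-byte block).
plaintext = b"Sixteen byte blk" * 4

cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))

encryptor = cipher.encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = cipher.decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()

assert recovered == plaintext
print("ciphertext:", ciphertext.hex())
```

FileVault's value over a hand-rolled scheme like this lies in the key management around the cipher: per-account unlock, the master recovery key, and transparent encryption of every sector on the startup disk.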
" The article How Google Fiber turned 2017 into its comeback year was written on October 18 2017, in the domain of tech, which states: Google Fiber showed new life in 2017, after a near death experience in late 2016. The fiber internet pioneer launched in three new cities—Huntsville, AL, Louisville, KY, and San Antonio, TX—this year. It also began to heavily rely on shallow trenching, a new method of laying cables, to expedite the construction process. SEE: Photos: How Google Fiber is using 'shallow trenching' to outbuild its gigabit rivals "We're very pleased with the response from residents in these markets—along with our other existing Google Fiber cities, where we worked hard throughout the year to bring Fiber service to even more people in many more neighborhoods," a Google Fiber spokesperson told TechRepublic. The comeback happened after a construction halt and the CEO stepping down in October 2016, which left some wondering if Fiber was on its last breath. SEE: Internet and Email usage policy (Tech Pro Research) But 2017 wasn't entirely a year of redemption. In February, hundreds of Fiber employees were moved to new jobs at Google. And Gregory McCray left the role of CEO in July after only holding the position for five months. And internet experts still have their doubts. Chris Antlitz, a senior analyst at Technology Business Research, labelled Fiber's year as "not very good." Jim Hayes, president of the Fiber Optic Association, called Google Fiber a "very distant player" in the fiber market. However, Antlitz added that, for Alphabet—the parent company of both Google and Google Fiber—that means they're just not growing as fast as they wanted to. Google Fiber has still had an impact this year, he said. Fiber set a new bar for broadband by showing incumbent internet service providers (ISPs) that it is economically feasible to bring 1 gigabit internet to consumers, Antlitz said. Since Google Fiber led a connectivity renaissance in 2011 when it launched in its first city, Kansas City, KS, top telecom providers have been in an arms race to upgrade their broadband pipes to accommodate 1 gigabit, Antlitz said. Google Fiber's presence in the market has caused competition that has forced other fiber providers like Verizon and AT&T Fiber to offer cheaper, faster service. Adding a second provider to a market can reduce prices by around one-third, according to a study by the Fiber to the Home Council. SEE: Google Fiber 2.0 targets the city where it will stage its comeback, as AT&T Fiber prepares to go nuclear AT&T has been particularly competitive, analysts say. They've been expanding in current and prospective Google Fiber cities, including adding new neighborhoods in San Antonio months before Google Fiber arrived. In Louisville, AT&T sued the Louisville Metro Government over its "One Touch Make Ready" ordinance, which allows Google to use existing poles to install its technology without permission from the telecom company that owned the poles. The lawsuit was dismissed in August, and AT&T said it wouldn't appeal the dismissal in October. A TechRepublic investigation found that AT&T has talked a big game about its buildout in Louisville, but has dragged its feet in rolling out gigabit internet to customers and has signed up very few households. It's this kind of activity that has gotten AT&T's gigabit strategy labeled "fiber-to-the-press-release." It's unclear what Google Fiber's 2018 will look like. 
The company's map of Fiber cities doesn't yet list an upcoming city where Fiber will be heading next. Eight potential cities—Portland, OR, San Jose, CA, Los Angeles, CA, Dallas, TX, Oklahoma City, OK, Tampa, FL, Jacksonville, FL, and Phoenix, AZ—are listed as places the company is exploring. SEE: Louisville and the Future of the Smart City (a ZDNet/TechRepublic special report) William Hahn, an analyst at Gartner, said going to even one-third of those cities next year would be impressive for Google Fiber. However, he said he doesn't foresee a shift in the market in the next two years. The next big gamechanger? The rollout of 5G, which will give providers more wireless options to play with in cities and hard-to-reach rural areas. In five US cities in 2018, Verizon plans to roll out 5G fixed wireless, which will compete directly with fiber in speed and low latency. Antlitz said it's probable that Fiber will collaborate with incumbent ISPs to target unserved and underserved communities, including those in emerging markets and harder-to-reach rural spots. "I think they don't want to be an ISP," Antlitz said. "They're trying to prove a point." The point? That faster, 1 gigabit internet can be affordable—and that the existing ISPs just needed a push. "They got what they wanted," Antlitz added. Stay informed, click here to subscribe to the TechRepublic Next Big Thing newsletter. Subscribe Also see "
" The article What is hyperconverged infrastructure, and why should you care? was written on August 28 2017, in the domain of tech, which states: Spend more than an hour in a meeting with any major software company and you're bound to hear the buzzword "hyperconverged infrastructure," but what is it, and why should you care? Industry analyst Zeus Kerravala explained it for us in a question-and-answer session. We played the role of skeptic. SEE: Virtualization policy (Tech Pro Research) TechRepublic: We think we understand what hyperconverged infrastructure means, but how would you explain it? Zeus Kerravala: "It's kind of a weird term. There was already a converged infrastructure market [lacking the software aspect] when this technology came around. Hyperconverged platforms are turnkey products that include all the hardware and software one needs to run a contained little data center in a box. ... When you look at running data center infrastructure there's a lot of different choices for buyers. If you use Cisco networking, EMC storage, and Dell computing, which is a pretty standard thing, there's over 800 configurations. [In HCI] the vendor's done all the heavy lifting. They're not plug-and-play... it's data center technology, nothing's ever going to be plug-and-play. But customers have told me the deployment time for these is days vs. months if you're trying to cobble it all together yourself." SEE: The cloud v. data center decision (ZDNet special report) | Download the report as a PDF (TechRepublic) TechRepublic: Do you think most corporate sysadmins and CIOs understand this? Zeus Kerravala: "I'm not sure the CIO does. I think technology has been somewhat niche. It's been used primarily for virtual desktop deployments. Those are workloads that tend to be demanding... unified communications are a likely next thing. I don't really understand where the 'hyper' came from, to be honest with you." TechRepublic: Most good ideas in information technology are cyclical. How much of this is truly novel and how much is just a new name? Zeus Kerravala: "We used to have converged platforms a long time ago, and we called them mainframes. The reason the hyperconverged market exists is to simplify the deployment of all the stuff we need to run data centers." TechRepublic: Why is this happening now? Zeus Kerravala: "I talk to CIOs. More and more, CIOs are less concerned about the technical aspects of running stuff. They want stuff to work so they can run the business. There's a theme of digital transformation that's cutting across all businesses. If you talk to a CEO about running a business, it's about speed today. It's Darwinism." SEE: Digital transformation: A CXO's guide (ZDNet special report) | Download the report as a PDF (TechRepublic) TechRepublic: What are the risks of changing from traditional to hyperconverged infrastructure? Zeus Kerravala: "I haven't really talked to anyone who hasn't had a good experience [except] using the technology for the wrong workloads. If you're going to run hyperconverged infrastructure, the development has to be done on a product that's at least similar from a hardware perspective." TechRepublic: What about hardware upgrades? Zeus Kerravala: "Applications that have the most demanding hardware requirements, I'd probably keep those on a platform that I have a little bit of control over, and I can upgrade the processors when I need to. If you wouldn't run it on a virtual machine, then certainly don't run it on this." 
TechRepublic: Which companies are offering hyperconverged infrastructure the right way? Zeus Kerravala: "The market leader right now in terms of brand and share is still Nutanix. They've done a lot of work in software. The one to watch is Dell/EMC but for specific use cases [such as with VMware's vSphere]. If your hypervisor is Microsoft or Citrix, then I might look at a different platform... 8kpc is a startup. They've done a lot of work on the hardware optimization phase." TechRepublic: Which companies aren't doing so well at it? Zeus Kerravala: "I think HPE is bit of a confused company right now... Lenovo is another one that I've expected more of by now." TechRepublic: What is your advice for customers considering a hyperconverged infrastructure product? Zeus Kerravala: "There's a lot of products on the market and they all kind of pitch the same message. But the performance from box to box, from vendor to vendor, is going to be quite different depending on what you're running on it. Do your own testing. How does it work in a hybrid cloud situation? I'd also want to know from a roadmap perspective about flash storage, 100Gb Ethernet, and then NVMe." TechRepublic: What else should people know? Zeus Kerravala: "Try to have a good understanding of what it means to the operational team. Things may be easy to deploy initially but take a look at the ongoing management. That's really going to determine whether you get value out of these products or not." For more networking, storage, and enterprise hardware news, subscribe to our Data Center Trends newsletter. Subscribe Also see "
" The article How to choose the right wearable device for business vs. fitness was written on November 6 2017, in the domain of tech, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: Business professionals could gain from wearing a wearable device, but some have more fitness features than business features. Professionals should look for wearables with different ways to stay connected, basic activity tracking, and a fashion-forward look. Wearables, like smartwatches and fitness trackers, are popular with business professionals, and for good reason. The devices can collect data and provide insights, allowing wearers to track their fitness and productivity to reach their goals faster. But some devices may not work the best for business professionals. They may not have enough ways to stay connected in terms of communications, or they may be too focused on physical goals like meditation. And some may stand out too much for professionals in formal business attire to feel comfortable wearing them. SEE: Wearable Device Policy (Tech Pro Research) How to choose a wearable for business Connectivity is the first thing to consider when selecting a wearable for business purposes. Some options can connect to a smartphone, while others work outside of a cellular network. Some professionals need constant access to business communication, and selecting a wearable that works in tandem with a smartphone can provide that. Think about what you want to accomplish with your wearable. Do you want to just be able to see notifications, or do you need to be able to answer texts and emails as well? What about activity tracking? It can help you understand how you spend your hours to find ways to become more productive, or you can use it for fitness purposes as well. Apps and integrations can be helpful for business professionals, so check out what is available. Some, especially ones connected to a smartphone, have multiple options, while others have fewer choices. Integrations can streamline things between your wearable and other devices, potentially making you more efficient. Apps can offer new ways to boost productivity. Mindfulness features are also helpful, especially in high-stress jobs or industries. A sleep tracker can help understand if you're sleeping long or well enough. Finally, looks aren't everything, but some wearables can stand out when worn with business attire. More wearables are adopting the look of traditional watches, with leather bands and sleek faces. A wearable won't do much if you don't wear it because of its look. How to choose a wearable for fitness First, you should consider if you want one device to carry from work to the gym, or if you want separate options. Some popular devices, like the Apple Watch, can work for both environments due to the amount of features and connectivity options. Much like with business wearables, you need to consider what exactly the fitness tracker needs to do. Most will offer the same baseline metrics, but others offer more analytics. How much insight do you want into your workouts? Some only need simple step tracking, but someone training for a marathon may need more detail. In what physical environment are you going to use the device? Whatever the answer, the tracker should be ready. For example, if you're a swimmer, you obviously need a water-resistant device. Runners may want a device with a built-in GPS so they can track their runs. You should also consider the tracker's connected app, if it has one. 
What analysis and insights can you get on the app? Does it have features to track food and water intake? Like with business wearables, integrations may also be important, so review the offerings. Stay informed, click here to subscribe to the TechRepublic Tech News You Can Use newsletter. Subscribe Also see "
" The article How to sort and delete sets of Gmail messages: 4 steps was written on May 30 2018, in the domain of tech, which states: You might want to mass delete email from Gmail for many reasons: To remove non-work-related messages from an account, to achieve "inbox zero" as part of a personal productivity effort, or—more mundanely—to reduce the storage space used by attachments. Some people pursue #NoEmail—and start to treat email as an ephemeral communication channel instead of a permanent archive. Before you start to mass delete items from Gmail, I recommend that you export your current email data. To do this, use Google Takeout at https://takeout.google.com. Choose the "Select None" button, then scroll down the page to Mail. Move the slider to the right of Mail to "on." (You may export just some of your email: Select the down arrow to the left of the slider, then choose one—or more—Gmail labels to select items tagged with those labels to export.) Select Next at the bottom of the page, then choose the format, file sizes, and storage action for your export. Wait to start your deletions until you've either downloaded or verified that your exported email has been stored. (Note: If you use G Suite, an administrator has the ability to disable access to Takeout. If that's the case, talk with your administrator about backup before you begin.) After you backup, cycle through the following four steps to move sets of email to the Gmail trash. 1. Search I've found that typing search terms into the Gmail search box in the desktop Chrome browser to be the most efficient way to find and select sets of emails. And while Google gives you a long list of search operators, I suggest you start with the following: Email address . Enter to: or from: followed by an email address to find all of the email sent to or received from an address. . Enter to: or from: followed by an email address to find all of the email sent to or received from an address. Subject . Enter subject: followed by a word (or a phrase in quotes) to find all email that contains the word or phrase specified. . Enter subject: followed by a word (or a phrase in quotes) to find all email that contains the word or phrase specified. Date . While there are several time-search options, try before: or older_than: first. The first locates items prior to a specified data, while the latter locates item older than a certain number of days, months, or years (e.g., 3d, 1m, or 7y for 3 days, 1 month, or 7 years, respectively) from the current date. . While there are several time-search options, try before: or older_than: first. The first locates items prior to a specified data, while the latter locates item older than a certain number of days, months, or years (e.g., 3d, 1m, or 7y for 3 days, 1 month, or 7 years, respectively) from the current date. Size. Locates email larger than a specific size. For example, larger:20M finds items larger than 20Mb. Often a simple search may be all you need to locate a set of email you no longer need. For example, you might not need to keep receipts from some vendors (email address), accepted calendar invitations (subject), email older than 3 years (date), or large files (size) stored elsewhere. 2. Review and refine Review the search results to see if you wish to keep email found with a simple search. If no, move on to step no. 3. If you see emails that you wish to keep among the results, you'll need to refine your search. You can combine multiple search terms. 
For example, search for both an email address and a date: from:andy@pa311.com older_than:1y This would find items older than one year from the current day. Or, add a subject as well, to narrow the results further: from:andy@pa311.com older_than:1y subject:"Weekly meeting" You may use the - character to exclude a search term (or terms). For example: to:andy@pa311.com older_than:1y -subject:"Quarterly review" This would find items older than a year sent to a specific email address, but would exclude any emails with the subject "Quarterly review." If more than one screen of results is indicated, select the arrow in the upper right area to review additional screens of email search results. Refine and review the results until you're confident that all the email found by your search is email you wish to delete. 3. Select / Select All Select the box at the top of the column above your email search results to select all of the email displayed. If your search returns more email than is displayed on the current screen, you'll see a message above the list of email that gives you the option to "Select all conversations that match this search." Click the words to select all conversations that match your search terms. 4. Move to Trash Select the trashcan icon to delete the selected email. Repeat for various terms Repeat your search to find, select, and delete as many sets of email as you wish. When I help people get control of their email, we often search for things such as: Old promotional emails, newsletters, and updates Email no longer needed from specific clients, vendors, or colleagues System status notices (e.g., update notifications and system down/up notifications) Outdated social media or account sign-in notifications Email related to prior jobs (including paid and volunteer roles) Tip: Use a label to exclude a set from a search Often, I find it helpful to label a set of email so that I can always exclude that set of email when I work through the email deletion process. For example, you might want to keep all email from a specific person (or several people). To do this, first create a Gmail label, such as "Never delete." Then, search for a colleague's email address. Select all email from that person, then select the label icon, choose the label you created (e.g., "Never delete"), then select "Apply" at the bottom of the column. Repeat this process for as many criteria as you wish. Then, when you do searches, always exclude labels that match the selected set. For example: older_than:1y -label:"Never delete" This would return all emails older than a year, while excluding all emails labeled "Never delete." Optional: Delete Trash At this point, you're done. Gmail will remove items left in the trash after 30 days. If you really want things deleted now, you can always navigate to the trash, select all items (and select all conversations in the trash), then choose "Delete forever." G Suite controls A G Suite administrator has at least two significant options available to manage mail, as well. First, an administrator can set a Gmail auto-delete policy (from admin.google.com, sign in, then Apps > G Suite > Gmail > Advanced settings > Compliance: Email and chat auto-deletion) for messages to either be moved to trash or deleted after a specified number of days. The administrator also may specify that emails with a specific label (or labels) will not be auto-deleted. 
Second, a G Suite administrator can configure Google Vault, which gives the organization a sophisticated set of controls to preserve, search, and export email communications for legal and/or compliance purposes. (Vault is included with Business and Enterprise edition licenses.) Your thoughts? Do you maintain a pristine, close-to-zero Gmail inbox? Or do you archive everything forever? How often do you delete sets of messages from Gmail? Let me know in the comments — or on Twitter (@awolber). Subscribe now to our Google Weekly newsletter to stay informed of useful Google news and tips! Subscribe Also see "
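Because the whole workflow above hinges on composing search operators, it can help to generate the query strings programmatically, especially when you reuse the same exclusions (such as a "Never delete" label) across many searches. The sketch below only builds the query text; the names and criteria are illustrative, and the resulting string is what you would paste into the Gmail search box (or pass as the q parameter if you later automate this with the Gmail API).

```python
from typing import Optional

def gmail_query(sender: Optional[str] = None,
                subject: Optional[str] = None,
                older_than: Optional[str] = None,
                larger_than: Optional[str] = None,
                exclude_labels: tuple = ()) -> str:
    """Compose a Gmail search string from the operators described above."""
    parts = []
    if sender:
        parts.append(f"from:{sender}")
    if subject:
        parts.append(f'subject:"{subject}"')
    if older_than:
        parts.append(f"older_than:{older_than}")  # e.g. 3d, 1m, 7y
    if larger_than:
        parts.append(f"larger:{larger_than}")     # e.g. 20M
    for label in exclude_labels:
        parts.append(f'-label:"{label}"')
    return " ".join(parts)

if __name__ == "__main__":
    # Old, large mail from a hypothetical sender, keeping anything
    # tagged with the protective label.
    print(gmail_query(sender="newsletter@example.com",
                      older_than="1y",
                      larger_than="20M",
                      exclude_labels=("Never delete",)))
    # -> from:newsletter@example.com older_than:1y larger:20M -label:"Never delete"
```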
" The article Google Cloud Speech API gets enterprise upgrade with new tools and 30 more languages was written on August 14 2017, in the domain of tech, which states: On Monday, Google announced new updates to its Cloud Speech API that could help make it a more effective tool for business users. According to a Google blog post, the API is getting a new feature called word-level timestamps, along with support for 30 new languages and three hour files. For those unfamiliar, the Google Cloud Speech API uses neural network models to allow developers to convert audio to text. It's powered by machine learning, and can return its results in real-time. Word-level timestamps, the post said, was the most requested feature for the API from developers. Essentially, this feature adds a timestamp for each word it identifies in a given transcription. "Word-level timestamps let users jump to the moment in the audio where the text was spoken, or display the relevant text while the audio is playing," the post said. SEE: How we learned to talk to computers, and how they learned to answer back (PDF download) One of the customers cited in the post, Happy Scribe, uses the word-level timestamps to lower the time it takes for them to proofread the transcriptions they offer their customers. Another firm, VoxImplant, uses it to better analyze recorded phone conversations between two parties. As part of a broader announcement around Google's voice input capabilities, the Cloud Speech API will now offer support for 30 additional languages, bringing the total number supported up to 119. The languages will first be offered to Cloud Speech API customers, but will eventually be supported on other Google products, like Gboard, as well. As noted by ZDnet's Stephanie Condon, the extended language support could help Google win over some customers in emerging markets. The full list of languages that work with the Cloud Speech API can be found here. Additionally, the post said, the Cloud Speech API will now support files that are longer than three hours in duration, an increase from the previous limit of 80 minutes. Files that are longer than three hours can be supported on a "case-by-case basis by applying for a quota extension through Cloud Support." The 3 big takeaways for TechRepublic readers Google Cloud Speech API now supports 119 languages and three hour long files, and offers a new word-level timestamp feature. Word-level timestamps were the no. 1 most-requested feature, allowing developers to jump to the moment where a certain word was said in an audio transcript. The features could help make the API more business-friendly, and the language support could win Google some business in emerging markets. Stay informed, click here to subscribe to the TechRepublic Google Weekly newsletter. Subscribe Also see "
" The article How Australia's backdoor proposal could threaten security for the rest of the world was written on August 14 2017, in the domain of tech, which states: Should world governments have backdoors to encrypted applications? Australian Prime Minister Malcolm Turnbull recently made a proposal to ensure that law enforcement can still gain access to information despite its' protection by encryption. Access Now's US Policy Manager Amie Stepanovich met with TechRepublic's Dan Patterson to explain why that may be a bad idea. "The problem with [this proposal] is that by allowing that law enforcement access, what you're doing is you're creating a security technology that fails sometimes," Stepanovich said. See: IT security and privacy: Concerns, initiatives, and predictions (Tech Pro Research) Encryption is the first defense against people trying to compromise your personal data. Since encryption isn't a failproof technology, building it in can lead to more data breaches and crime because it's facilitating a much less secure internet, she said. The policy requested by Turnbull affects the rest of the world because Australia's encryption policy could set a precedent for other countries. Stepanovich noted that it's important to understand that this policy would likely not stop criminals or terrorists from accessing secure communications technologies. The math used in encrypted applications, she said, is not subject to the whims of politicians, which means bad actors will still be able to cover their tracks. See: Encryption Policy (Tech Pro Research) In the past, other governments have pursued similar policies. For example, some require keys to be stored in third-party facilities and are easily stolen. Some countries restrict the strength of encryption or even prevent the use of end-to-end encryption. These tactics are easily circumvented, said Stepanovich. She reiterated the need for privacy as a tool that protects consumer privacy, communication between journalists and sources, whistleblowers, and corporate intellectual property. "We're at a time where we need to be facilitating research into and development of the strongest security tools possible," she said. "We need to be investing in security as much as we possibly can," said Stepanovich. Strengthen your organization's IT security defenses by keeping abreast of the latest cybersecurity news, solutions, and best practices. Subscribe Also see: "
" The article How one AI company is bringing medical care to millions of rural Chinese residents was written on December 11 2017, in the domain of tech, which states: China's population, which in 2016 had 793 million urban residents and 590 rural residents, is spread out over a land mass of 3.7 million square miles. By 2010, 93% of the rural population had healthcare coverage, but providing rural medicine and timely healthcare to rural regions persist. This is where analytics can make a difference. "We wanted to take analytics, artificial intelligence and deep learning technologies and use them to better understand different medical conditions, how to diagnose them, and how to treat them," said Kuan Chen, founder and CEO of Infervision, a Chinese artificial intelligence and deep learning company that specializes in medical image diagnosis. Analytics, artificial intelligence, and deep learning are put into play by analyzing medical images and reports on different pathological conditions, and then coming up with different models and sources of treatment and medical interventions based on common patterns that are assembled from studies of thousands of patients in China's urban hospitals. "These models use deep learning to 'learn' from the data and continuously improve their diagnostic capabilities," said Chen. The first disease that Chen targeted was lung cancer, with the software being able to locate hard-to-detect or hidden nodules in the lungs that could prove to be cancerous. Now the task at hand is providing a similar diagnostic and medical intervention tool for strokes, which can especially be useful in rural areas where qualified medical practitioners are scarce. SEE: IT leader's guide to deep learning (Tech Pro Research) "By studying the stroke condition of a patient, the analytics can determine what is the optimum time table for treatment, and how aggressively the stroke should be treated," said Chen. In short, the AI and analytics become a second pair of eyes for radiologists against which they can cross-check their own diagnoses. How important is this? "In many rural areas in China, there are no trained radiologists who can help stroke victims," said Chen. "And in other areas of the world, like the US, radiologists make an average of $375,000 a year, so they are very expensive." Chen says that the feedback he gets from hospitals is that younger radiologists and medical practitioners rely heavily on AI, while older practitioners prefer to use it as a second opinion that they cross-check against their own. "In a stroke, you want to respond to the condition as quickly as possible," said Chen. "It might take 30 to 35 seconds in a standard process to generate a report on the condition so treatment can be determined. With our tool, that time is cut to less than three seconds." The use of deep learning and expanded analytics also expand the spectrum of diagnosis, which can lead to better results. "In one non-stroke case that involved diagnosis and treatment of a bone fracture and a degenerated area of bone, the standard approach is to treat the affected area itself," said Chen. "With analytics and AI, a system can focus on different areas of the body that are far removed from where the problem is to see if these other areas could be affecting the condition. If it is a problem that is being generated far from the fracture itself, the analytics allow us to treat causes of the condition, and not just symptoms." 
SEE: How to implement AI and machine learning (free PDF) (ZDNet/TechRepublic special report) Here are some best practices hospitals and clinics can adopt as AI and deep learning tools evolve: Deploy the tool where help is needed most. If there is an acute shortage of medical practitioners in a specific region, analytics and AI can help in situations like stroke intervention and treatment, and the chances for success for patients will improve. Use the tool for training. Radiologists and medical practitioners must develop knowledge and experience before they can become expert diagnosticians. An analytics and deep learning tool can assist in the training process because users can compare their own findings against what the system finds in numerous scenarios. Learn to expect the unexpected. You might think you are going to treat one condition and end up treating another. The bone fracture that Chen mentioned, where the system actually found the causal problem in a different area of the body, is a prime example. This is why medical practitioners should keep their minds open. Never forget that AI and deep learning tools are still developing. Just because a system uses AI, deep learning and analytics doesn't mean that it is always right. Medical practitioners should use these systems as assistants and not as undisputed authorities, because there are some areas where a machine can't be a replacement for human thought and reasoning. For more about AI, big data, and the latest innovations, subscribe to our Next Big Thing newsletter. Subscribe Also see: "
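The article's description of models that 'learn' from labeled scans maps onto a standard supervised image-classification setup. The sketch below is a deliberately tiny PyTorch classifier for illustration only; it is not Infervision's model, and the patch size, two-class labels, and architecture are assumptions chosen to keep the example short.

```python
import torch
import torch.nn as nn

class TinyNoduleClassifier(nn.Module):
    """Toy CNN that maps a single-channel scan patch to nodule / no-nodule."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)     # two classes

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

if __name__ == "__main__":
    model = TinyNoduleClassifier()
    fake_batch = torch.randn(4, 1, 64, 64)   # 4 fake 64x64 grayscale patches
    fake_labels = torch.tensor([0, 1, 0, 1])  # hypothetical ground truth
    logits = model(fake_batch)
    loss = nn.CrossEntropyLoss()(logits, fake_labels)
    loss.backward()                           # one illustrative training step
    print("logits shape:", logits.shape, "loss:", float(loss))
```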
" The article A new slim version of Windows 10 called "Lean" could be perfect for low-end PCs was written on October 24 2017, in the domain of tech, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: A new stripped-down Windows 10 build called "Lean" was discovered in the latest Insider preview of Windows 10. It lacks many Windows 10 features and has a 2GB smaller installation size. Microsoft hasn't said what Lean's purpose is, but it appears to be for lower-end machines or those that need to be locked down from user tampering. Microsoft's latest Windows Insider skip ahead build contains a new version of Windows 10 called Windows 10 Lean, which cuts the installation size by 2 GB. Discovered by Twitter user Lucan, Windows 10 Lean cuts out several Windows 10 features: desktop wallpaper is disabled by default, the Microsoft Management Console and registry editor are missing, drivers for CD and DVD drives can't be installed, Microsoft Edge doesn't allow downloads, and Microsoft Office is missing as well. At first glance it may seem that Windows 10 Lean is an alternative to Windows 10 S (which only allows app installation from the Windows Store), but Lucan quickly dismissed that by saying that those restrictions don't apply, as he was able to run applications normally locked to Windows 10 S users. What is Windows 10 Lean's purpose? The Twitter discussion growing up around Lucan's discovery of Windows 10 Lean is devoid of one important thing: an explanation as to its purpose. Mary Jo Foley from TechRepublic sister site ZDNet speculated that it was a version of Windows 10 S for home or enterprise, but Lucan said he doesn't think that's the case. Windows 10 S, he said, is more like a set of restrictions on top of a standard Windows 10 install, which Lean definitely isn't. Another Twitter commenter said it may be ideal for educational use, as schools often have older computers that need a smaller install. Add to that the heavy restrictions on what a user can do in Windows 10 Lean (no downloading, no Regedit, etc.) and you have a relatively resilient OS that has lower-end hardware requirements. SEE: Securing Windows policy (Tech Pro Research) WIndows 10 Lean could make a great OS for any systems that see a lot of user contact: Loaner machines, kiosks, sales floor demos, and other specific roles would be a great fit for Lean. Anyone who has ever managed computers that see a lot of public contact knows they have to be locked down, and Windows 10 Lean seems designed for that particular purpose. There are a lot of things users can't do in a base install of Lean, leaving it up to an administrator to pre-load an installation with certain software or settings that would be largely unalterable. Given the limits of WIndows 10 Lean it's likely that it's designed to save space, be a quick install, and be customized as an image prior to being installed. Lean images could be configured to suit specific roles, and users would be largely unable to damage them. We won't know what Windows 10 Lean is really designed for until Microsoft says so, but If you want to check it out now you can do so in Redstone 5 Insider preview build 17650, available now to Windows Insider members. Get a roundup of the biggest Microsoft news of the week in your inbox: Subscribe to our Microsoft Weekly newsletter. Subscribe Also see "
" The article Essential Phone PH-1 review: The many positives, plus a few drawbacks was written on December 11 2017, in the domain of tech, which states: Image: Essential Before I made the purchase of an Essential Phone, like everyone else, I scoured through review after review. My goal was to sift through the chaff and find something that would key me into understanding what was at the core of the PH-1. It didn't take long for a common thread to bubble up from the surface. That thread? A less-than stellar camera. The good news for me was that I don't consider a smartphone's primary function to be taking photos, so having the best camera available wasn't a top priority. With that out of the way, it seemed the PH-1 met all of my needs, and did so at a price point that was right on the money. And thus, I made the purchase, and set aside my OnePlus 3 to embark on a journey with the newest underdog. As many of you know, I'm a big fan of the underdog. I've been using Linux as my primary OS for decades, so I'm accustomed to watching a platform scrape and dig for attention and respect. However, after just two weeks of use, I'm convinced the PH-1 shouldn't be considered an underdog but a top dog, in a class by itself. That's not to say it's perfect because it's not, but no device is (and anyone who believes otherwise is kidding themselves). Now that I've had plenty of time to experience Essential's first foray into the smartphone market, I feel like I have plenty to say about the device. And, with that said, let's dive into the good and the bad. SEE: Mobile device computing policy (Tech Pro Research) The bad I thought I'd flip the script and start with the bad. Why? Because there isn't much in the way of bad to be found. In fact, I have to trick myself into thinking "Maybe photos are more important than I originally thought!" before I can really come up with something negative of note to say about the PH-1. It has been well documented that the Essential Phone's camera is lackluster. The software is a bit slow, and the low-light photos are far from great. The selfie camera also suffers from the software issues that hinder the main camera. However, after four updates (that's right, four updates since I received the device, more on that in a bit), I've watched the camera app improve exponentially. It's still not nearly as good as the Pixel 2 camera app for instance, but it's passible. For anyone that doesn't consider photos to be a priority, the camera app will suffice. The only other nit to pick is that the gorgeous case is the biggest fingerprint magnet I've ever seen. I'm constantly wiping the back down. Had this phone not been nearly as beautiful as it is, the fingerprints wouldn't concern me. But the PH-1 is one of the most elegant smartphones I have ever held in my hand, so my propensity is to keep it clean. Essential should be cleaning up in awards for hardware design—of that there is no doubt. Finally, there is no headphone jack. That's okay for two reasons: Bluetooth headphones have come a very long way, and Essential included the necessary dongle so users won't have to toss their standard headphones or other devices that might make use of that common interface. And that's it for the bad. The good There is almost too much to say here , so I'm going to boil it down to a few "essential" items. First and foremost: the design. As I said, it's gorgeous. But even the titanium sides and ceramic back take a seat to the display. 
No, it's not the most cutting-edge (Essential went with an LCD display, instead of the more popular, flagship-level OLED option), but the edge-to-edge display is absolutely beautiful. Essential essentially proved that a bezel-less device is very much possible, and their home screen launcher makes perfect use of the screen real estate (Figure A). Figure A The one downfall is that not every app found in the Google Play Store makes use of that full screen. To Essential's credit, so far I've only found one app that didn't—Discogs (Figure B). Figure B Beyond the hardware, there's the stock Android (shipping with Android 7.1.1). If you're looking for nothing but essential Android, the PH-1 delivers. Upon arrival the device included the bare minimum software. There was zero bloat. Couple that with the speedy Qualcomm Snapdragon 835 processor paired with 4GB of RAM and 128GB of internal storage, and that barebones Android runs as smoothly as any flagship device. Period. Apps install quickly, start instantly, and run smoothly. The PH-1 easily stands toe-to-toe with my wife's Samsung Galaxy S8. One very crucial aspect many users will appreciate is how quickly the PH-1 receives the Android Security Patch. Since initially turning on the device, my PH-1 has received four Android updates. Even though the device is running Android 7.1.1, it enjoys the most recent Security Patch (Figure C). Figure C The combination of beautiful and powerful hardware and up-to-date barebones software makes for an incredible experience. Who's the ideal PH-1 user? Let's make this easy: If you're tired of devices shipping with bloat—and who isn't—the PH-1 might be the ideal device for you. If you're constantly on the go, the titanium case is strong enough to withstand your brutal abuse. If you're a fan of the underdog, the PH-1 is the perfect smartphone for you. The ratio of price to performance will absolutely blow you away. No other smartphone, regardless of manufacturer, enjoys this level of form and function. Essential has every right to stand with the leaders in the industry. It's every bit as cool as the iPhone X and as flexible as any Android device—all without the price tag found on most flagship smartphones. If you like your devices to turn heads, the PH-1 is the perfect mix of brawn, brains, and beauty. The look of the PH-1 draws onlookers in, and the performance locks them in. The second you hold the PH-1 in your hand, you'll know you've purchased a quality product. This is a flagship smartphone; there's no doubt. What more needs to be said? Bravo, Essential, you've created something special. Automatically sign up for TechRepublic's Mobile Enterprise Newsletter for more news and tips. "
" The article Fatal pedestrian crash causes Uber to stop self-driving car tests was written on March 19 2018, in the domain of tech, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: An Uber vehicle in autonomous driving mode hit and killed a woman in Tempe, AZ, in the first known pedestrian fatality involving the self-driving technology. Uber has temporarily stopped its self-driving operations in Tempe and all other cities where it has been testing its vehicles, including Phoenix, Pittsburgh, San Francisco, and Toronto. An Uber car in autonomous driving mode struck and killed a pedestrian in Tempe, AZ, on Monday, in the first known pedestrian fatality involving the self-driving technology, as reported by our sister site CNET. Uber has since temporarily stopped its self-driving operations in Tempe and all other cities where it has been testing its vehicles, including Phoenix, Pittsburgh, San Francisco, and Toronto. The vehicle was in autonomous mode at the time of the accident, with a vehicle operator behind the wheel, according to a statement from the Tempe police. More about autonomous vehicles Special report: Tech and the future of transportation (free PDF) This ebook, based on the latest special feature from ZDNet and TechRepublic, looks at emerging autonomous transport technologies and how they will affect society and the future of business. Read more The female victim walking outside of the crosswalk crossed Curry Road in Tempe, and was struck by the Uber car, the police said in the statement. She was transported to a local hospital, where she passed away from her injuries. SEE: IT leader's guide to the future of autonomous vehicles (Tech Pro Research) The investigation is still active, and Uber is assisting, the police said in the statement. "Our hearts go out to the victim's family," an Uber spokeswoman said in a statement. "We are fully cooperating with local authorities in their investigation of this incident." "Some incredibly sad news out of Arizona," Uber CEO Dara Khosrowshahi tweeted on Monday. "We're thinking of the victim's family as we work with local law enforcement to understand what happened." This is the first known pedestrian fatality from a self-driving car. However, autonomous vehicles have been involved in a number of other accidents, including backing into a delivery truck in Las Vegas and getting hit by another car whose human driver did not yield in Tempe, both in 2017. In May 2016, a Tesla driver was killed in an accident while the car was operating in its semi-autonomous Autopilot mode. A US Department of Transportation investigation did not identify any defects in design or performance of the Autopilot system. The pedestrian fatality could have immediate implications for the rollout of self-driving taxis and delivery vehicles, which are predicted by many to be the first widespread applications of self-driving technology. It could mean that progress is slowed until more regulations are in place. The accident could also impact discussions around how autonomous vehicles will change auto insurance. KPMG estimates that the technology will lead to an 80% drop in accident frequency by 2040, and that providers will need to shift from covering the car itself to the software of the car. It remains to be seen how coverage of accidents like this may change. Stay informed, click here to subscribe to the TechRepublic Tech News You Can Use newsletter. Subscribe Also see "
" The article IBM Watson Data Kits speed enterprise AI development was written on March 19 2018, in the domain of tech, which states: Building a slide deck, pitch, or presentation? Here are the big takeaways: IBM launched IBM Watson Data Kits, designed to speed the development of AI applications in the enterprise. Enterprise AI apps created with IBM Watson Data Kits have the potential to aid in faster, more informed decision making for business leaders. On Tuesday, IBM launched IBM Watson Data Kits, designed to speed the development of artificial intelligence (AI) applications in the enterprise. These apps have the potential to aid in faster, more informed decision making for business leaders, according to a press release. "Watson Data Kits will provide companies across industries with pre-enriched, machine readable, industry-specific data that can enable them to scale AI across their business," the release said. In Q2, Watson Data Kits will become available for the travel, transportation, and food industries, with kits for travel points of interest and food menus. Kits tailored for additional industries are also expected in the coming months, the release noted. SEE: The Power of IoT and Big Data (Tech Pro Research) More than half of data scientists said they spend most of their time on janitorial tasks, such as cleaning and organizing data, labeling data, and collecting data sets, according to a CrowdFlower report, making it difficult for business leaders to implement AI technology at scale. Streamlining and accelerating the development process for AI engineers and data scientists will help companies more quickly gain insights from their data, and drive greater business value, according to IBM. "Big data is fueling the cognitive era. However, businesses need the right data to truly drive innovation," Kristen Lauria, general manager of Watson media and content, said in the release. "IBM Watson Data Kits can help bridge that gap by providing the machine-readable, pre-trained data companies require to accelerate AI development and lead to a faster time to insight and value. Data is hard, but Watson can make it easier for stakeholders at every level, from CIOs to data scientists." The Watson Data Kit for travel points of interest will offer airlines, hotels, and online travel agencies with more than 300,000 points of interest in 100 categories, to create better experiences for travelers, according to the release. Companies in the travel and transportation industry can use the kits to build AI-powered web and mobile apps to help users find information on points of interest in a given area. For example, the release noted, a hospitality company could use the data kit to train AI powering its chatbot to recommend personalized destinations based on a customer's individual preferences. Meanwhile, the Watson Data Kit for food menus includes 700,000 menus in 21,000 US cities, according to the release. This will offer AI developers content for apps that can help users filter menu items, types of food, locations, and price points. The kit allows developers to build in side-by-side comparisons of menu choices and prices. For example, the release noted, the kit could be integrated into a car's navigation system to provide voice-activated directions to the closest bakery that sells gluten-free muffins. Stay up to date on all the latest big data news. Click here to subscribe to the TechRepublic Big Data Essentials newsletter. Subscribe Also see "

Dataset Card for "subset_wikitext_format_date_only_train"

More Information needed
