text (string, 174–13.2k chars) | Comprehension (class label, 2 classes) | Configuration (class label, 2 classes) | Crashes (class label, 2 classes) | Implementation (class label, 2 classes) | Performance issue (class label, 2 classes) | Results interpretation (class label, 2 classes) |
---|---|---|---|---|---|---|
Virtual dressing systems
Which software packages provide a fairly good library and API for creating a small virtual dressing application? I am looking into the capabilities of Pinta, Filters, ImageMagick, etc. | 11
| 00 | 00 | 00 | 00 | 00 |
How to create mobile apps that can recognize location/orientation of 2D images from the camera
I am interested in finding out how mobile applications can recognize the location and orientation of a pre-determined 2D image/logo/glyph via live video from the camera.
For iPhone or Android, what libraries are used, and more importantly, are there any examples out there that demonstrate this?
Thanks! | 11
| 00 | 00 | 00 | 00 | 00 |
ARKit vs. ARCore vs. Vuforia vs. D'Fusion Mobile vs. Layar SDK [closed]
Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I would be interested to know the advantages and disadvantages of each vision-based mobile augmented reality framework. What should the decision be based on in each case? Would you choose Vuforia in every case because it is free and without branding? What important features are missing from each of the frameworks? Are there limits on the free version of the Metaio SDK (apart from branding and the Metaio splash screen)?
I think these are the most important frameworks supporting iOS and Android. I know that Metaio supports movie textures and MD2 (animation) export and that Vuforia does not (at least not out of the box).
Edit:
Here is a 3-hour session reviewing the best mobile AR SDKs on the market and how to get started with them: Tutorial: Top 10 Augmented Reality SDKs for Developers
You should also check out ARLab from Augmented Reality Lab S.L. It offers separate AR SDKs for an AR browser, image matching, image tracking, a 3D engine, and virtual buttons, but it is not free.
Wikitude's ARchitect SDK has a Vuforia extension and BlackBerry 10 support. This could also be very interesting.
The Layar SDK is now available for iOS and Android with 3D, animation, AR video, and a QR-code scanner.
DARAM also appears to be a good SDK for Android, iOS, Windows 8, and Mac.
ARPA has a Unity plugin and a Google Glass SDK.
Here is a good comparison chart for augmented reality SDKs and frameworks.
Apple has acquired Metaio; Metaio's future is uncertain. (May 28, 2015)
Magic Leap announces its augmented reality developer SDK for Unreal and Unity.
Vuforia now has paid licensing and the ability to demo apps without a watermark on its no-cost Starter plan: the watermark now appears only during the first app launch on a given day, to support developers who want to demo to clients without showing it. (May 6, 2015) Qualcomm sold its Vuforia business to PTC. (Oct 12, 2015)
ARKit: iOS 11 introduces ARKit, a new framework that allows you to easily create unparalleled augmented reality experiences for iPhone and iPad. By blending digital objects and information with the environment around you, ARKit takes apps beyond the screen, freeing them to interact with the real world in entirely new ways. (June 2017)
8th Wall XR is the world's first AR platform that works on all commonly available iOS and Android phones and integrates seamlessly with ARKit and ARCore. (August 2017)
ARCore is Google’s answer to Apple’s ARKit. It is designed to bring augmented reality capabilities to all Android phones in a relatively simple way. It also replaces Project Tango, which required specialized hardware to run. | 11
| 00 | 00 | 00 | 00 | 00 |
Android realtime location tracking for 3D model augmented reality
I'm trying to implement a location-based AR solution similar to the GPS/location-tracking feature of the Metaio Unifeye SDK (they stopped supporting ARMv6 devices as of version 2.5, but I must target those devices).
I want my users to be able to view a 10-meter-high humanoid monster 3D model within a 0.5 km radius area.
It must be a non-optical solution in which:
- Location (from the GPS or network provider) and the compass will be used as references for camera manipulation.
- Sensors may help with estimation.
Is there any existing implementation (library or tutorial) for real-time location tracking sufficient for my requirement? | 11
| 00 | 00 | 00 | 00 | 00 |
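As a hedged sketch of the geometry step only (plain Android SDK, not the Unifeye API; the helper name is illustrative), the distance and bearing from the user to the model's anchor can be computed with Location.distanceBetween and then compared against the compass azimuth to position the model on screen:
import android.location.Location;

// results[0] = distance in meters, results[1] = initial bearing in degrees.
// Comparing results[1] with the compass azimuth tells you where on screen
// (and, with results[0], at what scale) the monster should be drawn.
static float[] distanceAndBearing(double userLat, double userLon,
                                  double anchorLat, double anchorLon) {
    float[] results = new float[2];
    Location.distanceBetween(userLat, userLon, anchorLat, anchorLon, results);
    return results;
}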
Hyperlink, email address detection using camera [closed]
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. Closed 10 years ago.
I want to build an app that recognizes hyperlinks and email addresses when I point the camera at a paper or board that contains a lot of information along with hyperlinks and email addresses. Has anybody built such an app before, and is it feasible? Should I use augmented reality for this? | 11
| 00 | 00 | 00 | 00 | 00 |
Is it possible to render an XNA game over the camera preview in Windows Phone 8
I have a demo working where I can render some text over a camera preview in Windows Phone 8; I now want to extend the app to render XNA game content, augmented-reality style, over the camera preview.
Is this even possible with XNA? | 11
| 00 | 00 | 00 | 00 | 00 |
Can someone please suggest AR SDK for Windows Phone App Development other than SLART and GART? [closed]
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. We don't allow questions seeking recommendations for books, tools, software libraries, and more. Closed 4 years ago.
Can someone please suggest an AR SDK for Windows Phone app development (with image utilities) other than SLART and GART? | 11
| 00 | 00 | 00 | 00 | 00 |
camera overlay change with bearing and elevation
Folks,
I am trying to build a utility as shown in the picture below. Basically, the camera display window covers part of the device's screen, and a list of points connected by a curve or straight line is presented over the camera view as an overlay. I understand this can be drawn using Quartz, but this is less than half of my problem.
The real issue is that the overlay should present different points as the bearing and elevation change.
For example:
if the bearing changes +5 degrees and the elevation +2 degrees, then PT1 will be next to the right edge of the camera view, PT2 will also move to the right, and PT3 will become visible.
Another movement that changes the bearing +10 degrees would make PT1 invisible, put PT2 at the right, PT3 in the middle, and PT4 on the left edge of the camera view.
My questions after the picture:
Is it possible to have a view that is substantially larger than the camera view (as shown below) and use some methods (I need to research these) to move the view when the bearing/elevation changes? Is it advisable performance-wise?
Is Quartz the way to go here? What else do I need (other than, of course, AVFoundation for the camera and Core Location/Core Motion)? Since my application is iOS 7-only, I can use any new methods/APIs exclusive to iOS 7.
Aside from raywenderlich.com's tutorial on the augmented reality game, are there any tutorials that you know of that could help me with this endeavor? | 11
| 00 | 00 | 00 | 00 | 00 |
Augmented reality: Rendering function
My question builds on this thread: Computer Vision / Augmented Reality: how to overlay 3D objects over vision? and on its first answer. I want to build an application that projects, in real time, the position of a fictional 3D object onto a video feed, but the first step I have to take is: how can I do this over a single image?
What I am going for at the moment is some kind of function that, given a picture, its 6D pose (position + orientation), a 3D object (in FBX, 3DS, or something easily convertible to or from other formats), and the object's own position and orientation, returns the projection of the 3D object over the image. Once I have that, I should be able to apply it to every frame of the video feed (how I will get the 6D information of the camera is a problem I'll deal with later).
My problem is that I am unsure where to find such a function, if it even exists. It should be offered as some kind of script or API so an external program can make use of it. Where should I look? Unity? Some kind of OpenCL functionality? So far my reading has not given me any conclusive answers, and as I am a novice in the topic, I'm sure a steep learning curve is ahead and I'd rather put my efforts in the right direction. Thank you. | 11
| 00 | 00 | 00 | 00 | 00 |
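For reference, the function being asked for here is the standard pinhole projection: given camera intrinsics $K$ and pose $[R \mid t]$, a world point $(X, Y, Z)$ maps to pixel coordinates $(u, v)$ via
$$ s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \, [R \mid t] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} $$
Libraries such as OpenCV implement exactly this (e.g. projectPoints), which is one concrete place to start looking.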
Can I make a universal app using HTML that runs on Hololens?
I believe these statements are true:
1) All universal apps work as holograms.
2) Universal apps can be built using HTML/JS.
Does this mean I can build a holographic universal app using web technologies? For example, a holographic visualization dashboard in D3.js? | 11
| 00 | 00 | 00 | 00 | 00 |
AR ODG application for conference calls
I'm researching AR frameworks in order to select the best option for developing a conference call/meeting application for ODG glasses.
I got only a few directions for selecting a framework:
Performance of video streaming (capturing and encoding) must be watched closely to avoid overheating and excessive power consumption,
It should support extended tracking, and
Video capturing should not be frame by frame.
I have no experience with the AR field in general, and I would really appreciate it if you could share your opinion or give me some guidance on how to choose the best-fitting framework. | 11
| 00 | 00 | 00 | 00 | 00 |
nm: shared library symbol appearing twice or once
I have a shared library (libARWrapper.so) which includes the following two entries, shown using nm (nm -D --defined-only libARWrapper.so)
00075854 T Java_org_artoolkit_ar_base_NativeInterface_arwAcceptVideoImage
00074d54 T Java_org_artoolkit_ar_base_NativeInterface_arwCapture
...
00072d54 T arwCapture
I know that T means "The symbol is in the text (code) section."
What is the distinction between arwCapture, which appears twice, and arwAcceptVideoImage, which appears only once?
I am able to call arwCapture from a C# DllImport, but not arwAcceptVideoImage.
There are also many other functions appearing the same way as arwCapture, all under org.artoolkit.ar.base.NativeInterface, which I can use OK.
Other (Java) code is able to call all functions through the NDK. | 11
| 00 | 00 | 00 | 00 | 00 |
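A hedged sketch of the wrapper pattern that typically produces this symbol layout (illustrative code, not the actual ARToolKit source): arwCapture shows up twice because it exists both as an extern "C" API function and as a JNI wrapper that calls it, whereas arwAcceptVideoImage apparently exists only behind its JNI wrapper, leaving no bare symbol for a C# DllImport to bind:
#include <jni.h>

extern "C" bool arwCapture(void);   // exported C API -> bare "arwCapture" symbol

extern "C" JNIEXPORT jboolean JNICALL
Java_org_artoolkit_ar_base_NativeInterface_arwCapture(JNIEnv* env, jclass clazz)
{
    return (jboolean) arwCapture(); // JNI wrapper -> the second symbol
}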
Can ARKit detect specific surfaces as planes?
Using iOS 11 and iOS 12 with ARKit, we are currently able to detect planes on horizontal surfaces, and we may also visualize those planes on the surface.
I am wondering if we can declare, through some sort of image file, specific surfaces on which we want to detect planes (possibly ignoring all other planes that ARKit detects on other surfaces).
If that is not possible, could we then perhaps capture the detected plane (via an image), which we could then process through a CoreML model that identifies that specific surface? | 11
| 00 | 00 | 00 | 00 | 00 |
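Since iOS 11.3, ARKit supports this directly through reference-image detection. A minimal sketch, assuming an ARSCNView named sceneView and reference photos of the target surfaces in an asset-catalog group called "AR Resources":
import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
// Surfaces matching these images arrive as ARImageAnchor; anchors for other
// detected planes can simply be ignored in the session/renderer delegate.
if let references = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.detectionImages = references
}
sceneView.session.run(configuration)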
Save ARCore video for further processing?
Is there a video standard that is able to save output from ARCore for further processing? That is, encoding the world information beyond the actual video content. | 11
| 00 | 00 | 00 | 00 | 00 |
Take photo in ARKit Camera like snapchat
Are there any ways in Swift to take high-quality pictures with the SceneKit view's camera session, like Snapchat does?
Or at least without the SCNNode objects in the picture?
I don't want to init a second camera frame to take pictures;
I want it to be integrated like the Snapchat camera.
Best regards,
moe | 11
| 00 | 00 | 00 | 00 | 00 |
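One approach that avoids a second camera session, sketched below under the assumption of an ARSCNView named sceneView: read the raw camera buffer from the current ARFrame, which by definition contains no rendered SCNNode content (display-orientation handling omitted for brevity):
import ARKit
import UIKit

func cameraOnlyPhoto(from sceneView: ARSCNView) -> UIImage? {
    // capturedImage is the unrendered camera pixel buffer, so scene nodes are excluded.
    guard let buffer = sceneView.session.currentFrame?.capturedImage else { return nil }
    let ciImage = CIImage(cvPixelBuffer: buffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}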
Which format file for 3d model SceneKit/ARKit better to use
I have read several tutorials on how to place 3D objects in SceneKit/ARKit applications, and all of them use .scn files for the objects.
But I found there are no issues if I use the original .dae format and do not convert it to .scn.
I don't really see any difference between the .dae and .scn formats.
The result seems the same to me, but can you explain the difference between them, and which I should use in which cases?
Thank you! | 11
| 00 | 00 | 00 | 00 | 00 |
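Both formats load through the same initializer, as in the short sketch below (file names are placeholders). The practical difference is that .scn is SceneKit's native serialization (editable in Xcode's scene editor), while a .dae inside an .scnassets folder is converted by Xcode at build time, which is why the runtime result looks identical:
import SceneKit

// Either extension works with the same loading code; the names here are placeholders.
let fromDAE = SCNScene(named: "art.scnassets/model.dae")
let fromSCN = SCNScene(named: "art.scnassets/model.scn")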
What is the difference between Session.getAllTrackables and Frame.getUpdatedTrackables?
Do both return all trackables known so far?
Why do we need both?
When should I call which one?
The same question applies to Session.getAllAnchors and Frame.getUpdatedAnchors. | 11
| 00 | 00 | 00 | 00 | 00 |
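A short Java sketch of the distinction (the method names are from the ARCore API; the loop bodies are illustrative): getAllTrackables returns everything the session currently knows about, while getUpdatedTrackables returns only the trackables whose state changed in that particular frame, which is the cheaper set to iterate every frame. The anchor variants behave analogously:
import com.google.ar.core.Frame;
import com.google.ar.core.Plane;
import com.google.ar.core.Session;

void onFrame(Session session, Frame frame) {
    // All planes the session knows about, e.g. to rebuild state after a resume.
    for (Plane plane : session.getAllTrackables(Plane.class)) { /* full enumeration */ }
    // Only planes that changed in this frame, e.g. to refresh their visualizations.
    for (Plane plane : frame.getUpdatedTrackables(Plane.class)) { /* incremental update */ }
}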
ARCore Tracking Marker
I am developing an Android app that will simply track a real-life marker and draw some graphics on top of it. I have just started looking into frameworks... Is ARCore or Vuforia suitable for this? | 11
| 00 | 00 | 00 | 00 | 00 |
Are there any apps to stream a Mac desktop to an iPhone as VR
There are VR player apps in the App Store; however, is there any Mac app that can stream the Mac desktop to an iPhone VR player app?
I have googled already; the closest related result is https://www.reddit.com/r/iOSVR/comments/5htzha/trying_to_stream_my_macbook_display_to_my_iphone/, which looks like a negative answer. | 11
| 00 | 00 | 00 | 00 | 00 |
Are there any limitations in Vuforia compared to ARCore and ARKit?
I am a beginner in the field of augmented reality, working on applications that create plans of buildings (floor plan, room plan, etc., with accurate measurements) using a smartphone, so I am researching the best AR SDK for this. There are not many articles pitting Vuforia against ARCore and ARKit.
Please suggest the best SDK to use, with the pros and cons of each. | 11
| 00 | 00 | 00 | 00 | 00 |
Develop GoogleVr Cardboard Unity 3D
Hey guys, can somebody suggest a good source on how to build a Google VR Cardboard app for Android? Most of the tutorials I saw on YouTube are old, and the old Google VR SDK doesn't work any more.
Thanks in advance!
ZZAim | 11
| 00 | 00 | 00 | 00 | 00 |
SceneKit - What is the lowest row of elements in SCNMatrix4 for?
In the SCNMatrix4 struct, the last element, m44, is used for homogeneous coordinates.
I'd like to know what the m41, m42, and m43 elements in SCNMatrix4 are, and what exactly I can use these three elements for.
init(m11: Float, m12: Float, m13: Float, m14: Float,
m21: Float, m22: Float, m23: Float, m24: Float,
m31: Float, m32: Float, m33: Float, m34: Float,
m41: Float, m42: Float, m43: Float, m44: Float) | 11
| 00 | 00 | 00 | 00 | 00 |
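For context: SCNMatrix4 is laid out for row-vector math, so m41, m42, and m43 form the translation (position) part of the transform. A quick Swift check:
import SceneKit

var transform = SCNMatrix4MakeTranslation(1, 2, 3)
// The translation components land in the fourth row:
print(transform.m41, transform.m42, transform.m43)   // 1.0 2.0 3.0
transform.m42 = 5.0                                  // move the transform 5 units along y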
Can anyone explain the technicality of zspace AR-VR product and how it works?
I just came to know about an AR-VR company for educational interactive content. I know about augmented reality apps, which can be developed using the Unity framework, and I know virtual reality too.
Can anyone explain how they are doing it? Any idea or direction will be helpful.
Can we use an existing Google Cardboard and some tool to interact with the 3D object? Like this: DIY hand-tracking VR controller.
Thanks in advance, and let me know if you guys have more questions. | 11
| 00 | 00 | 00 | 00 | 00 |
Can i use LibGDX to program a VR game with the Oculus Rift + touch?
I just got an Oculus Rift with Touch controllers and would like to code a game. I am very familiar with LibGDX and would like to keep using it as the game engine. Is this possible, and if so, how? | 11
| 00 | 00 | 00 | 00 | 00 |
Distance Measurement Using Vuforia
I want to scan an image target and start tracking camera movements. Now if I go outside of the room, let's say 200 meters away, the app should show me that I am 200 meters away from the scanned image.
Simply put: is this possible in Vuforia? | 11
| 00 | 00 | 00 | 00 | 00 |
How do I submit a Vuforia application for Facebook review as a simulator build?
I am creating an iOS app that uses Unity3D, the Vuforia AR SDK, and Prime31 Social Networking Plugins.
In order to publish a screenshot to a user's Facebook timeline, my app needs the "publish_actions" permission. To use this I need to submit the app for review by Facebook, in the form of an Xcode Simulator Build.
However, Vuforia does not support simulator builds:
https://developer.vuforia.com/resources/dev-guide/step-4-compiling-and-running-vuforia-sample-app
"Note: Vuforia applications do not build or work with the Simulator."
Has anyone managed to get an iOS Vuforia-based app reviewed by Facebook?
Thanks. | 00
| 11 | 00 | 00 | 00 | 00 |
DK2 Head tracking not working "HMD powered off, check HDMI connection" on Windows
Part 1 - Description of the problem
I have the DK2 and I am working on a VR project. The project uses Firefox Nightly; I've downloaded it and installed the WebVR Enabler Add-On (from http://mozvr.com/downloads/).
I have also downloaded and installed the latest SDK and Runtime for Windows from https://developer.oculus.com/downloads/
I am also getting this in the Oculus Configuration Utility (while the Oculus is plugged in):
However, I have gone to another computer with Windows and installed everything just like on this Windows computer; it clearly shows the Oculus Rift connected properly, but head tracking is still not working.
EDIT: I just tried connecting the Oculus Rift to this "second" PC (a Dell laptop), and now it doesn't even recognize the Oculus Rift. Still no head tracking.
EDIT 2: I tried installing everything on a third PC without success. I'm getting "service unavailable" on the Oculus Configuration Utility
My display mode is set as shown in the image.
Part 2 - Questions
What am I doing wrong? Is there a step I forgot? The weird thing is, I have the same project running on a Mac without any problems. Yes, on Windows I can see the screen through the Oculus Rift, but head detection is just not present.
Part 3 - list of possible fixes that did not work
This reddit post talks about the firewall issue; however, I tried the Oculus Rift with the firewall deactivated, without success.
This reddit post talks about a possible fix by reinstalling everything and updating certain drivers; however, I have followed this fix step by step without success.
This oculus forum post talks about the issue and one person proposes a fix that worked for him/her. I followed the fix without success.
Part 4 - System info
If you require specific translations let me know. It is in French.
Part 5 - List of things I have tried that have been thought of
I have reinstalled everything multiple times: the SDK (which is not even needed, in fact), the runtime, Firefox Nightly, and the WebVR add-on.
I have rebooted my computer multiple times.
I have tried the different Rift display modes.
Basic demos from mozvr.com and other WebVR-based projects work fine, but head tracking does not work.
My Oculus is not broken (except maybe for Windows); it works fine on the Mac.
I've tried using different HDMI cables and different mini-USB-to-USB cables, without success.
Part 6 - Quotes from the forum
First post
This sounds like the same issue a lot of us are having with the 0.5
and 0.6 versions. It's not something wrong with the cables, but with
the Runtime itself. Direct-mode works flawlessly and in Extended mode
the rift still displays a picture, altho without any tracking etc from
the runtime. Hoping it'll be fixed in the next update. | 00
| 11 | 00 | 00 | 00 | 00 |
Camera view in leap motion
I have a camera view in the visualizer in Leap Motion.
How can I remove this night-vision camera view and see just the detection of my hands, as a number of colored joint stubs like in the tutorials? | 00
| 11 | 00 | 00 | 00 | 00 |
How to enable WebVR on Google Chrome?
I am trying to create a WebVR scene. For this task, I want to enable WebVR in Google Chrome. My OS is Windows 8.
I opened the flags page via chrome://flags/; WebVR is not there. How can I enable it? | 00
| 11 | 00 | 00 | 00 | 00 |
Linux configuration ARToolKit, make error
I would like to compile the ARToolKit source code on Linux. I downloaded the source code and, in accordance with the ARToolKit documentation, configured GLUT, OpenGL, libjpeg, and the other libraries.
I go to the ARToolKit directory and type ./Configure.
(Configure information image.)
When I enter the make command, the error occurs.
What are the causes of these errors, and how can I solve them? Thanks. | 00
| 11 | 00 | 00 | 00 | 00 |
Unity hololens using holotoolkit FOV always 16.97196
I am using Unity with HoloToolkit to develop an app for HoloLens. The issue is that the field of view (FOV) of the main camera is always 16.97196 no matter what value is input. I even added a script to deliberately set the FOV to 60, but it resets to 16.97196. Can the FOV value be set to the user's requirements? | 00
| 11 | 00 | 00 | 00 | 00 |
Testing ARKit without iPhone6s or newer
I am about to decide whether to download Xcode 9. I want to play with the new framework, ARKit. I know that to run an ARKit app I need a device with an A9 chip or newer; unfortunately, I have an older one. My question is for people who have already downloaded the new Xcode: is there any possibility to run an ARKit app in my case? Is there a simulator for that, or something else? Any ideas, or will I have to buy a new device? | 00
| 11 | 00 | 00 | 00 | 00 |
Facebook AR Studio - Importing Animations
I'm trying to import a 3D object in .fbx format into Facebook AR Studio. The object was created and animated in Blender. AR Studio does not support 'baked' animations, but I'm wondering if it's possible to somehow import the animation into the studio, as opposed to scripting the entire animation, given the complexity of the object (number of bones and motions).
I'm new to AR and 3D modeling, so any input or direction is appreciated. | 00
| 11 | 00 | 00 | 00 | 00 |
ARCore Tablet Support
I would like to know whether ARCore supports any tablets natively as of the latest update.
I have found the list of supported phones here; however, no mention of tablets seems to be made.
Other similar questions on StackOverflow seem to have had no conclusive answer. | 00
| 11 | 00 | 00 | 00 | 00 |
How can I use my webcam inside an Azure windows server virtual machine?
I have a Windows Server 2016 VM on Azure, and I am trying to do some work in augmented reality using Vuforia and Unity. An essential part of this is being able to use my webcam, but once I am inside the VM it doesn't recognize my integrated webcam. I tried to connect a webcam through USB as well, but this doesn't work either. Is getting a server instance to recognize the webcams on my laptop an impossible task, or is it actually possible? Any help would be appreciated. | 00
| 11 | 00 | 00 | 00 | 00 |
How to convert OBJ with MTL file to USDZ format
So I have an OBJ 3D model with its associated MTL file. The MTL file contains all the textures. However, when I convert the file to the USDZ format, the textures are not attributed to the file. This is the code I use.
xcrun usdz_converter /Users/SaiKambampati/Downloads/Models/object.obj /Users/SaiKambampati/Downloads/Models/object.usdz
The USDZ file is created but the attributes and textures are not applied. Is there any way to include the MTL file when converting the OBJ model to USDZ model? | 00
| 11 | 00 | 00 | 00 | 00 |
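For what it's worth, the Xcode 10 usdz_converter reportedly ignores .mtl files entirely; textures have to be passed explicitly through its material-map options, roughly as below (the texture file names here are hypothetical):
xcrun usdz_converter /Users/SaiKambampati/Downloads/Models/object.obj \
                     /Users/SaiKambampati/Downloads/Models/object.usdz \
                     -color_map diffuse.png \
                     -normal_map normals.png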
Location based AR with Three.js (+ React) - camera configuration?
I want to augment the image of a stationary webcam with location based markers. This is to be added to an existing React app that uses three.js (through react-three-fiber) in other parts already, so these technologies are to be reused.
While it is quite easy to calculate the position of the markers (locations known) relative to the camera (location known), I'm struggling with the configuration of the camera in order to get a good visual match between the "real" objects and the AR markers.
I have created a codesandbox with an artificial example that illustrates the challenge.
Here's my attempt at configuring the camera:
const camera = {
position: [0, 1.5, 0],
fov: 85,
near: 0.005,
far: 1000
};
const bearing = 109; // degrees
<Canvas camera={camera}>
<Scene bearing={bearing}/>
</Canvas>
Further down in the scene component I’m rotating the camera according to the bearing of the webcam like so:
...
const rotation = { x: 0, y: bearing * -1, z: 0 };
camera.rotation.x = (rotation.x * Math.PI) / 180;
camera.rotation.y = (rotation.y * Math.PI) / 180;
camera.rotation.z = (rotation.z * Math.PI) / 180;
...
Any tips/thoughts on how to get that camera configured for a good match of three.js boxes and real life objects? | 00
| 11 | 00 | 00 | 00 | 00 |
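One hedged suggestion on the rotation handling itself: three.js Euler rotations default to 'XYZ' order, while a camera aimed by compass bearing usually wants yaw applied before any pitch correction; switching the order and letting three.js convert degrees may improve the match:
// Apply yaw (bearing) first, then pitch, the way a physical camera is aimed.
camera.rotation.order = 'YXZ';
camera.rotation.y = THREE.MathUtils.degToRad(-bearing);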
ARCore compatible device list
I have a Samsung SM-P205 tablet, which is not in the ARCore supported device list.
My question is whether it is possible to add this tablet so I can install ARCore, and if not possible, why. The specs are below.
Thank you
DISPLAY Type IPS LCD
Size 8.0 inches, 185.6 cm2 (~75.2% screen-to-body ratio)
Resolution 1200 x 1920 pixels, 16:10 ratio (~283 ppi density)
PLATFORM Android 11, One UI Core 3.1
Chipset Exynos 7904 (14 nm)
CPU Octa-core (2x1.8 GHz Cortex-A73 & 6x1.6 GHz Cortex-A53)
GPU Mali-G71 MP2
MEMORY Card slot microSDXC (dedicated slot)
Internal 32GB 3GB RAM
eMMC 5.1
MAIN CAMERA Single 8 MP, f/2.0, AF
Features LED flash, panorama
Video 1080p@30fps
SELFIE CAMERA Single 5 MP, f/2.2
Video 1080p@30fps | 00
| 11 | 00 | 00 | 00 | 00 |
Unknown type name 'namespace' in Xcode 4.2
I am compiling the QCAR SDK, but it reports an error after I added more frameworks to the project.
// Matrices.h
//
#ifndef _QCAR_MATRIX_H_
#define _QCAR_MATRIX_H_
namespace QCAR
{
/// Matrix with 3 rows and 4 columns of float items
struct Matrix34F {
float data[3*4]; ///< Array of matrix items
};
/// Matrix with 4 rows and 4 columns of float items
struct Matrix44F {
float data[4*4]; ///< Array of matrix items
};
} // namespace QCAR
#endif //_QCAR_MATRIX_H_
On the line namespace QCAR, it says Unknown type name 'namespace'.
What should I do?
UPDATE: Here is the build transcript
In file included from ../../build/include/QCAR/Tool.h:18:
In file included from /Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/ImageTargets/EAGLView.h:14:
In file included from /Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/ImageTargets/ImageTargetsAppDelegate.h:9:
In file included from /Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/CouponBook.m:12:
../../build/include/QCAR/Matrices.h:16:1: error: unknown type name 'namespace' [1]
namespace QCAR
^
../../build/include/QCAR/Matrices.h:16:15: error: expected ';' after top level declarator [1]
namespace QCAR
^
;
fix-it:"../../build/include/QCAR/Matrices.h":{16:15-16:15}:";"
In file included from /Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/ImageTargets/ImageTargetsAppDelegate.h:9:
In file included from /Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/CouponBook.m:12:
/Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/ImageTargets/EAGLView.h:52:5: error: type name requires a specifier or qualifier [1]
QCAR::Matrix44F projectionMatrix;
^
/Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/ImageTargets/EAGLView.h:52:10: error: expected expression [1]
QCAR::Matrix44F projectionMatrix;
^
/Users/Raptor.Kwok/Documents/xCodeProjects/qcar-ios-1-0-0/samples/ImageTargets/ImageTargets/EAGLView.h:52:5:{52:5-52:9}: warning: type specifier missing, defaults to 'int' [-Wimplicit-int,3]
QCAR::Matrix44F projectionMatrix;
^~~~
1 warning and 4 errors generated. | 00
| 00
| 11
| 00
| 00
| 00
|
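A reading of the build transcript above: Matrices.h is reached through an include chain that starts in CouponBook.m, a plain Objective-C file, and the C front-end has no namespace keyword; the usual remedy is renaming the including file to .mm so it compiles as Objective-C++. As a defensive sketch (an assumption, not Vuforia's own header), the C++-only part of the header could also be guarded:

// Matrices.h -- defensive variant (sketch): only expose the namespace when
// the translation unit is compiled as C++, e.g. from a .mm file.
#ifndef _QCAR_MATRIX_H_
#define _QCAR_MATRIX_H_

#ifdef __cplusplus
namespace QCAR
{
    struct Matrix34F { float data[3 * 4]; }; // 3x4 matrix of floats
    struct Matrix44F { float data[4 * 4]; }; // 4x4 matrix of floats
}
#endif // __cplusplus

#endif // _QCAR_MATRIX_H_

Note the guard alone cannot save EAGLView.h, which declares a QCAR::Matrix44F member, so renaming the .m files that include it to .mm remains the real fix.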
Augmented Reality App force closing
I created a simple app based on the augmented reality concept.
This is the Java code:
package com.kddi.satch.tutorialactivity;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.Dialog;
import android.content.DialogInterface;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;
import android.content.pm.PackageManager.NameNotFoundException;
import android.os.Bundle;
import android.os.Handler;
import android.view.KeyEvent;
import android.view.View;
import android.widget.FrameLayout;
import com.kddi.satch.ARViewer;
import com.kddi.satch.LoadScenarioStatus;
public abstract class TutorialActivity_simple extends Activity
{
protected abstract String getSampleScenarioName();
protected abstract String getSampleLogTag();
protected boolean _isInitializedCorrectly;
protected ARViewer _kddiComponent;
protected FrameLayout _frameLayout;
private static final int DIALOG_EXIT = 0;
private void resetMembers()
{
_isInitializedCorrectly = false;
_frameLayout = null;
_kddiComponent = null;
}
public void initComponent()
{
_isInitializedCorrectly = false;
_kddiComponent = new ARViewer(this);
// This FrameLayout must be empty (but initialized) when you pass it to the kddiComponent.initialize() method.
_frameLayout = new FrameLayout(this);
_kddiComponent.initialize(_frameLayout);
_isInitializedCorrectly = true;
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// Reset the AR Viewer Library component references to null.
resetMembers();
}
@Override
public void onRestart() {
super.onRestart();
}
@Override
public void onStart() {
super.onStart();
// Create the AR Viewer Library component.
initComponent();
postInitComponent();
initContentView();
if (_isInitializedCorrectly) {
// Authorization is done and the media is loaded.
// You must call the loadScenario() method.
loadScenario();
}
}
@Override
public void onResume(){
super.onResume();
if (_isInitializedCorrectly) {
// GL context is recreated and media is reloaded.
_kddiComponent.onResume();
reservePlayScenario();
}
}
@Override
public void onPause() {
// When the activity is paused the GL context is destroyed, so all media is unloaded.
if (_isInitializedCorrectly) {
cancelReservePlayScenario();
if (_kddiComponent.checkLoadScenarioStatus() == LoadScenarioStatus.COMPLETE) {
_kddiComponent.pauseScenario();
}
_kddiComponent.onPause();
}
super.onPause();
}
@Override
public void onStop() {
releaseContentView();
// Destroy AR Viewer Library Objects.
if (_isInitializedCorrectly)
{
_kddiComponent.terminate();
_kddiComponent = null;
_frameLayout = null;
}
super.onStop();
}
@Override
public void onDestroy() {
// Destroy AR Viewer Library Objects.
if (_isInitializedCorrectly)
{
_frameLayout = null;
_kddiComponent = null;
}
super.onDestroy();
resetMembers(); // forced clean
}
@Override
public boolean onKeyDown(int keyCode, KeyEvent msg)
{
switch(keyCode){
case android.view.KeyEvent.KEYCODE_BACK :
showDialog( DIALOG_EXIT );
return true;
}
return super.onKeyDown(keyCode, msg);
}
public void postInitComponent()
{
// override this if you need to do some special handling on the component after standard initialization
if (_isInitializedCorrectly) {
_kddiComponent.activateAutoFocusOnDownEvent(true);
}
}
public void initContentView()
{
// override this if you need to do some special handling on the component after standard initialization
if (_isInitializedCorrectly) {
// you'll probably use some other UI object as the content view that itself will embed the component's frame layout -- here you can change all this
// by default, the frame layout containing DFusion will be the activity content view
setContentView(_frameLayout);
}
}
public void releaseContentView()
{
// override this if you need to do some special handling on the component after standard initialization
if (_isInitializedCorrectly) {
// release your UI instances here (if customized)
}
}
public void loadScenario()
{
ApplicationInfo appInfo = null;
PackageManager packMgmr = getApplicationContext().getPackageManager();
try {
appInfo = packMgmr.getApplicationInfo(getPackageName(), 0);
} catch (NameNotFoundException e) {
e.printStackTrace();
throw new RuntimeException("Unable to locate assets, aborting...");
}
String dpdfile = appInfo.sourceDir + getSampleScenarioName();
_kddiComponent.loadScenario(dpdfile);
}
// Set polling rate for loading the media.
private final int REPEAT_INTERVAL = 100;
private Handler handler = new Handler();
private Runnable runnable = null;
private void reservePlayScenario()
{
if (runnable == null)
{
runnable = new Runnable()
{
@Override
public void run()
{
LoadScenarioStatus status = _kddiComponent.checkLoadScenarioStatus();
if (status == LoadScenarioStatus.CANCEL)
{
// cancel(appli suspend)
}
else if (status == LoadScenarioStatus.COMPLETE)
{
// Ready to play scenario
_frameLayout.setVisibility(View.VISIBLE);
_kddiComponent.playScenario();
}
else if (
status == LoadScenarioStatus.ERROR_NETWORK_UNUSABLE ||
// failed to load the media because there is no network connection.
status == LoadScenarioStatus.ERROR_NETWORK ||
// failed to load the media because of a network error.
status == LoadScenarioStatus.ERROR_SOFTWAREKEY ||
// failed to load the media because the software key was not found on the server.
status == LoadScenarioStatus.ERROR_CONTENT_STOPPED ||
// failed to load the media because the content has been stopped.
status == LoadScenarioStatus.ERROR_SERVER ||
// failed to load the media because of a server error.
status == LoadScenarioStatus.ERROR_ETC
// failed to load the media because of another error.
)
{
// error
}
else
{
handler.postDelayed(this, REPEAT_INTERVAL);
}
}
};
handler.postDelayed(runnable, REPEAT_INTERVAL);
}
}
private void cancelReservePlayScenario()
{
if (handler != null && runnable != null)
{
handler.removeCallbacks(runnable);
runnable = null;
}
}
protected Dialog onCreateDialog(int id) {
Dialog dialog;
switch(id) {
case DIALOG_EXIT:
{
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setMessage(
"Really want to quit the sample?"
)
.setCancelable(true)
.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
finish();
}})
.setNegativeButton("No", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
dialog.cancel();
}});
dialog = builder.create();
break;
}
default:
dialog = null;
}
return dialog;
}
}
//-- end of file --
The AndroidManifest.xml file of the app looks like this:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
android:versionCode="1"
android:versionName="1.0" package="com.kddi.satch.tutorial">
<application android:icon="@drawable/ic_launcher" android:label="@string/app_name"
android:theme="@android:style/Theme.NoTitleBar.Fullscreen" android:debuggable="false">
<!-- configChanges setting -->
<activity android:name=".Tutorial"
android:label="@string/app_name"
android:screenOrientation="landscape"
android:configChanges="orientation|keyboard|keyboardHidden"
>
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
<!-- define the resolution -->
<supports-screens
android:anyDensity="false"
android:normalScreens="true"
android:largeScreens="true"
android:smallScreens="true"
android:resizeable="true">
</supports-screens>
<!-- permit camera usage if authorized -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<!-- define Autofocus -->
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-sdk android:minSdkVersion="8" />
</manifest>
Now, whenever I run the app on the emulator, the app force closes while loading for the first time, and the following error occurs.
06-06 22:55:00.157: A/libc(19290): Fatal signal 11 (SIGSEGV) at 0x00000008 (code=1)
06-06 22:55:00.173: E/ti.dfusionmobile.tiComponent(19290): ON SURFACE CREATED
Any guidance on what I am doing wrong, and where, would be appreciated. Thanks in advance. | 00
| 00
| 11
| 00
| 00
| 00
|
Issue with arm and x86 in blackberry10 wikitude app
I am working on a BlackBerry 10 app using the Wikitude SDK. I am following the documentation on the website and adding the libraries to the project. When the project is built, I am getting an error with "ntox86-ld". I am new to this and cannot debug the error. The error is:
19:53:36 **** Build of configuration Simulator-Debug for project ARCascadesProject ****
make -j4 Simulator-Debug
make -C .//translations -f Makefile update
cd x86 && D:/bbndk/host_10_2_0_15/win32/x86/usr/bin/qmake -spec blackberry-x86-qcc ../ARCascadesProject.pro CONFIG+=debug_and_release CONFIG+=simulator
make[1]: Entering directory `D:/BB_10_Workspace/ARCascadesProject/translations'
D:/bbndk/host_10_2_0_15/win32/x86/usr/bin/lupdate ARCascadesProject.pro
Updating 'ARCascadesProject.ts'...
Found 1 source text(s) (0 new and 1 already existing)
make[1]: Leaving directory `D:/BB_10_Workspace/ARCascadesProject/translations'
make -C .//translations -f Makefile release
make[1]: Entering directory `D:/BB_10_Workspace/ARCascadesProject/translations'
D:/bbndk/host_10_2_0_15/win32/x86/usr/bin/lrelease ARCascadesProject.pro
Updating 'D:/BB_10_Workspace/ARCascadesProject/translations/ARCascadesProject.qm'...
Generated 0 translation(s) (0 finished and 0 unfinished)
Ignored 1 untranslated source text(s)
make[1]: Leaving directory `D:/BB_10_Workspace/ARCascadesProject/translations'
make -C ./x86 -f Makefile debug
make[1]: Entering directory `D:/BB_10_Workspace/ARCascadesProject/x86'
make -f Makefile.Debug
make[2]: Entering directory `D:/BB_10_Workspace/ARCascadesProject/x86'
D:/bbndk/host_10_2_0_15/win32/x86/usr/bin/moc.exe -DQT_NO_IMPORT_QT47_QML -DQ_OS_BLACKBERRY -DQT_DECLARATIVE_DEBUG -DQT_DECLARATIVE_LIB -DQT_CORE_LIB -DQT_SHARED -I../../../bbndk/target_10_2_0_1155/qnx6/usr/share/qt4/mkspecs/blackberry-x86-qcc -I../../ARCascadesProject -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtCore -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtDeclarative -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4 -I../src -Io-g/.moc -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/freetype2 -I. -D__QNXNTO__ ../src/applicationui.hpp -o o-g/.moc/moc_applicationui.cpp
qcc -Vgcc_ntox86 -Wno-psabi -lang-c++ -fstack-protector -fstack-protector-all -g -Wno-psabi -Wall -W -D_REENTRANT -DQT_NO_IMPORT_QT47_QML -DQ_OS_BLACKBERRY -DQT_DECLARATIVE_DEBUG -DQT_DECLARATIVE_LIB -DQT_CORE_LIB -DQT_SHARED -I../../../bbndk/target_10_2_0_1155/qnx6/usr/share/qt4/mkspecs/blackberry-x86-qcc -I../../ARCascadesProject -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtCore -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtDeclarative -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4 -I../src -Io-g/.moc -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/freetype2 -I. -x c++-header -c ../precompiled.h -o o-g/.obj/ARCascadesProject.gch/c++
qcc -Vgcc_ntox86 -c -Wc,-include -Wc,o-g/.obj/ARCascadesProject -Wno-psabi -lang-c++ -fstack-protector -fstack-protector-all -g -Wno-psabi -Wall -W -D_REENTRANT -DQT_NO_IMPORT_QT47_QML -DQ_OS_BLACKBERRY -DQT_DECLARATIVE_DEBUG -DQT_DECLARATIVE_LIB -DQT_CORE_LIB -DQT_SHARED -I../../../bbndk/target_10_2_0_1155/qnx6/usr/share/qt4/mkspecs/blackberry-x86-qcc -I../../ARCascadesProject -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtCore -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtDeclarative -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4 -I../src -Io-g/.moc -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/freetype2 -I. -o o-g/.obj/applicationui.o ../src/applicationui.cpp
qcc -Vgcc_ntox86 -c -Wc,-include -Wc,o-g/.obj/ARCascadesProject -Wno-psabi -lang-c++ -fstack-protector -fstack-protector-all -g -Wno-psabi -Wall -W -D_REENTRANT -DQT_NO_IMPORT_QT47_QML -DQ_OS_BLACKBERRY -DQT_DECLARATIVE_DEBUG -DQT_DECLARATIVE_LIB -DQT_CORE_LIB -DQT_SHARED -I../../../bbndk/target_10_2_0_1155/qnx6/usr/share/qt4/mkspecs/blackberry-x86-qcc -I../../ARCascadesProject -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtCore -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtDeclarative -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4 -I../src -Io-g/.moc -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/freetype2 -I. -o o-g/.obj/main.o ../src/main.cpp
qcc -Vgcc_ntox86 -c -Wc,-include -Wc,o-g/.obj/ARCascadesProject -Wno-psabi -lang-c++ -fstack-protector -fstack-protector-all -g -Wno-psabi -Wall -W -D_REENTRANT -DQT_NO_IMPORT_QT47_QML -DQ_OS_BLACKBERRY -DQT_DECLARATIVE_DEBUG -DQT_DECLARATIVE_LIB -DQT_CORE_LIB -DQT_SHARED -I../../../bbndk/target_10_2_0_1155/qnx6/usr/share/qt4/mkspecs/blackberry-x86-qcc -I../../ARCascadesProject -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtCore -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4/QtDeclarative -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/qt4 -I../src -Io-g/.moc -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include -I../../../bbndk/target_10_2_0_1155/qnx6/usr/include/freetype2 -I. -o o-g/.obj/moc_applicationui.o o-g/.moc/moc_applicationui.cpp
qcc -Vgcc_ntox86 -lang-c++ -Wl,-rpath-link,D:/bbndk/target_10_2_0_1155/qnx6/x86/lib -Wl,-rpath-link,D:/bbndk/target_10_2_0_1155/qnx6/x86/usr/lib -Wl,-rpath-link,D:/bbndk/target_10_2_0_1155/qnx6/x86/usr/lib/qt4/lib -o o-g/ARCascadesProject o-g/.obj/applicationui.o o-g/.obj/main.o o-g/.obj/moc_applicationui.o -LD:/bbndk/target_10_2_0_1155/qnx6/x86/lib -LD:/bbndk/target_10_2_0_1155/qnx6/x86/usr/lib -LD:/bbndk/target_10_2_0_1155/qnx6/x86/usr/lib/qt4/lib -LD:/bbndk/target_10_2_0_1155/qnx6//usr/lib/qt4/lib -L../Library/lib -lARchitectSDK -lARchitectLibrary -lgameplay -lscreen -lEGL -lGLESv2 -limg -lcrypto -lbbdevice -lcamapi -lfreetype -lmmrndclient -lpng -lbbdata -lwmm -lbb -lbbsystem -lbbcascades -lQtDeclarative -lQtScript -lQtSvg -lQtSql -lsqlite3 -lz -lQtXmlPatterns -lQtGui -lQtNetwork -lsocket -lQtCore -lm -lbps
D:\bbndk\host_10_2_0_15\win32\x86\usr\bin\ntox86-ld: skipping incompatible ../Library/lib\libARchitectSDK.a when searching for -lARchitectSDK
D:\bbndk\host_10_2_0_15\win32\x86\usr\bin\ntox86-ld: cannot find -lARchitectSDK
D:\bbndk\host_10_2_0_15\win32\x86\usr\bin\ntox86-ld: skipping incompatible ../Library/lib\libARchitectLibrary.a when searching for -lARchitectLibrary
D:\bbndk\host_10_2_0_15\win32\x86\usr\bin\ntox86-ld: cannot find -lARchitectLibrary
D:\bbndk\host_10_2_0_15\win32\x86\usr\bin\ntox86-ld: skipping incompatible ../Library/lib\libgameplay.a when searching for -lgameplay
D:\bbndk\host_10_2_0_15\win32\x86\usr\bin\ntox86-ld: cannot find -lgameplay
cc: D:/bbndk/host_10_2_0_15/win32/x86/usr/bin/ntox86-ld caught signal 5
make[2]: *** [o-g/ARCascadesProject] Error 1
make[2]: Leaving directory `D:/BB_10_Workspace/ARCascadesProject/x86'
make[1]: *** [debug] Error 2
make[1]: Leaving directory `D:/BB_10_Workspace/ARCascadesProject/x86'
make: *** [Simulator-Debug] Error 2
19:54:02 Build Finished (took 25s.979ms)
Please look into the issue. I am new to this technology and am trying to implement augmented reality on BB10 using Wikitude. Please help; I am stuck here. | 00
| 00
| 11
| 00
| 00
| 00
|
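The "skipping incompatible" lines in the log above say the x86 simulator linker found ARM builds of libARchitectSDK.a and friends in ../Library/lib. A qmake sketch of one way to handle this, assuming the Wikitude download ships separate ARM and x86 library folders (the folder names here are placeholders for whatever layout the SDK actually has):

# Pick architecture-matched Wikitude libraries per build target.
# The arm/ and x86/ subfolder names are assumptions -- adjust to the
# actual layout of your SDK download.
simulator {
    LIBS += -L$$PWD/Library/lib/x86
} else {
    LIBS += -L$$PWD/Library/lib/arm
}
LIBS += -lARchitectSDK -lARchitectLibrary -lgameplay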
identifier float2 and float4 undefined (Oculus Rift)
For a school project we need to integrate the Oculus Rift into a previously made DX9 engine. All is going well, but I am stuck at the distortion part of implementing the Oculus.
I came to the part where I need to implement my shader for the barrel distortion, and for that you use 'float2' and 'float4'. I can't seem to find these types in the OVR SDK or anywhere else. This results in 'undefined identifier' errors.
Does anyone know where I can find these constant types?
Thanks! | 00
| 00
| 11
| 00
| 00
| 00
|
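On the float2/float4 question above: these are HLSL built-in types, so they exist inside the shader source that the barrel-distortion effect compiles, not in C++ headers, which is why neither the OVR SDK nor anything else defines them for the host code. If matching types are wanted on the C++ side of a DX9 project, a small sketch:

// float2/float4 live in HLSL, not C++. Mirror them on the host side with
// the D3DX vector types (DX9 SDK) or trivial structs.
#include <d3dx9math.h>

typedef D3DXVECTOR2 float2;
typedef D3DXVECTOR4 float4;

// Dependency-free alternative:
// struct float2 { float x, y; };
// struct float4 { float x, y, z, w; };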
vuforia sample application code 100 error in airplane mode
I need to update an iOS app working with Vuforia.
The app compiles and runs normal, but I need to test airplane mode. In airplane mode, when trying to use the device's camera, I get this error:
Error initializing AR:Error Domain=vuforia_sample_application Code=100 "The operation couldn’t be completed."
I've been unable to find out what this error is.
What is error code 100? Also, does Vuforia need a web connection?
Thanks for your help. | 00
| 00
| 11
| 00
| 00
| 00
|
Getting "System": ambiguous symbol error using C++ CLI
I have a project (VS2012) that uses the 0.5.0.1 SDK. The SDK includes a System class under the OVR namespace (OVR::System). In a class that I wrote, I'm using ::System. This works and isn't what's giving me the problem. When I compile, I get error C2872: 'System': ambiguous symbol and the problematic files are typeinfo, xlocale, and xiosbase in C:....\Microsoft Visual Studio 12.0\VC\include. The error says that "System" could either be "System" or OVR::System. Is there a way around this? How can I get typeinfo, xlocale and xiosbase to use ::System and not OVR::System without changing the contents of the files (which I don't want to do)? | 00
| 00
| 11
| 00
| 00
| 00
|
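A sketch of one common way around the ambiguity described above: the standard headers only see two candidates for the bare name System when a using-directive has pulled OVR into the global scope (or when the SDK headers are included before the system ones). Dropping any using namespace OVR; and qualifying both sides explicitly usually resolves it; whether the 0.5 SDK exposes OVR::System::Init under this exact header path is an assumption here:

// Avoid `using namespace OVR;` at file scope in C++/CLI code: it makes the
// bare name `System` ambiguous for every standard header compiled later.
void InitTracking()
{
    OVR::System::Init();        // the SDK's System, fully qualified
    ::System::GC::Collect();    // the CLR's System, anchored at global scope
}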
how to interact with obj or collada-model in AFrame
<a-assets>
<a-mixin id="ring" geometry="primitive: ring; radiusOuter: 0.20;
radiusInner: 0.15;"
material="color: cyan; shader: flat"
cursor=" fuse: true"></a-mixin>
<a-asset-item id="mancloth" src="../models/man.obj"></a-asset-item>
<a-asset-item id="manclothmtl" src="../models/man.mtl"></a-asset-item>
</a-assets>
<a-entity camera look-controls wasd-controls><a-entity mixin="ring" position="0 0 -3">
<a-animation begin="cursor-click" easing="ease-in" attribute="scale"
fill="backwards" from="0.3 0.3 0.3" to="1 1 1"></a-animation>
<a-animation begin="cursor-fusing" easing="ease-in" attribute="scale"
fill="forwards" from="1 1 1" to="0.3 0.3 0.3"></a-animation>
</a-entity>
</a-entity>
<a-obj-model scale="1 1 1" src="#mancloth" mtl="#manclothmtl"></a-obj-model>
I use the camera to interact with the OBJ, but aframe.js throws an error at line 57766. How can I solve this problem without changing aframe.js?
var intersectedEl = intersection.object.el;
intersectedEl.emit('raycaster-intersected', {el: el,intersection:intersection});
intersection.object is a THREE.Mesh, so intersection.object.el is undefined! | 00
| 00
| 11
| 00
| 00
| 00
|
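A sketch for the raycaster problem above that avoids touching aframe.js: the ray hits a child THREE.Mesh of the loaded OBJ, and only the model's root Object3D carries the back-reference to its A-Frame entity, so walk up the parent chain in your own component code (this mirrors what later A-Frame releases do internally):

// Walk up from the intersected mesh until an object that carries the
// A-Frame entity back-reference (.el) is found.
function findIntersectedEl(object3D) {
  var obj = object3D;
  while (obj && !obj.el) {
    obj = obj.parent;
  }
  return obj ? obj.el : null;
}

var intersectedEl = findIntersectedEl(intersection.object);
if (intersectedEl) {
  intersectedEl.emit('raycaster-intersected', { el: el, intersection: intersection });
}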
Oculus OVR_CAPI.cpp error
I'm currently working on a project where I need to read the Oculus Rift DK2 sensors. I have searched the web for usable samples; sadly, the only samples I can find cause me a lot of trouble with SDK versions and such. I found a tutorial on how to implement some basic C++ code to read the pitch, roll & yaw. I used the SDK for Windows V1.8.0.
#include "stdafx.h"
#include <iostream>
#include "../../OculusSDK/LibOVR/Include/OVR_CAPI.h"
#include <thread>
#include <iomanip>
#define COLW setw(15)
using namespace std;
int main()
{
// Initialize our session with the Oculus HMD.
if (ovr_Initialize(nullptr) == ovrSuccess)
{
ovrSession session = nullptr;
ovrGraphicsLuid luid;
ovrResult result = ovr_Create(&session, &luid);
if (result == ovrSuccess)
{ // Then we're connected to an HMD!
// Let's take a look at some orientation data.
ovrTrackingState ts;
while (true)
{
ts = ovr_GetTrackingState(session, 0, true);
ovrPoseStatef tempHeadPose = ts.HeadPose;
ovrPosef tempPose = tempHeadPose.ThePose;
ovrQuatf tempOrient = tempPose.Orientation;
cout << "Orientation (x,y,z): " << COLW << tempOrient.x << ","
<< COLW << tempOrient.y << "," << COLW << tempOrient.z
<< endl;
// Wait a bit to let us actually read stuff.
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
ovr_Destroy(session);
}
ovr_Shutdown();
// If we've fallen through to this point, the HMD is no longer
// connected.
}
return 0;
}
There are (as far as I know) no problems with this part.
When I included OVR_CAPI.h, an OVR_CAPI.cpp magically appeared in the folder where OVR_CAPI.h is located. This .cpp file contains the following:
#include "stdafx.h"
#include "OVR_CAPI.h"
OVR_PUBLIC_FUNCTION(ovrResult) ovr_Initialize(const ovrInitParams * params)
{
return OVR_PUBLIC_FUNCTION(ovrResult)();
}
When I try to build, the errors "expected an expression" and "C2062 (type 'int' unexpected)" occur, both on line 6.
Is anyone familiar with this problem, or can someone give me advice on how to get started with Oculus software? | 00
| 00
| 11
| 00
| 00
| 00
|
OpenCV trying to integrate ARtoolkit with OpenCV
I am new to C++, OpenCV and ARToolKit.
I am trying to build a motion tracking device.
Right now, I am following this tutorial:
https://artoolkit.org/blog/2016/05/opencv-with-artoolkit
However, I run into a problem when trying to build the SimpleTest example on a Linux machine.
The error I get is:
"clang++ -c -O3 -fPIC -march=core2 -DHAVE_NFT=1 -I/usr/include/x86_64-linux-gnu -pthread -I/usr/include/gstreamer-0.10 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/libxml2 -I../../include simpleTest.c
clang: warning: treating 'c' input as 'c++' when in C++ mode, this behavior is deprecated
In file included from simpleTest.c:79:
In file included from ../../include/linux-x86_64/opencv2/opencv.hpp:59:
/usr/include/opencv2/contrib/contrib.hpp:273:23: error: no template named 'vector'; did you mean 'std::vector'?"
In the simpleTest code I added lines like this:
#include <linux-x86_64/opencv2/opencv.hpp>
#include <linux-x86_64/opencv2/opencv_modules.hpp>
using namespace std;
using namespace cv;
In the makefile I added this:
LIBS= -lARgsub -lARvideo -lAR -lARICP -lAR -lglut -lGLU -lGL -lX11 -lm -lpthread -ljpeg -pthread -lgstreamer-0.10 -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lxml2 -lglib-2.0 -ldc1394 -lraw1394 -lopencv_shape -lopencv_stitching -lopencv_objdetect -lopencv_superres -lopencv_videostab -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_video -lopencv_photo -lopencv_ml -lopencv_imgproc -lopencv_flann -lopencv_viz -lippicv -lopencv_core
and
CC = clang++ | 00
| 00
| 11
| 00
| 00
| 00
|
Argon.js: Error: A frame state has not yet been received
I am attempting to use argon.js server-side so that I can convert LLA coordinates to a predefined reference frame. I'm not rendering any graphics, of course; I'm just using it to convert values. See the SO question
Using Geo-coordintes Instead of Cartesian to Draw in Argon and A-Frame
for details.
Per that thread, I am trying to create a Cesium entity for a fixed coordinate, which I will later use to create other entities relative to it. When I do, everything runs until the last line of the program, var gtrefEntityPose = app.context.getEntityPose(gtrefEntity);, where I receive Error: A frame state has not yet been received.
At first I thought this might be due to setting the default reference frame with app.context.setDefaultReferenceFrame(app.context.localOriginEastUpSouth), since I did not have a local user yet because it runs server-side. I looked up the documentation for setDefaultReferenceFrame, considered whether I might need convertEntityReferenceFrame, and read the source code for each, but I am unable to make sense of it with my knowledge of the program.
I've put the error as well as my application code below.
Thanks for your help!
/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4323
if (!cesium_imports_1.defined(this.serializedFrameState)) throw new Error(
^
Error: A frame state has not yet been received
at ContextService.Object.defineProperty.get [as frame] (/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4323:89)
at ContextService.getTime (/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4343:32)
at ContextService.getEntityPose (/home/path/to/folder/node_modules/@argonjs/argon/dist/argon.js:4381:37)
at Object.<anonymous> (/home/path/to/folder/test.js:27:35)
at Module._compile (module.js:460:26)
at Object.Module._extensions..js (module.js:478:10)
at Module.load (module.js:355:32)
at Function.Module._load (module.js:310:12)
at Function.Module.runMain (module.js:501:10)
at startup (node.js:129:16)
Here is my code:
var Argon = require('@argonjs/argon');
var Cesium = Argon.Cesium;
var Cartesian3 = Cesium.Cartesian3;
var ConstantPositionProperty = Cesium.ConstantPositionProperty;
var ReferenceFrame = Cesium.ReferenceFrame;
var ReferenceEntity = Cesium.ReferenceEntity;
//var degToRad = THREE.Math.degToRad;
const app = Argon.init();
app.context.setDefaultReferenceFrame(app.context.localOriginEastUpSouth);
var data = { lla : { x : -84.398881, y : 33.778463, z : 276 }};
var gtref = Cartesian3.fromDegrees(data.lla.x, data.lla.y, data.lla.z);
var options = { position: new ConstantPositionProperty(gtref, ReferenceFrame.FIXED),
orientation: Cesium.Quaternion.IDENTITY
};
var gtrefEntity = new Cesium.Entity(options);
var gtrefEntityPose = app.context.getEntityPose(gtrefEntity); | 00
| 00
| 11
| 00
| 00
| 00
|
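Since no frame state ever arrives server-side, one sketch for the question above is to skip getEntityPose entirely and do the LLA-to-local conversion with the Cesium bundled inside Argon, assuming it exposes the standard Transforms helpers (the second coordinate below is arbitrary, for illustration):

var Cesium = require('@argonjs/argon').Cesium;

// Local east-north-up frame anchored at the reference point from the question.
var origin = Cesium.Cartesian3.fromDegrees(-84.398881, 33.778463, 276);
var enuToFixed = Cesium.Transforms.eastNorthUpToFixedFrame(origin);
var fixedToEnu = Cesium.Matrix4.inverseTransformation(enuToFixed, new Cesium.Matrix4());

// Any other LLA point, expressed in metres relative to the origin:
var target = Cesium.Cartesian3.fromDegrees(-84.3990, 33.7790, 276);
var local = Cesium.Matrix4.multiplyByPoint(fixedToEnu, target, new Cesium.Cartesian3());
console.log(local); // x = east, y = north, z = up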
Aframe remove component inside for loop
I have a html template similar to the below code
<a-entity id="id1">
<a-entity template="src: t1.template;type:nunjucks">
</a-entity>
</a-entity>
t1.template
<a-entity id="id2">
{% for i in 4 %}
<a-entity template="src: t2.template; type: nunjucks"></a-entity>
{% endfor %}
</a-entity>
t2.template
<a-entity id="id3" myComponent="x:4">
<a-entity>...
<a-entity>...</a-entity>
</a-entity>
</a-entity>
Entity components are displayed on the screen as required. I now want to remove the entire id1 when the user clicks on any of the four components in id3. My component code is as below:
AFRAME.registerComponent('myComponent', {
schema: {
x: {type: 'number', default: 0}
},
update: function () {
//set some attribute for entities inside id3
//adding event listener to id3.
this.el.addEventListener('click', function () {
setTimeout(function () {
var categoryEl = scene.querySelectorAll('#id3');
totalCategory = categoryEl.length;
for(i=0;i<totalCategory;i++){
categoryEl[i].removeAttribute('myComponent');
removeAttributeCount++;
}
}, 1500);
});
},
remove: function () {
//To check whether component is removed from all element
if(removeAttributeCount == totalCategory){
var id1 = this.el.sceneEl.querySelector('#id1');
id1.parentNode.removeChild(id1);
}
}
});
I am getting this error:
Uncaught TypeError: Cannot convert undefined or null to object
at NewComponent.remove (https://cdn.rawgit.com/donmccurdy/aframe-extras/v3.2.7/dist/aframe-extras.js:4742:16)
at HTMLElement.value (https://aframe.io/releases/0.5.0/aframe.js:71889:17)
at bound (https://aframe.io/releases/0.5.0/aframe.js:76993:17)
at Array.forEach (native)
at HTMLElement.value (https://aframe.io/releases/0.5.0/aframe.js:71567:36)
at NewComponent.remove (http://localhost:63342/myProj1HTML/Shop/ShopTrail-1/index.js:91:32)
at HTMLElement.value (https://aframe.io/releases/0.5.0/aframe.js:71889:17)
at HTMLElement.value (https://aframe.io/releases/0.5.0/aframe.js:71970:16)
at HTMLElement.value (https://aframe.io/releases/0.5.0/aframe.js:72095:14)
at HTMLElement.value (https://aframe.io/releases/0.5.0/aframe.js:72015:14)
All the elements are captured perfectly, but the error occurs on removing the child from the parent node.
Could someone please help me get this working? Thanks in advance | 00
| 00
| 11
| 00
| 00
| 00
|
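A sketch of a workaround for the remove-handler crash above: the stack trace shows another component's remove() firing while A-Frame is still tearing components down, so deferring the DOM removal until after that loop (and null-checking the parent) sidesteps the race. This is a guess at the failure mode, not a confirmed diagnosis:

remove: function () {
  if (removeAttributeCount === totalCategory) {
    var sceneEl = this.el.sceneEl;
    // Defer the removal so it runs after A-Frame's component-teardown
    // loop has finished iterating.
    setTimeout(function () {
      var id1 = sceneEl.querySelector('#id1');
      if (id1 && id1.parentNode) {
        id1.parentNode.removeChild(id1);
      }
    }, 0);
  }
}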
Unity Photon PUN connection error with Hololens
I'm working with Unity and the Photon Engine (PUN). I used the DemoSynchronization scene as an example and tested different devices. The connection with all other devices is totally fine, but I get an error with the HoloLens.
Connect() to 'ns.exitgames.com' failed: System.UnauthorizedAccessException: Access is denied.
A network capability is required to access this network resource
at Windows.Networking.Sockets.StreamSocket.ConnectAsync(EndpointPair endpointPair, SocketProtectionLevel protectionLevel)
at ExitGames.Client.Photon.SocketTcpNetFxCore.<ConnectSync>d__13.MoveNext()
Do I need to grant some specific network access on the HoloLens? | 00
| 00
| 11
| 00
| 00
| 00
|
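The UnauthorizedAccessException above ("A network capability is required") is the UWP sandbox talking: HoloLens apps must declare network capabilities before any socket connect succeeds. A sketch of the Package.appxmanifest entries; in Unity these correspond to the checkboxes under Player Settings > Publishing Settings > Capabilities:

<!-- Package.appxmanifest -- network capabilities for a UWP/HoloLens app. -->
<Capabilities>
  <Capability Name="internetClient" />
  <Capability Name="internetClientServer" />
  <Capability Name="privateNetworkClientServer" />
</Capabilities>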
Display near by places in Location based Augmented reality Android
I wanted to try coding location-based AR which shows places. I tried the Android code in the following link, Augmented reality android example, and ended up with the following error on a Redmi Note 3 (API 24, Nougat). I made some changes in main.xml:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="@+id/surface_overlay">
<SurfaceView android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="@+id/surface_camera" />
</FrameLayout>
Error
08-18 09:15:15.588 32701-32701/? E/AndroidRuntime: FATAL EXCEPTION: main
Process: advisory.arplaces, PID: 32701
java.lang.NullPointerException: Attempt to write to field 'advisory.arplaces.arview.DataView advisory.arplaces.arview.RadarView.view' on a null object reference
at advisory.arplaces.arview.DataView.draw(DataView.java:329)
at advisory.arplaces.arview.RadarMarkerView.onDraw(ARView.java:390) | 00
| 00
| 11
| 00
| 00
| 00
|
swift 4.0: Overriding 'prepare' must be as available as declaration it overrides
I was trying to integrate Apple's ARKit example app into my app. As ARKit is only an additional feature, I need to support lower versions of iOS. I added the @available(iOS 11.0, *) attribute to all the ARKit example app classes. It almost works, except for this one error: "Overriding 'prepare' must be as available as declaration it overrides". Any idea how I can resolve this issue?
Xcode error image | 00
| 00
| 11
| 00
| 00
| 00
|
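On the 'prepare' availability error above: prepare(for:sender:) is declared on UIViewController with no iOS 11 restriction, so an override marked @available(iOS 11.0, *) is narrower than the declaration it overrides, which Swift forbids. A sketch (the class name is hypothetical): leave the view controller available everywhere and gate only the ARKit calls at runtime:

import UIKit
import ARKit

class ARContainerViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        if #available(iOS 11.0, *) {
            // ARKit-only work is gated at runtime instead of class level.
            let sceneView = ARSCNView(frame: view.bounds)
            view.addSubview(sceneView)
            sceneView.session.run(ARWorldTrackingConfiguration())
        } else {
            // Non-AR fallback for older iOS versions.
        }
    }

    // No @available attribute here: the override keeps the availability
    // of UIViewController's own declaration.
    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        super.prepare(for: segue, sender: sender)
    }
}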
Contextual type 'Any' cannot be used with dictionary literal
I'm getting the error above. I'm new to Swift, and I would like to create JSON out of ARKit data:
let jsonObject: [String: Any] = [
"imageName": imageName,
"timeStamp": currentFrame.timestamp,
"cameraPos": dictFromVector3(positionFromTransform(currentFrame.camera.transform)),
"cameraEulerAngle": dictFromVector3(currentFrame.camera.eulerAngles),
"cameraTransform": arrayFromTransform(currentFrame.camera.transform),
"cameraIntrinsics": arrayFromTransform(currentFrame.camera.intrinsics),
"imageResolution": [
"width": currentFrame.camera.imageResolution.width,
"height": currentFrame.camera.imageResolution.height
],
"lightEstimate": currentFrame.lightEstimate?.ambientIntensity,
"ARPointCloud": [
"count": currentFrame.rawFeaturePoints?.count,
"points": arrayFromPointCloud(currentFrame.rawFeaturePoints)
]
] | 00
| 00
| 11
| 00
| 00
| 00
|
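A sketch of the usual fix for the literal above: the Optional values (lightEstimate?.ambientIntensity, rawFeaturePoints?.count) and the mixed-type nested literals stop the compiler from inferring [String: Any]. Coalescing the Optionals and annotating the nested dictionaries tends to resolve it; only the affected keys are shown, and arrayFromPointCloud is the question's own helper:

let jsonObject: [String: Any] = [
    "imageName": imageName,
    "timeStamp": currentFrame.timestamp,
    // Optionals cannot sit in a [String: Any] literal; give them defaults
    // (or append `as Any` if the nil must survive serialisation).
    "lightEstimate": currentFrame.lightEstimate?.ambientIntensity ?? 0,
    "ARPointCloud": [
        "count": currentFrame.rawFeaturePoints?.count ?? 0,
        "points": arrayFromPointCloud(currentFrame.rawFeaturePoints)
    ] as [String: Any]
]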
Aframe React dynamic string value
Just trying to get my head around using React and A-Frame together with the spread operator, etc.
So I have an Entity using Meshline
<Entity meshline="lineWidth: 20; path: -2 -1 0, 0 -2 0, 2 -1; color: #E20049" />
I'm trying to make that path string dynamic with two objects, i.e. something like
buttonConfig.position = {
x:config.position.x-3,
y:config.position.y+3,
z:config.position.z,
}
lineX = {
lineWidth : 20,
path: {...config.position,...buttonConfig.position},
color: '#ffffff',
}
But that's obviously not working because I get a
TypeError: value.split is not a function
How do I turn that object into a string of values via AFRAME/React? Or am I trying to be too clever here and should just build the string? | 00
| 00
| 11
| 00
| 00
| 00
|
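A sketch for the meshline question above: the component's path schema expects a string of space-separated x y z triples joined by commas, so serialise the two point objects instead of spreading them into one object:

// Turn {x, y, z} objects into the "x y z, x y z" string meshline expects.
const toVec3 = (p) => `${p.x} ${p.y} ${p.z}`;

const lineX = {
  lineWidth: 20,
  path: [config.position, buttonConfig.position].map(toVec3).join(', '),
  color: '#ffffff',
};

// <Entity meshline={lineX} />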
the best practice to integrate arFragment (sceneForm) with existing Fragment APP
We have been working on adding AR features to our existing app for a couple of months with limited progress. We were very excited to read the recent developments from Google on Sceneform and ArFragment. Our current app consists of three Fragments, and one of them needs AR features.
It looked straightforward to us, so we replaced the Fragment in our app with an ArFragment. The build succeeds, but the app stops while running, with little information for debugging. Any suggestion on the proper steps to upgrade from Fragment to ArFragment? Or maybe I have missed the point of ArFragment here?
In order to show the problem without making you go through our lengthy code (yet valuable to us), we constructed a dummy project based on the sample project from Google: HelloSceneform. Basically, we changed the static Fragment to a dynamic Fragment. Only two files are changed and two files are added, which are attached below. The modified project builds successfully but stops when starting to run.
Thank you
Peter
/////// File modified, HelloSceneformActivity.java:
import android.support.v4.app.FragmentTransaction;
// private ArFragment arFragment;
private ItemOneFragment arFragment;
//arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment);
arFragment = ItemOneFragment.newInstance();
//Manually displaying the first fragment - one time only
FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
transaction.replace(R.id.frame_layout, arFragment);
transaction.commit();
/////// File modified, activity_ux.xml:
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".HelloSceneformActivity">
</FrameLayout>
////// File added fragment_item_one.xml:
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/frame_layout"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".ItemOneFragment">
</FrameLayout>
/////// File added, ItemOneFragment.java:
package com.google.ar.sceneform.samples.hellosceneform;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import com.google.ar.sceneform.ux.ArFragment;
public class ItemOneFragment extends ArFragment {
public static ItemOneFragment newInstance() {
ItemOneFragment fragment = new ItemOneFragment();
return fragment;
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
return inflater.inflate(R.layout.fragment_item_one, container, false);
}
} | 00
| 00
| 11
| 00
| 00
| 00
|
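One common cause of a silent startup crash when swapping in an ArFragment, offered as an assumption about the project above: the HelloSceneform manifest entries do not travel with the fragment. An ARCore-required app needs at least the following in AndroidManifest.xml:

<!-- Required for any ARCore/Sceneform app; without these the ArFragment
     typically dies at startup with little useful log output. -->
<uses-permission android:name="android.permission.CAMERA" />

<application>
    <meta-data android:name="com.google.ar.core" android:value="required" />
</application>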
Oculus Go Developer Mode - Installed apk successfully but it does not show in Oculus
I got a Success message after doing adb install of the apk.
However, when I go to Library > Unknown Sources, I do not see my app.
The XR settings for the project have Oculus as the SDK.
What am I missing? | 00
| 00
| 11
| 00
| 00
| 00
|
How to get value of JArray dictionary by key
I have this json string:
[
[
{
"Antibiotic after diagnosis":[
"Azithromycin",
"Ciprofloxacin HCl",
"Ampicillin Sodium"
],
"City":[
"Tel Aviv",
"Jerusalem"
]
}
],
[
{
"Antibiotic after diagnosis":"Azithromycin",
"City":"Tel Aviv"
},
{
"Antibiotic after diagnosis":"Ciprofloxacin HCl",
"City":"Jerusalem"
}
]
]
I deserialized this string:
data = Newtonsoft.Json.JsonConvert.DeserializeObject<List<object>>("*json str*");
JParameters = data[0] as JArray;
Debug.Log(JParameters["Antibiotic after diagnosis"]);
But when I run the code, it crashes on the line (Debug.Log(JParameters["Antibiotic after diagnosis"]);) with the following error:
"ArgumentException: Accessed JArray values with invalid key value: "Antibiotic after diagnosis". Int32 array index expected." | 00
| 00
| 11
| 00
| 00
| 00
|
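A sketch of the indexing fix for the JSON question above: data[0] is a JArray whose single element is the JObject holding the named properties, so a numeric index is needed before the string key:

using Newtonsoft.Json.Linq;

// data[0] is a JArray: [ { "Antibiotic after diagnosis": [...], "City": [...] } ]
JArray JParameters = data[0] as JArray;
JObject first = (JObject)JParameters[0];      // step into the array first

JArray antibiotics = (JArray)first["Antibiotic after diagnosis"];
foreach (var name in antibiotics)
{
    Debug.Log(name.ToString());               // Unity's Debug, as in the question
}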
Running Oculus Unity Sample Framework
Are there some pre-requisites to running the Oculus Sample Framework in Unity? I start with a new project, add the package from the store, and that's as far as I get. 51 compiler errors, mainly relating to missing OVR* namespaces.
Also "AssetImporter is referencing an asset from the previous import. This should not happen.". I thought the idea was that a sample framework would just work?
I'm running Unity 2018.2.11f1. | 00
| 00
| 11
| 00
| 00
| 00
|