Dataset schema:
- text: string, lengths from 144 to 17.3k characters
- Results interpretation: class label, 2 classes
- Frameworks usage: class label, 2 classes
- Algorithms design: class label, 2 classes
- Algorithms implementation: class label, 2 classes
- Launching problem: class label, 2 classes
- Performance issue: class label, 2 classes
Each row below pairs a question text with its six binary labels.
Play audio on marker detect A-frame AR.JS I'm trying to play an audio source when a marker is detected with the A-frame and AR.JS libraries. Currently I have the following scene, camera, and marker: <a-scene embedded arjs='sourceType: webcam; debugUIEnabled: false;';> <a-marker preset="hiro"> <a-box position='0 0.5 0' material='color: black;'></a-box> </a-marker> <a-assets> <audio id="sound" src="audio.mp3" preload="auto"></audio> </a-assets> <a-entity sound="src: #sound" autoplay="false"></a-entity> <a-entity camera></a-entity> </a-scene> I initially tried the following: var entity = document.querySelector('[sound]'); if(document.querySelector("a-marker").object3D.visible == true){ entity.components.sound.playSound(); console.log("playing"); } else { entity.components.sound.pauseSound(); console.log("not playing"); } However, it doesn't work. Any ideas as to why this isn't or doesn't work? I'm not even seeing a console log, so it doesn't appear to run either.
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Text recognition with Unity without using Vuforia I have checked many posts about text recognition with Unity, but every post suggests Vuforia. However, Vuforia's text recognition is deprecated, and according to Vuforia it only detects serif and sans-serif fonts. I want my application to detect even handwritten text to some extent. So my question is: is there any SDK, or anything else, that will help detect handwritten text too? Thanks in advance.
Labels: Results interpretation = 0, Frameworks usage = 1, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
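A possible direction for the handwriting question above, sketched below in C#, is to send camera frames to a cloud OCR service instead of an AR SDK. This is only an illustration under assumptions: the endpoint is Google Cloud Vision's images:annotate REST API with the DOCUMENT_TEXT_DETECTION feature (which covers handwriting to some extent), the API key is a placeholder, and the response parsing is left out.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class HandwritingOcrSketch : MonoBehaviour
{
    // Placeholder key; a real project would not hard-code this.
    const string ApiKey = "YOUR_API_KEY";
    const string Endpoint = "https://vision.googleapis.com/v1/images:annotate?key=" + ApiKey;

    // Call from StartCoroutine with a readable Texture2D of the camera frame.
    public IEnumerator Recognize(Texture2D frame)
    {
        // Cloud Vision expects base64-encoded image bytes inside a JSON request body.
        string base64 = System.Convert.ToBase64String(frame.EncodeToJPG());
        string body = "{\"requests\":[{\"image\":{\"content\":\"" + base64 + "\"}," +
                      "\"features\":[{\"type\":\"DOCUMENT_TEXT_DETECTION\"}]}]}";

        using (UnityWebRequest request = new UnityWebRequest(Endpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(body));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();

            // The returned JSON contains fullTextAnnotation.text; parse it with any JSON library.
            Debug.Log(request.downloadHandler.text);
        }
    }
}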
AR.js distorted perspective: How to use a personalized camera calibration file `camera_para.dat` so that the "floor" plane is horizontal? I'm looking into AR.js for an augmented reality use case where 3D objects do not appear directly on the hiro marker, but somewhere around the marker. When I view my AR scene through my iPhone 7 from the top, everything looks fine, but when I tilt my camera to get more perspective, AR.js doesn't apply the same perspective to the AR world so that distant virtual objects appear as if they were located on an inclined plane. I created an example page to illustrate this behaviour: When viewed from above, the grids match perfectly, but when viewed from the side, the planes mismatch. Are there any settings I can apply to configure AR.js (or ARToolKit, which it depends on)? Maybe there's a way to define the field of view of my webcam there? [EDIT] One week later, I would reword my question to: How can I use a device-specific camera_para.dat ARToolkit camera calibration file in AR.js without generating side effects such as a distorted rendering?
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
AR.JS custom marker I'm trying to use a custom marker for AR.JS. However after following the directions to create a custom marker and then change the marker presets, it still doesn't work. Any ideas on how to properly implement? <a-marker preset="custom" type="pattern" url="img/pattern-marker.patt"> <a-box position='0 0.5 0' material='color: black;' soundhandler></a-box> </a-marker> Is this not correct implementation in the marker? For reference, I used a very simple b/w circular image to test and it still did not work. Is there some other code that needs to be written to register a custom marker pattern?
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to capture video from web camera using unity? I am trying to capture video from web camera using unity and hololens. I found this example on the unity page here . I am pasting the code below. The light on the cam turns on, however it doesnt record. The VideoCapture.CreateAsync doesnt create a VideoCapture. So the delegate there is never executed. I saw this thread, however that was on. On the player settings the webcam and microphone capabilities are on. What could be the problem? using UnityEngine; using System.Collections; using System.Linq; using UnityEngine.XR.WSA.WebCam; public class VideoCaptureExample : MonoBehaviour { static readonly float MaxRecordingTime = 5.0f; VideoCapture m_VideoCapture = null; float m_stopRecordingTimer = float.MaxValue; // Use this for initialization void Start() { StartVideoCaptureTest(); Debug.Log("Start"); } void Update() { if (m_VideoCapture == null || !m_VideoCapture.IsRecording) { return; } if (Time.time > m_stopRecordingTimer) { m_VideoCapture.StopRecordingAsync(OnStoppedRecordingVideo); } } void StartVideoCaptureTest() { Resolution cameraResolution = VideoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First(); Debug.Log(cameraResolution); float cameraFramerate = VideoCapture.GetSupportedFrameRatesForResolution(cameraResolution).OrderByDescending((fps) => fps).First(); Debug.Log(cameraFramerate); VideoCapture.CreateAsync(false, delegate (VideoCapture videoCapture) { Debug.Log("NULL"); if (videoCapture != null) { m_VideoCapture = videoCapture; Debug.Log("Created VideoCapture Instance!"); CameraParameters cameraParameters = new CameraParameters(); cameraParameters.hologramOpacity = 0.0f; cameraParameters.frameRate = cameraFramerate; cameraParameters.cameraResolutionWidth = cameraResolution.width; cameraParameters.cameraResolutionHeight = cameraResolution.height; cameraParameters.pixelFormat = CapturePixelFormat.BGRA32; m_VideoCapture.StartVideoModeAsync(cameraParameters, VideoCapture.AudioState.ApplicationAndMicAudio, OnStartedVideoCaptureMode); } else { Debug.LogError("Failed to create VideoCapture Instance!"); } }); } void OnStartedVideoCaptureMode(VideoCapture.VideoCaptureResult result) { Debug.Log("Started Video Capture Mode!"); string timeStamp = Time.time.ToString().Replace(".", "").Replace(":", ""); string filename = string.Format("TestVideo_{0}.mp4", timeStamp); string filepath = System.IO.Path.Combine(Application.persistentDataPath, filename); filepath = filepath.Replace("/", @"\"); m_VideoCapture.StartRecordingAsync(filepath, OnStartedRecordingVideo); } void OnStoppedVideoCaptureMode(VideoCapture.VideoCaptureResult result) { Debug.Log("Stopped Video Capture Mode!"); } void OnStartedRecordingVideo(VideoCapture.VideoCaptureResult result) { Debug.Log("Started Recording Video!"); m_stopRecordingTimer = Time.time + MaxRecordingTime; } void OnStoppedRecordingVideo(VideoCapture.VideoCaptureResult result) { Debug.Log("Stopped Recording Video!"); m_VideoCapture.StopVideoModeAsync(OnStoppedVideoCaptureMode); } } EDIT: The problem was that the API doesnt work on the Emulator
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Unity3D / Vuforia AR Camera not showing camera feed on camera background I am evaluating Unity3d 2017.4.2f2 with Vuforia and Xcode 9.3 for an AR App and have the problem, that the camera background works in the simulator but not on the iPad pro with iOS 11.3,1. It seems to be a known problem and I checked nearly every solution including using XCode9.2 and copying the device config, deleting Metal from the Vuforia Settings, creating new AR Cameras, even the latest Unity3D beta and whatnot. The background does not show up and I get the following error: 2018-05-03 14:53:33.396112+0200 ProductName[2340:1694266] ERROR/AR(2340) 2018-05-04 14:53:33: CameraDevice::getCameraCalibration(): Failed to get camera calibration because the camera is not initialized. and cameraDeviceStartCamera 2018-05-03 14:53:33.460389+0200 ProductName[2340:1694266] ERROR/AR(2340) 2018-05-04 14:53:33: VideoBackgroundConfig with screen size of zero received, skipping config step If there is anything else I can do or that points me into the right direction is appreciated. many thanks in advance
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to reversely play a video in unity? I am implementing a VR 360 video viewer in Unity and need to implement a "play in reverse" function. Some approaches I tried (and obviously failed): Set the playbackSpeed field of the VideoPlayer in a negative number. Result: Video pauses Reversing the video frame by frame using the method suggested here: How to Rewind Video Player in Unity? Result: Super laggy playback Instead of using the default VideoPlayer, use Vive Media Player (which builds on top of ffmpeg) (https://assetstore.unity.com/packages/tools/video/vive-media-decoder-63938). Reverse the video frame by frame and force the renderer to render the frame at each call of Update() even if the state of the decoder is DecoderState.SEEK_FRAME. Code (Based on ViveMediaDecoder.cs from the asset): // Video progress is triggered using Update. Progress time would be set by nativeSetVideoTime. void Update() { Debug.Log(decoderState); switch (decoderState) { case DecoderState.START: if (isVideoEnabled) { // Prevent empty texture generate green screen.(default 0,0,0 in YUV which is green in RGB) if (useDefault && nativeIsContentReady(decoderID)) { getTextureFromNative(); setTextures(videoTexYch, videoTexUch, videoTexVch); useDefault = false; } // Update video frame by dspTime. double setTime = AudioSettings.dspTime - globalStartTime; // Normal update frame. if (setTime < videoTotalTime || videoTotalTime == -1.0f) { if (seekPreview && nativeIsContentReady(decoderID)) { setPause(); seekPreview = false; unmute(); } else { nativeSetVideoTime(decoderID, (float) setTime); GL.IssuePluginEvent(GetRenderEventFunc(), decoderID); } } else { isVideoReadyToReplay = true; } } if (nativeIsVideoBufferEmpty(decoderID) && !nativeIsEOF(decoderID)) { decoderState = DecoderState.BUFFERING; hangTime = AudioSettings.dspTime - globalStartTime; } break; case DecoderState.SEEK_FRAME: // // Code Added: // setTime = AudioSettings.dspTime - globalStartTime; nativeSetVideoTime(decoderID, (float) setTime); GL.IssuePluginEvent(GetRenderEventFunc(), decoderID); // // if (nativeIsSeekOver(decoderID)) { globalStartTime = AudioSettings.dspTime - hangTime; decoderState = DecoderState.START; if (lastState == DecoderState.PAUSE) { seekPreview = true; mute(); } } break; case DecoderState.BUFFERING: if (nativeIsVideoBufferFull(decoderID) || nativeIsEOF(decoderID)) { decoderState = DecoderState.START; globalStartTime = AudioSettings.dspTime - hangTime; } break; case DecoderState.PAUSE: case DecoderState.EOF: default: break; } if (isVideoEnabled || isAudioEnabled) { if ((!isVideoEnabled || isVideoReadyToReplay) && (!isAudioEnabled || isAllAudioChEnabled || isAudioReadyToReplay)) { decoderState = DecoderState.EOF; isVideoReadyToReplay = isAudioReadyToReplay = false; if (onVideoEnd != null) { onVideoEnd.Invoke(); } } } } - Result: Video pauses I currently work around this problem by generating a reversed video beforehand and switch to the reversed videos whenever the user wants to rewind. However, given that our project use more then one 360 video and allows custom videos, the time needed to generate the reversed videos and the lag in switching the videos are unacceptably long. Since the function is intuitively easy I think there must exist a much simpler solution. Have been stuck in this problem for a long time already so any pointers in solving the problem would be a big help!
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to do a hitTest on subnodes of node in sceneView? I have an ARSceneView and I use the detectionImages to find a plane so I can add nodes based on the Image that I detected. My Issue is that all the time the view is adding an invisible plane node above the nodes that I have added as subnodes in the "node of my Image" and when I add a TapGestureRecognizer to the sceneView I can't detect the node because it detects the plane. Sometimes it works, but when it adds the plane it doesn't work. When the plane is on scene my nodes are these I want my nodes to be like this.. ▿ 3 elements - 0 : | no child> - 1 : SCNNode: 0x1c41f4100 | light= - 2 : SCNNode: 0x1c43e1000 'DetectableChildrenParent' pos(-0.009494 -0.081575 -0.360098) rot(0.997592 -0.060896 0.033189 1.335512) | 2 children> and my 2 children to be the detectable nodes only.. but it always is like this ▿ 4 elements - 0 : SCNNode: 0x1c01fcc00 pos(-0.093724 0.119850 -0.060124) rot(-0.981372 0.172364 -0.084866 0.457864) scale(1.000000 1.000000 1.000000) | camera= | no child> - 1 : SCNNode: 0x1c01fc200 | light= | no child> - 2 : SCNNode: 0x1c01fd100 'DetectableChildrenParent' pos(-0.149971 0.050361 -0.365586) rot(0.996908 0.061535 -0.048872 1.379775) scale(1.000000 1.000000 1.000000) | 2 children> - 3 : SCNNode: 0x1c01f9f00 | geometry= | no child> and it always detect the last node. My code to detect is this @objc func addNodeToScene(withGestureRecognizer recognizer: UIGestureRecognizer) { let tapLocation = recognizer.location(in: self.sceneView) let hitTestResults = sceneView.hitTest(tapLocation) if let node = hitTestResults.first?.node { print(node.name ?? "") } } How can I run the hit test to check only the children of the DetectableChildrenParent?
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
ARKIT: Move Object with PanGesture (the right way) I've been reading plenty of StackOverflow answers on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints some use the gesture translation or just keeping track of the lastPosition of the object. But honestly.. none work the way everyone is expecting it to work. Hit testing against .featurePoints just makes the object jump all around, because you dont always hit a featurepoint when dragging your finger. I dont understand why everyone keeps suggesting this. Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit But the object doesnt really follow your finger, and the moment you take a few steps or change the angle of the object or the camera.. and try to move the object.. the x,z are all inverted.. and makes total sense to do that. I really want to move objects as good as the Apple Demo, but I look at the code from Apple... and is insanely weird and overcomplicated I cant even understand a bit. Their technique to move the object so beautifly is not even close to what everyone propose online. https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality There's gotta be a simpler way to do it.
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 1, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
ARKit / SpriteKit - set pixelBufferAttributes to SKVideoNode or make transparent pixels in video (chroma-key effect) another way My goal is to present 2D animated characters in the real environment using ARKit. The animated characters are part of a video at presented in the following snapshot from the video: Displaying the video itself was achieved with no problem at all using the code: func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? { guard let urlString = Bundle.main.path(forResource: "resourceName", ofType: "mp4") else { return nil } let url = URL(fileURLWithPath: urlString) let asset = AVAsset(url: url) let item = AVPlayerItem(asset: asset) let player = AVPlayer(playerItem: item) let videoNode = SKVideoNode(avPlayer: player) videoNode.size = CGSize(width: 200.0, height: 150.0) videoNode.anchorPoint = CGPoint(x: 0.5, y: 0.0) return videoNode } The result of this code is presented in the screen shot from the app below as expected: But as you can see, the background of the characters isn't very nice, so I need to make it vanish, in order to create the illusion of the characters actually standing on the horizontal plane surface. I'm trying to achieve this by making a chroma-key effect to the video. For those who are not familiar with chroma-key, this is name of the "green screen effect" seen sometimes on TV to make a color transparent. My approach to the chroma-key effect is to create a custom filter based on "CIColorCube" CIFilter, and then apply the filter to the video using AVVideoComposition. First, is the code for creating the filter: func RGBtoHSV(r : Float, g : Float, b : Float) -> (h : Float, s : Float, v : Float) { var h : CGFloat = 0 var s : CGFloat = 0 var v : CGFloat = 0 let col = UIColor(red: CGFloat(r), green: CGFloat(g), blue: CGFloat(b), alpha: 1.0) col.getHue(&h, saturation: &s, brightness: &v, alpha: nil) return (Float(h), Float(s), Float(v)) } func colorCubeFilterForChromaKey(hueAngle: Float) -> CIFilter { let hueRange: Float = 20 // degrees size pie shape that we want to replace let minHueAngle: Float = (hueAngle - hueRange/2.0) / 360 let maxHueAngle: Float = (hueAngle + hueRange/2.0) / 360 let size = 64 var cubeData = [Float](repeating: 0, count: size * size * size * 4) var rgb: [Float] = [0, 0, 0] var hsv: (h : Float, s : Float, v : Float) var offset = 0 for z in 0 ..< size { rgb[2] = Float(z) / Float(size) // blue value for y in 0 ..< size { rgb[1] = Float(y) / Float(size) // green value for x in 0 ..< size { rgb[0] = Float(x) / Float(size) // red value hsv = RGBtoHSV(r: rgb[0], g: rgb[1], b: rgb[2]) // TODO: Check if hsv.s > 0.5 is really nesseccary let alpha: Float = (hsv.h > minHueAngle && hsv.h < maxHueAngle && hsv.s > 0.5) ? 0 : 1.0 cubeData[offset] = rgb[0] * alpha cubeData[offset + 1] = rgb[1] * alpha cubeData[offset + 2] = rgb[2] * alpha cubeData[offset + 3] = alpha offset += 4 } } } let b = cubeData.withUnsafeBufferPointer { Data(buffer: $0) } let data = b as NSData let colorCube = CIFilter(name: "CIColorCube", withInputParameters: [ "inputCubeDimension": size, "inputCubeData": data ]) return colorCube! } And then the code for applying the filter to the video by modifying the function func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? that I wrote earlier: func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? 
{ guard let urlString = Bundle.main.path(forResource: "resourceName", ofType: "mp4") else { return nil } let url = URL(fileURLWithPath: urlString) let asset = AVAsset(url: url) let filter = colorCubeFilterForChromaKey(hueAngle: 38) let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in let source = request.sourceImage filter.setValue(source, forKey: kCIInputImageKey) let output = filter.outputImage request.finish(with: output!, context: nil) }) let item = AVPlayerItem(asset: asset) item.videoComposition = composition let player = AVPlayer(playerItem: item) let videoNode = SKVideoNode(avPlayer: player) videoNode.size = CGSize(width: 200.0, height: 150.0) videoNode.anchorPoint = CGPoint(x: 0.5, y: 0.0) return videoNode } The code is supposed to replace all pixels of each frame of the video to alpha = 0.0 if the pixel color match the hue range of the background. But instead of getting transparent pixels I'm getting those pixels black as can be seen in the image below: Now, even though this is not the wanted effect, it does not surprise me, as I knew that this is the way iOS displays videos with alpha channel. But here is the real problem - When displaying a normal video in an AVPlayer, there is an option to add an AVPlayerLayer to the view, and to set pixelBufferAttributes to it, to let the player layer know we use a transparent pixel buffer, like so: let playerLayer = AVPlayerLayer(player: player) playerLayer.bounds = view.bounds playerLayer.position = view.center playerLayer.pixelBufferAttributes = [(kCVPixelBufferPixelFormatTypeKey as String): kCVPixelFormatType_32BGRA] view.layer.addSublayer(playerLayer) This code gives us a video with transparent background (GOOD!) but a fixed size and position (NOT GOOD...), as you can see in this screenshot: I want to achieve the same effect, but on SKVideoNode, and not on AVPlayerLayer. However, I can't find any way to set pixelBufferAttributes to SKVideoNode, and setting a player layer does not achieve the desired effect of ARKit as it is fixed in position. Is there any solution to my problem, or maybe is there another technique to achieve the same desired effect?
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Augmented Reality in Android Studio I have just started developing an Android application based on augmented reality. Its main concept is to identify an object (here a laptop) and place an image on top of it. I would like to know an easy way to detect the laptop surface, and also which SDK can be used. I've had some sites referred to me, such as: https://www.raywenderlich.com/158580/augmented-reality-android-googles-face-api https://developers.google.com/ar/develop/java/enable-arcore
Labels: Results interpretation = 0, Frameworks usage = 1, Algorithms design = 1, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
Facebook AR Studio - Importing Animations I'm trying to import a 3d object in .fbx format into the Facebook AR Studio. The object was created and animated in Blender. The AR Studio does not support 'Baked' animations, but I'm wondering if it's possible to somehow import the animation into the studio, as opposed to scripting the entire animation, due to the complexity of the object (number of bones and motions). I'm new to AR and 3d modeling so any input or direction is appreciated.
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 1, Performance issue = 0
A-Frame Animation not starting on event emission I am trying to animate a plane in A-frame with either the a-animation component or the aframe-animation-component after the user clicks on a text box. I want to rotate the plane to a vertical position so that the user falls. The animation works without an event listener, but it fails to fire when I add the startEvents or begin attributes to the a-entity containing the plane. The event called fallclick is emitted from a text a-entity called fallSelector when it is clicked. Here is a section of my HTML code: <a-entity static-body id="plane" rotation="-90 0 0" geometry="primitive: plane; width: 2; height: 2" <!-- animation="property: rotation; to: 0 0 0; delay: 500; startEvents: fallclick;" --> > <a-animation attribute="rotation" to="0 0 0" delay="500" dur="1000" begin="fallclick" ></a-animation> </a-entity> Here is the the JavaScript event code: fallSelector.addEventListener('click', function () { fallSelector.emit('fallclick'); console.log(fallSelector + ': Emit fallclick'); });
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Looking for a way in aframe to rotate and scale a model via touch when rendered over a marker I'm loading this Collada (DAE) model with aframe 0.8.2 and using aframe-ar to display it over a Hiro marker: <meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0"> <script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script> <script src="https://cdn.rawgit.com/jeromeetienne/AR.js/1.5.5/aframe/build/aframe-ar.js"></script> <body style='margin : 0px; overflow: hidden;'> <a-scene embedded arjs='trackingMethod: best; debugUIEnabled: false;'> <!--a-marker type='pattern' url='https://rawgit.com/germanviscuso/AR.js/master/data/data/patt.gafas'--> <a-marker preset='hiro'> <a-collada-model src="url(https://aframe.io/aframe/examples/showcase/shopping/man/man.dae)"></a-collada-model> </a-marker> <a-camera-static/> </a-scene> </body> Codepen: https://codepen.io/germanviscuso/pen/KRMgwz I would like to know how to add controls to rotate it on its Y axis (with respect to the marker) by using swipe gestures on a mobile phone browser and to scale the model dynamically when doing pinch gestures. Ideally it would be nice if it also works with the mouse/touchpad when I'm testing in my laptop but touch on the phone is enough. Can universal-controls handle this? Any example that I can see? This has to work while the model is being rendered dynamically with respect to the marker (AR tracking). Any help is appreciated, thanks!
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
How do I write to the SD card on Nougat/Android 7.0 (VR friendly)? I'm looking for a way to make my VR Android app (for Samsung Galaxy S7 and S9) able to write files to the SD card (e.g. by downloading a .zip file and unzipping it there). The app is mostly going to be used by people, who don't know a lot about Android/smartphones and don't want to have to deal with anything complicated (not necessarily seniors but close enough), that's why I want to make it as easy as possible for them, which also includes making choices myself (and setting it up for them) instead of showing complicated dialogs. Special requirements: The files must not be deleted when the app is uninstalled - that's why I can's use getExternalFilesDirs() (Storage Volume). The folder everything happens in has to be easily accessable, so the zip files can be transfered to the SD card on your PC too (instead of downloading them through the app in case they are too big) without having to go down a huge amount of levels and remembering a long folder path. Using Storage Access Framework isn't a good alternative either because not only is picking folders nothing that's especially VR friendly but it also requires knowledge about folders most of the users simply won't have and/or won't want to deal with every time they open my app. But: If there was a way to only show this once (on the very first start after installing the app) and maybe even set the root folder to the folder I chose, so the users only have to hit "accept", that would be worth a try (unless there's an easier way). Yes, I did set the android.permission.WRITE_EXTERNAL_STORAGE permission and also enabled the "force allow apps on external storage" developer setting but trying to write to the SD card still throws an "Access Denied" exception. Are there any others ways to write to the SD card that are VR friendly?
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 1, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
Vuforia -- how to only play one video when an image is recognized I have an AR application that has 4 image targets, and each image target has a corresponding video that is played on top of it when it is recognized. The problem is that even though the correct animation is displayed whenever the image is recognized, all of the other animations are played as well, even though they are not displayed. Thus, whenever I move from one image target to another, the video is displayed, but it is already halfway through its runtime. I want each video to start whenever its parent image target is recognized, but only that specific video. I need control of the trackable behavior for each image target/video. how can i do that? I know that the scripts I need to look at in Unity are DefaultTrackableEventHandler.cs or mediaPlayerCtrl.cs (which is a scrip that I am using for my video manager(video player) that I got from the Easy Movie Texture asset. What is the code that I need to write in those scripts to make it happen. Here is the DefaultTrackableEventHandler.cs code: using UnityEngine; using Vuforia; /// <summary> /// A custom handler that implements the ITrackableEventHandler interface. /// </summary> public class DefaultTrackableEventHandler : MonoBehaviour, ITrackableEventHandler { #region PRIVATE_MEMBER_VARIABLES protected TrackableBehaviour mTrackableBehaviour; #endregion // PRIVATE_MEMBER_VARIABLES #region UNTIY_MONOBEHAVIOUR_METHODS protected virtual void Start() { mTrackableBehaviour = GetComponent<TrackableBehaviour>(); if (mTrackableBehaviour) mTrackableBehaviour.RegisterTrackableEventHandler(this); } #endregion // UNTIY_MONOBEHAVIOUR_METHODS #region PUBLIC_METHODS /// <summary> /// Implementation of the ITrackableEventHandler function called when the /// tracking state changes. 
/// </summary> public void OnTrackableStateChanged( TrackableBehaviour.Status previousStatus, TrackableBehaviour.Status newStatus) { if (newStatus == TrackableBehaviour.Status.DETECTED || newStatus == TrackableBehaviour.Status.TRACKED || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED) { Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " found"); OnTrackingFound(); } else if (previousStatus == TrackableBehaviour.Status.TRACKED && newStatus == TrackableBehaviour.Status.NOT_FOUND) { Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " lost"); OnTrackingLost(); } else { // For combo of previousStatus=UNKNOWN + newStatus=UNKNOWN|NOT_FOUND // Vuforia is starting, but tracking has not been lost or found yet // Call OnTrackingLost() to hide the augmentations OnTrackingLost(); } } #endregion // PUBLIC_METHODS #region PRIVATE_METHODS protected virtual void OnTrackingFound() { var rendererComponents = GetComponentsInChildren<Renderer>(true); var colliderComponents = GetComponentsInChildren<Collider>(true); var canvasComponents = GetComponentsInChildren<Canvas>(true); // Enable rendering: foreach (var component in rendererComponents) component.enabled = true; // Enable colliders: foreach (var component in colliderComponents) component.enabled = true; // Enable canvas': foreach (var component in canvasComponents) component.enabled = true; } protected virtual void OnTrackingLost() { var rendererComponents = GetComponentsInChildren<Renderer>(true); var colliderComponents = GetComponentsInChildren<Collider>(true); var canvasComponents = GetComponentsInChildren<Canvas>(true); // Disable rendering: foreach (var component in rendererComponents) component.enabled = false; // Disable colliders: foreach (var component in colliderComponents) component.enabled = false; // Disable canvas': foreach (var component in canvasComponents) component.enabled = false; } #endregion // PRIVATE_METHODS }
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
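For the one-video-per-target question above, a minimal sketch of the usual fix: give each image target its own handler that starts and stops only the video player parented under that target, so playback state never leaks between targets. The sketch assumes Unity's built-in UnityEngine.Video.VideoPlayer; if you keep Easy Movie Texture, replace the Play/Stop calls with the equivalent mediaPlayerCtrl methods.

using UnityEngine;
using UnityEngine.Video;
using Vuforia;

// Attach to each ImageTarget (alongside or instead of DefaultTrackableEventHandler)
// and assign the VideoPlayer that belongs to that target only.
public class PerTargetVideoHandler : MonoBehaviour, ITrackableEventHandler
{
    [SerializeField] VideoPlayer videoPlayer;
    TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED
                  || newStatus == TrackableBehaviour.Status.TRACKED
                  || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (found)
        {
            videoPlayer.time = 0;   // restart from the beginning each time this target is recognized
            videoPlayer.Play();
        }
        else
        {
            videoPlayer.Stop();     // stop, rather than pause, so nothing keeps running in the background
        }
    }
}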
ARKit - eulerAngles of Transform Matrix 4x4 In my project I have a transform and I want to get the orientation of that transform as euler angles. How do I get the eulerAngles from a 4x4 transform matrix in Swift? I need at least the pitch; I may also need the yaw and the roll.
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
ARCore Tablet Support I would like to know whether ARCore supports any tablets natively as of the latest update. I have found the list of supported phones here, however no mentions of tablets seem to be made. Other similar questions on StackOverflow seem to have had no conclusive answer.
Labels: Results interpretation = 0, Frameworks usage = 1, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Android app works fine on phone 7.0 and crashes on tablet running 7.1.2 on load I have an app built in Unity and Vuforia and it is running very nicely on my ZTE phone that is running Android 7.1.1. However when I try to run it on my brand new Android tablet (just arrived from Amazon today) running 7.1.2 the app crashes almost immediately upon load. I read through a few other threads but I may not be a good enough user yet to know what I am doing wrong... I set the device to debug mode and ran Android Studio. Below is my logcat error result 05-09 16:34:54.189 3531-3613/? E/libEGL: call to OpenGL ES API with no current context (logged once per thread) 05-09 16:35:03.331 3531-3613/? E/AR: CameraDevice::getCameraCalibration(): Failed to get camera calibration because the camera is not initialized. 05-09 16:35:03.827 3531-3613/? E/Unity: Could not initialize the tracker. (Filename: /Users/builduser/buildslave/unity/build/artifacts/generated/common/runtime/DebugBindings.gen.cpp Line: 51) 05-09 16:35:05.032 8029-5080/? E/CdxFlvParser.c: <__CdxFlvParserProbe:4031>: [40;31mFlvProbe failed.[0m 05-09 16:35:05.032 8029-5080/? E/CdxAviParser: <__CdxAviParserProbe:1247>: [40;31mAviProbe failed.[0m 05-09 16:35:05.032 8029-5080/? E/CdxPmpParser: <PmpParserProbe:1127>: [40;31mIt is not pmp-2.0, and is not supported.[0m 05-09 16:35:05.051 8029-5085/? E/CdxFlvParser.c: <__CdxFlvParserProbe:4031>: [40;31mFlvProbe failed.[0m 05-09 16:35:05.051 8029-5085/? E/CdxAviParser: <__CdxAviParserProbe:1247>: [40;31mAviProbe failed.[0m 05-09 16:35:05.052 8029-5085/? E/CdxPmpParser: <PmpParserProbe:1127>: [40;31mIt is not pmp-2.0, and is not supported.[0m 05-09 16:35:05.070 8029-5090/? E/CdxFlvParser.c: <__CdxFlvParserProbe:4031>: [40;31mFlvProbe failed.[0m 05-09 16:35:05.070 8029-5090/? E/CdxAviParser: <__CdxAviParserProbe:1247>: [40;31mAviProbe failed.[0m 05-09 16:35:05.070 8029-5090/? E/CdxPmpParser: <PmpParserProbe:1127>: [40;31mIt is not pmp-2.0, and is not supported.[0m 05-09 16:35:06.328 3531-3613/? E/AR: VideoBackgroundConfig with screen size of zero received, skipping config step 05-09 16:35:15.210 3531-4863/? E/AndroidRuntime: FATAL EXCEPTION: Thread-20 Process: com.FractalEncrypt.EVM, PID: 3531 java.lang.Error: FATAL EXCEPTION [Thread-20] Unity version : 2017.3.1f1 Device model : F5CS LTD Fusion5_108 Device fingerprint: Fusion5/Fusion5_108/Fusion5_108:7.1.2/N2G48B/20171222:user/release-keys Caused by: java.lang.ArrayIndexOutOfBoundsException: length=3; index=3 at android.net.ConnectivityManager.getNetworkInfo(ConnectivityManager.java:902) 05-09 16:35:19.477 8838-8915/? E/AW PowerHAL: Error opening /sys/class/devfreq/sunxi-ddrfreq/dsm/scene: No such file or directory 05-09 16:35:24.010 8029-5081/? E/awplayer: <PlayerStop:852>: [40;31minvalid stop operation, player already in stopped status.[0m 05-09 16:35:24.014 8029-5081/? E/awplayer: <PlayerStop:852>: [40;31minvalid stop operation, player already in stopped status.[0m 05-09 16:35:24.018 8029-5086/? E/awplayer: <PlayerStop:852>: [40;31minvalid stop operation, player already in stopped status.[0m 05-09 16:35:24.020 8029-5086/? E/awplayer: <PlayerStop:852>: [40;31minvalid stop operation, player already in stopped status.[0m 05-09 16:35:24.024 8029-5091/? E/awplayer: <PlayerStop:852>: [40;31minvalid stop operation, player already in stopped status.[0m 05-09 16:35:24.026 8029-5091/? E/awplayer: <PlayerStop:852>: [40;31minvalid stop operation, player already in stopped status.[0m 05-09 16:35:31.533 9607-9607/? 
E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:35:31.534 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:35:32.699 9934-23574/? E/NetworkScheduler: Unrecognised action provided: android.intent.action.PACKAGE_REMOVED 05-09 16:35:33.165 11583-11583/? E/Finsky: [1] com.google.android.finsky.wear.t.a(3): onConnectionFailed: ConnectionResult{statusCode=API_UNAVAILABLE, resolution=null, message=null} 05-09 16:35:33.253 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:35:33.328 9934-23574/? E/NetworkScheduler: Unrecognised action provided: android.intent.action.PACKAGE_REPLACED 05-09 16:35:34.355 6873-6932/? E/YouTube: Failed delayed event dispatch, no dispatchers. 05-09 16:35:34.783 6873-6928/? E/art: The String#value field is not present on Android versions >= 6.0 05-09 16:35:41.217 7357-7357/? E/art: The String#value field is not present on Android versions >= 6.0 05-09 16:35:48.132 9934-10879/? E/aghj: Phenotype API error. Event a < a: "LOCAL.com.google.android.agsa.QSB" b: 0 e: "" f: "" g: 0 h: 0 i: "" j: 26 > e: # bjyw@f5129a56 , EventCode: 7 -- metadata{ service_id: 51 } aggc: 29505: No config packages for log source, or config package not registered at aghv.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):4) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):108) at oqt.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at azva.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at ovb.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):27) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at pbc.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626)) at java.lang.Thread.run(Thread.java:761) 05-09 16:35:48.134 9934-10879/? E/AsyncOperation: serviceID=51, operation=GetExperimentTokensOperationCall OperationException[Status{statusCode=No config packages for log source, or config package not registered, resolution=null}] at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):53) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):108) at oqt.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at azva.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at ovb.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):27) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at pbc.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626)) at java.lang.Thread.run(Thread.java:761) 05-09 16:35:57.613 9934-10879/? E/aghj: Phenotype API error. 
Event a < a: "com.google.android.gms.devicedoctor" b: 208 e: "" f: " com.google.android.gms.devicedoctor 208 7 1493864118 com.google.android.gms.devicedoctor 1493864118" g: 0 h: 0 i: "com.google.android.gms.devicedoctor" j: 1862 > e: # bjyw@f5129982 , EventCode: 5 -- metadata{ service_id: 51 } aggc: 29501: Stale snapshot (change count changed) at aghm.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):13) at aghl.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):108) at oqt.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at azva.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at ovb.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):27) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at pbc.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626)) at java.lang.Thread.run(Thread.java:761) 05-09 16:35:57.620 9934-10879/? E/AsyncOperation: serviceID=51, operation=CommitToConfigurationOperationCall OperationException[Status{statusCode=Stale snapshot (change count changed), resolution=null}] at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):53) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):108) at oqt.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at azva.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at ovb.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):27) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at pbc.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626)) at java.lang.Thread.run(Thread.java:761) 05-09 16:36:03.370 10754-11039/? E/AsyncOpDispatcher: Unable to get current module info in ModuleManager created with non-module Context 05-09 16:36:05.768 9934-10880/? E/aghj: Phenotype API error. Event a < a: "LOCAL.com.google.android.agsa.QSB" b: 0 e: "" f: "" g: 0 h: 0 i: "" j: 33 > e: # bjyw@f5129a56 , EventCode: 7 -- metadata{ service_id: 51 } aggc: 29505: No config packages for log source, or config package not registered at aghv.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):4) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):108) at oqt.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at azva.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at ovb.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):27) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at pbc.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626)) at java.lang.Thread.run(Thread.java:761) 05-09 16:36:05.781 9934-10880/? 
E/AsyncOperation: serviceID=51, operation=GetExperimentTokensOperationCall OperationException[Status{statusCode=No config packages for log source, or config package not registered, resolution=null}] at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):53) at aghj.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):108) at oqt.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):40) at azva.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at ovb.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):27) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at pbc.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626)) at java.lang.Thread.run(Thread.java:761) 05-09 16:36:11.601 8838-8882/? E/BatteryStatsService: power: Platform does not even have one low power mode 05-09 16:36:11.612 8838-8882/? E/BatteryStatsService: no controller energy info supplied 05-09 16:36:11.620 8838-8882/? E/BatteryStatsService: no controller energy info supplied 05-09 16:36:11.623 8838-8875/? E/KernelUidCpuTimeReader: Failed to read uid_cputime: /proc/uid_cputime/show_uid_stat (No such file or directory) 05-09 16:36:11.674 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] queryModemActivityInfo: Empty response 05-09 16:36:11.682 8838-8882/? E/KernelUidCpuTimeReader: Failed to read uid_cputime: /proc/uid_cputime/show_uid_stat (No such file or directory) 05-09 16:36:11.687 8838-8882/? E/BatteryStatsService: modem info is invalid: ModemActivityInfo{ mTimestamp=0 mSleepTimeMs=0 mIdleTimeMs=0 mTxTimeMs[]=[0, 0, 0, 0, 0] mRxTimeMs=0 mEnergyUsed=0} 05-09 16:36:16.397 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:36:19.658 9934-9431/? E/SeTransactionSyncTask: Error retrieving account java.lang.IllegalStateException: No current tap-and-pay account at alty.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):3) at alty.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at amnv.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):28) at com.google.android.gms.tapandpay.gcmtask.TapAndPayGcmTaskChimeraService.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at com.google.android.gms.tapandpay.gcmtask.TapAndPayGcmTaskChimeraService.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at com.google.android.gms.tapandpay.phenotype.PhenotypeCommitIntentOperation.onHandleIntent(:com.google.android.gms@12673019@12.6.73 (040300-194189626):47) at com.google.android.chimera.IntentOperation.onHandleIntent(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at dbn.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):8) at nam.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):9) at dbs.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):10) at dbp.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):9) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at java.lang.Thread.run(Thread.java:761) 05-09 16:36:36.482 10754-11039/? E/AsyncOpDispatcher: Unable to get current module info in ModuleManager created with non-module Context 05-09 16:36:42.161 9934-10144/? 
E/SeTransactionSyncTask: Error retrieving account java.lang.IllegalStateException: No current tap-and-pay account at alty.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):3) at alty.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at amnv.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):28) at com.google.android.gms.tapandpay.gcmtask.TapAndPayGcmTaskChimeraService.b(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at com.google.android.gms.tapandpay.gcmtask.TapAndPayGcmTaskChimeraService.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):1) at com.google.android.gms.tapandpay.phenotype.PhenotypeCommitIntentOperation.onHandleIntent(:com.google.android.gms@12673019@12.6.73 (040300-194189626):47) at com.google.android.chimera.IntentOperation.onHandleIntent(:com.google.android.gms@12673019@12.6.73 (040300-194189626):2) at dbn.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):8) at nam.a(:com.google.android.gms@12673019@12.6.73 (040300-194189626):9) at dbs.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):10) at dbp.run(:com.google.android.gms@12673019@12.6.73 (040300-194189626):9) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at java.lang.Thread.run(Thread.java:761) 05-09 16:36:50.320 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:36:50.320 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:36:51.097 9934-10845/? E/NetworkScheduler: Unrecognised action provided: android.intent.action.PACKAGE_REMOVED 05-09 16:36:51.479 11583-11583/? E/Finsky: [1] com.google.android.finsky.wear.t.a(3): onConnectionFailed: ConnectionResult{statusCode=API_UNAVAILABLE, resolution=null, message=null} 05-09 16:36:51.606 9934-10845/? E/NetworkScheduler: Unrecognised action provided: android.intent.action.PACKAGE_REPLACED 05-09 16:36:52.322 10170-10170/? E/art: The String#value field is not present on Android versions >= 6.0 05-09 16:37:02.519 9607-9607/? E/PhoneInterfaceManager: [PhoneIntfMgr] getCarrierPackageNamesForIntent: No UICC 05-09 16:37:06.827 12278-12414/? E/bt_h5: h5_timeout_handler 05-09 16:37:06.828 12278-12441/? E/bt_h5: retransmitting (1) pkts, retransfer count(0) 0x0B 0x20 0x07 0x01 0x12 0x00 0x12 0x00 0x01 0x00 05-09 16:38:07.072 8838-9262/? E/ConnectivityService: RemoteException caught trying to send a callback msg for NetworkRequest [ LISTEN id=11, [ Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED&FOREGROUND] ] 05-09 16:38:40.569 8031-8955/? E/NetlinkEvent: NetlinkEvent::FindParam(): Parameter 'UID' not found 05-09 16:39:12.397 8031-8955/? E/NetlinkEvent: NetlinkEvent::FindParam(): Parameter 'UID' not found 05-09 16:42:11.851 12278-12414/? E/bt_h5: h5_timeout_handler 05-09 16:42:11.851 12278-12441/? E/bt_h5: retransmitting (1) pkts, retransfer count(0) 0x18 0x20 0x00 05-09 16:43:09.689 8031-8955/? E/NetlinkEvent: NetlinkEvent::FindParam(): Parameter 'UID' not found 05-09 16:43:14.494 8031-8955/? E/NetlinkEvent: NetlinkEvent::FindParam(): Parameter 'UID' not found
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Position object relative to camera in SceneView I've tried modifying the SceneForm example to position an object relative to the camera as soon as the scene is touched, but no objects appear. What am I missing? Checked the docs and YouTube videos to no avail. Any ideas would be appreciated! Code below (in Kotlin): arFragment!!.arSceneView.setOnTouchListener { view : View, event: MotionEvent -> println("Touch!") val andy = Node() andy.setParent(arFragment?.arSceneView?.scene?.camera) andy.localPosition = Vector3(1.0f, 1.0f, 1.0f) andy.renderable = andyRenderable true }
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to save a captured photo from Unity to the Android image gallery? I have a working script that takes a photo with the AR camera and saves it in the app's Android data files, like so: screenShotName = "Dorot" + System.DateTime.Now.ToString("yyyy-MM-dd-HH-mm-ss") + ".png"; File.WriteAllBytes(Application.persistentDataPath + "/" + screenShotName, screenshot.EncodeToPNG()); GameObject.Find("MainCanvas").GetComponent<Canvas>().enabled = true; How can I replace persistentDataPath with the right path for the Android image gallery? Thanks.
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
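For the gallery question above, a sketch of one common workaround: ask Android via JNI for the public DCIM directory, write the PNG there, and then trigger a media scan so the Gallery app indexes it. android.os.Environment and MediaScannerConnection.scanFile are standard Android APIs, but treat this as an untested outline; on recent Android versions scoped storage can require a different approach (e.g. MediaStore), and the WRITE_EXTERNAL_STORAGE permission is assumed.

using System.IO;
using UnityEngine;

public static class GallerySaver
{
    // Saves PNG bytes into the device's DCIM folder and asks Android to index the new file.
    public static void SaveToGallery(byte[] pngBytes, string fileName)
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        using (var environment = new AndroidJavaClass("android.os.Environment"))
        {
            string dcimType = environment.GetStatic<string>("DIRECTORY_DCIM");
            using (var dcimDir = environment.CallStatic<AndroidJavaObject>(
                       "getExternalStoragePublicDirectory", dcimType))
            {
                string path = Path.Combine(dcimDir.Call<string>("getAbsolutePath"), fileName);
                File.WriteAllBytes(path, pngBytes);

                // Tell the media scanner about the file so it shows up in the gallery immediately.
                using (var player = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
                using (var activity = player.GetStatic<AndroidJavaObject>("currentActivity"))
                using (var scanner = new AndroidJavaClass("android.media.MediaScannerConnection"))
                {
                    scanner.CallStatic("scanFile", activity,
                        new string[] { path }, new string[] { "image/png" }, null);
                }
            }
        }
#else
        File.WriteAllBytes(Path.Combine(Application.persistentDataPath, fileName), pngBytes);
#endif
    }
}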
How to develop Augmented Reality application for Android I would like to develop an augmented reality application on a HTC Nexus One mobile phone for android using Flash Professional CS5 and Adobe AIR 2.5. I found a couple of online sources showing how to develop AR application using webcam and Flash and i have found it very useful to follow and understand the basics of AR. For example: Augmented Reality using webcam and Flash http://www.adobe.com/devnet/flash/articles/augmented_reality.html Introduction to Augmented Reality http://www.gotoandlearn.com/play.php?id=105 I have also watched other videos regarding AIR for Android Applications from gotoandlearn website and i did all of the successfully such as: Air for Android - Part 1 Air for Android - Part 2 Publishing AIR for Android Applications AIR for Android GPU Acceleration Introduction to Augmented Reality However, i didn't manage to get it work on my android phone (doing nothing and run very slow). I would like to ask a few questions on the following: 1) To develop an Augmented Reality application on android, is this done using the same method as the one's above? 2) Do I need to use any other software other than those that is showing on the video and adobe air 2.5? 3) Do you know of any other sources/reading material that are relevant and may be of help? Thank you
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 1
ARKit vs. ARCore vs. Vuforia vs. D'Fusion Mobile vs. Layar SDK [closed] Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? Update the question so it can be answered with facts and citations by editing this post. Closed 7 years ago. Improve this question I would be interested to know, where are the advantages and disadvantages of each vision-based mobile Augmented Reality Frameworks? For what should be decide in which case? Would you choose Vuforia in any case, because it is free and without branding? What are important features are missing in one of the frameworks? Are there limits on the free version of Metaio SDK (except branding and Metaio splash-screen)? I think these are the most important Frameworks to support iOS and Android. I know that metaio support movie textures and MD2 (Animation) Export and Vuforia not (at least not in the basic state). Edit: Here is a 3 hour session reviewed the best Mobile AR SDKs in the market and how to get started with them: Tutorial: Top 10 Augmented Reality SDKs for Developers You should also check out ARLab from Augmented Reality Lab S.L. This has different AR-SDKs for AR Browser, Image Matching, Image Tracking, 3D engine, Virtual buttons. But this is not free. Wikitude's ARchitect SDK has an Vuforia-Extension and Blackberry 10 Support. This could also be very interesting. The Layar SDK is now available for iOS and Android with 3D, animation, AR Video and QR-Code Scanner DARAM also appears a good SDK for Android, iOS, Windows 8 and Mac. ARPA has a Unity-Plugin and a Google Glass SDK. Here is a good comparison chart for Augmented Reality SDKs and frameworks Apple has acquired Metaio. Metaio's future uncertain. (May 28, 2015) Magic Leap Announces Its Augmented Reality Developer SDK for Unreal and Unity Vuforia now has paid licensing and ability to demo apps without a watermark on their no-cost Starter plan - it now appears only during the first app launch in a particular day. This is to support developers who want to do demos to clients without showing the watermark. (May 6, 2015), Qualcomm sell its Vuforia business to PTC (Oct 12, 2015) ARKit iOS 11 introduces ARKit, a new framework that allows you to easily create unparalleled augmented reality experiences for iPhone and iPad. By blending digital objects and information with the environment around you, ARKit takes apps beyond the screen, freeing them to interact with the real world in entirely new ways. (Juni 2017) 8th Wall XR is the world's first AR platform that works on all commonly available iOS and Android phones and integrates seamlessly with ARKit and ARCore. (August 2017) ARCore is Google’s answer to Apple’s ARKit. It is designed to bring augmented reality capabilities to all Android phones in a relatively simple way. It’s also replacing the Project Tango, which required specialized hardware to run.
Labels: Results interpretation = 0, Frameworks usage = 1, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to create a VR video player using the Google Cardboard SDK for Unity I just downloaded the Google Cardboard SDK for Unity. I am able to create a VR project, the setup is fine, and everything works. I am new to VR apps and have just stepped into them. I am planning to create my own VR-enabled video player for Android, just like the default Google Cardboard YouTube player. Can anyone suggest a link or guide me in developing this app?
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 1, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
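For the Cardboard video player question above, a bare-bones sketch of the usual Unity approach: play the clip with UnityEngine.Video.VideoPlayer into a RenderTexture and show that texture on the inside of a sphere that surrounds the camera. The inverted sphere mesh, the RenderTexture asset and the material that samples it are assumed to be set up in the scene; head tracking comes from the Cardboard camera itself.

using UnityEngine;
using UnityEngine.Video;

public class Simple360Player : MonoBehaviour
{
    public VideoClip clip;            // the 360 (equirectangular) video clip
    public RenderTexture target;      // also assigned as the main texture of the sphere's material

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = target;
        player.isLooping = true;
        player.Play();
    }
}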
Unity ZXing QR code scanner integration I need to integrate ZXing with Vuforia to make a QR code scanning app in Unity, but I have no idea how to do it. Can someone guide me through this? I have the ZXing .dll files and the Vuforia Unity package. Thanks in advance.
Labels: Results interpretation = 0, Frameworks usage = 1, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
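For the ZXing question above, a minimal decoding sketch using ZXing.Net's BarcodeReader against a WebCamTexture. In a Vuforia project you would normally feed the reader the frame Vuforia already exposes (after registering a pixel format) instead of opening the camera twice, but the Decode call is the same either way; this assumes the Unity build of ZXing.Net (zxing.unity.dll) is in the Plugins folder.

using UnityEngine;
using ZXing;

public class QrScanner : MonoBehaviour
{
    WebCamTexture cam;
    readonly IBarcodeReader reader = new BarcodeReader();

    void Start()
    {
        cam = new WebCamTexture();
        cam.Play();
    }

    void Update()
    {
        if (!cam.didUpdateThisFrame) return;

        // Decode the current camera frame; returns null when no code is visible.
        var result = reader.Decode(cam.GetPixels32(), cam.width, cam.height);
        if (result != null)
            Debug.Log("QR content: " + result.Text);
    }
}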
thread priority security exception make sure the apk is signed I am trying to build my project for Oculus Gear VR using Unity 5, but when I deploy my app I get the following error: thread priority security exception make sure the apk is signed. I have even created a keystore. Any suggestion as to why I might be facing this error on Gear VR?
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to enable WebVR on Google Chrome? I am trying to create a WebVR scene. For this task, I want to enable WebVR on Google Chrome. My OS is Windows 8. I open flags using chrome://flags/. WebVR is not there. How can I enable it?
Labels: Results interpretation = 0, Frameworks usage = 1, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
Using Resources Folder in Unity I am developing a HoloLens project that needs to reference .txt files. I have the files stored in Unity's 'Resources' folder and have them working perfectly fine (when run via Unity): string basePath = Application.dataPath; string metadataPath = String.Format(@"\Resources\...\metadata.txt", list); // If metadata exists, set title and introduction strings. if (File.Exists(basePath + metadataPath)) { using (StreamReader sr = new StreamReader(new FileStream(basePath + metadataPath, FileMode.Open))) { ... } } However, when building the program for HoloLens deployment, I am able to run the code but it doesn't work. None of the resources show up, and when examining the HoloLens Visual Studio solution (created by selecting build in Unity), I don't even see a resources or assets folder. I am wondering if I am doing something wrong or if there is a special way to deal with such resources. The same happens with image and sound files... foreach (string str in im) { spriteList.Add(Resources.Load<Sprite>(str)); } The string 'str' is valid; it works absolutely fine in Unity. However, again, it's not loading anything when running on the HoloLens.
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
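For the Resources question above, the usual fix, sketched below, is to stop reading loose files through Application.dataPath (which is not a plain folder in a UWP/HoloLens build) and load the packed assets with Resources.Load instead. The asset paths here are hypothetical; Resources.Load paths are relative to any Resources folder and omit the file extension.

using UnityEngine;

public class MetadataLoader : MonoBehaviour
{
    void Start()
    {
        // Loads Assets/Resources/Lists/metadata.txt (hypothetical location) on every platform,
        // including HoloLens, because Resources content is packed into the player build.
        TextAsset metadata = Resources.Load<TextAsset>("Lists/metadata");
        if (metadata != null)
        {
            string[] lines = metadata.text.Split('\n');
            Debug.Log("Loaded " + lines.Length + " metadata lines");
        }

        // Sprites and audio clips load the same way.
        Sprite sprite = Resources.Load<Sprite>("Images/someSprite");
        AudioClip clip = Resources.Load<AudioClip>("Sounds/someClip");
        Debug.Log("Sprite loaded: " + (sprite != null) + ", clip loaded: " + (clip != null));
    }
}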
How to determine whether a SteamVR_TrackedObject is a Vive Controller or a Vive Tracker What is the best way to determine whether a SteamVR_TrackedObject is a Vive Controller and a Vive Tracker? When 0 Controllers and 1 Tacker is paired: The Tracker is taken as Controller (right) of the CameraRig. When 1 Controller and 1 Tacker is paired: The Tracker is set to Device 2. When 2 Controllers and 1 Tacker is paired: Creating a third SteamVR_TrackedObject and placing it in the CameraRig's objects array. Also when a controller looses tracking so does the tracker. In each scenario the Tracker ends up being a different SteamVR_TrackedObject.index. What is the best way to check if a SteamVR_TrackedObject is a Tracker, or to find which index the Tracker is?
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
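For the tracker-versus-controller question above, one robust check, sketched here, is to ignore which slot the CameraRig assigned and ask OpenVR for the device class of the tracked index: ETrackedDeviceClass.GenericTracker identifies a Vive Tracker and ETrackedDeviceClass.Controller a wand. This assumes the legacy SteamVR_TrackedObject setup described in the question.

using UnityEngine;
using Valve.VR;

public class TrackedDeviceClassifier : MonoBehaviour
{
    public SteamVR_TrackedObject trackedObject;

    void Update()
    {
        if (OpenVR.System == null || trackedObject.index == SteamVR_TrackedObject.EIndex.None)
            return;

        ETrackedDeviceClass deviceClass =
            OpenVR.System.GetTrackedDeviceClass((uint)trackedObject.index);

        if (deviceClass == ETrackedDeviceClass.GenericTracker)
            Debug.Log("Index " + trackedObject.index + " is a Vive Tracker");
        else if (deviceClass == ETrackedDeviceClass.Controller)
            Debug.Log("Index " + trackedObject.index + " is a Vive controller");
    }
}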
Hls video streaming on iOS/Safari I am trying to stream hls on safari iOS with Aframe that has three.js under the hood. But the video shows a black screen with just the audio playing. The video src is of type .m3u8. I tried to read through a lot of related posts but none seem to have a proper solution. Is it some kind of a wishful thinking getting HLS & WebGL to play on iOS? If not, can some one please help me with a solution. A couple of discussions on the issues that are available on github: HLS m3u8 video streaming HLS on Safari
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to detect when a marker is found in AR.js I'm trying to detect when a marker is found/lost in AR.js while using A-Frame. From what I see in the source code, when the marker is found, a 'getMarker' event should be fired; moreover, artoolkit seems to dispatch a markerFound event. I tried to listen to those events on the <a-scene> or on the <a-marker>, but it seems I'm either mistaken or I need to get deeper into the arController or arToolkit objects. When I log the scene or the marker, I only get references to the attributes, which don't seem to have the above objects attached (like marker.arController, or marker.getAttribute('artoolkitmarker').arController). Has anybody tried this, and do you have any tips on how to do it?
Labels: Results interpretation = 1, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 0, Launching problem = 0, Performance issue = 0
How to create a custom marker in AR.js? I was wondering how things work in AR.js, but I am stuck on creating custom markers and custom shapes. Is there any way to customize these things? This is what I have to get started: <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script> <!-- include ar.js for A-Frame --> <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script> <body style='margin : 0px; overflow: hidden;'> <a-scene embedded arjs> <!-- create your content here. just a box for now --> <a-box position='0 0.5 0' material='opacity: 0.5;'></a-box> <!-- define a camera which will move according to the marker position --> <a-marker-camera preset='hiro'></a-marker-camera> </a-scene> </body> This is a simple example for getting started.
Labels: Results interpretation = 0, Frameworks usage = 0, Algorithms design = 0, Algorithms implementation = 1, Launching problem = 0, Performance issue = 0
Are there any limitations in Vuforia compared to ARCore and ARKit? I am a beginner in the field of augmented reality, working on applications that create plans of buildings (floor plan, room plan, etc with accurate measurements) using a smartphone. So I am researching about the best AR SDK which can be used for this. There are not many articles pitting Vuforia against ARCore and ARKit. Please suggest the best SDK to use, pros and cons of each.
00
11
00
00
00
00
How to update your Unity project input to SteamVR 2.0? I have some Unity scenes that worked well with the previous version of the SteamVR plugin; since the new version of the plugin, "SteamVR Unity Plugin 2.0", was released, my code no longer works. https://steamcommunity.com/games/250820/announcements/detail/1696059027982397407 I deleted the "SteamVR" folder before importing the new one, as the documentation says, but I get these errors: error CS0246: The type or namespace name `SteamVR_Controller' could not be found. Are you missing an assembly reference? error CS0246: The type or namespace name `SteamVR_TrackedController' could not be found. Are you missing an assembly reference? So I can see that these classes are deprecated: private SteamVR_Controller.Device device; private SteamVR_TrackedController controller; controller = GetComponent<SteamVR_TrackedController>(); What is the new way to get input in code using the SteamVR 2.0 plugin?
11
00
00
00
00
00
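A hedged sketch of the SteamVR 2.0 action-based flow for the question above, assuming the default action manifest has been generated via Window > SteamVR Input (the generated SteamVR_Actions class and its default_InteractUI action are placeholders for whatever actions you define there):

using UnityEngine;
using Valve.VR;

public class TriggerExample : MonoBehaviour
{
    // Can also be assigned in the Inspector instead of being pulled from the generated class.
    public SteamVR_Action_Boolean interactAction = SteamVR_Actions.default_InteractUI;
    public SteamVR_Input_Sources handType = SteamVR_Input_Sources.RightHand;

    void Update()
    {
        if (interactAction.GetStateDown(handType))
        {
            Debug.Log("Interact pressed on " + handType);
        }
    }
}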
"Unable to start Oculus XR Plugin" error when starting game in Unity I am currently developing a 3d VR game for Oculus Quest headset on Unity (v 2019.3.6f1) Whenever I start the game in Unity editor (by pressing "Play" button) I get the following errors : Unable to start Oculus XR Plugin. Failed to load display subsystem. Failed to load input subsystem. XR Plugin is installed and updated to the latest version (1.2.0) : What could be the cause of those errors ? Thanks in advance for your answers.
00
00
00
00
11
00
3D models (.obj or .fbx or .glb) do not load in HoloLens 2 3D Viewer I am exporting simple 3D models as .obj, .fbx or .glb using Blender, and I can successfully display them in the 3D Viewer app of a HoloLens 2. As soon as the models are more complex (for example, created by MakeHuman), the exports cannot be displayed in the HoloLens 2 3D Viewer app. The error message says that the models are not optimised for Windows Mixed Reality. I found some documentation on the limitations of HoloLens 1 .glb files, but I cannot find the specification for HoloLens 2 and the three file formats. In addition: should I reduce the complexity in the Blender models, or during the export, or are there even tools to post-process 3D models for HoloLens 2 / Windows Mixed Reality?
11
00
00
00
00
00
Read controller data from outside VR Rig with action-based input system I'm having trouble using the new action-based input system in Unity OpenXR. With the old (device-based) input system it was possible to retrieve an input device object from outside the XR Rig using the InputDevices.GetDeviceAtXRNode(<node>) function. For example, this is what I would do in the old system to retrieve position data of the right-hand controller: InputDevices.GetDeviceAtXRNode(XRNode.RightHand).TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 position); InputDevices.GetDeviceAtXRNode(XRNode.RightHand).TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation); Unfortunately, I can't find a way to do the same thing with the new action-based input system. All the documentation that I could find on this topic refers to the old way of doing it, and it appears that this method does not work anymore. So, is there a way to retrieve an input device from outside the XR Rig using the new action-based input system? In case it helps: my Unity version is 2020.3.4f1 and I'm using the OpenXR plugin version 1.0.3. Any help is greatly appreciated.
00
00
00
11
00
00
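A hedged sketch for the OpenXR question above, assuming the Unity Input System package is installed alongside the OpenXR plugin: InputActions built in code can bind to the generic XRController usages, so the pose can be read from any script outside the XR Rig. The binding paths are the standard devicePosition/deviceRotation usages; adjust them if your controller profile exposes different controls.

using UnityEngine;
using UnityEngine.InputSystem;

public class RightHandPoseReader : MonoBehaviour
{
    private InputAction positionAction;
    private InputAction rotationAction;

    void OnEnable()
    {
        positionAction = new InputAction(binding: "<XRController>{RightHand}/devicePosition");
        rotationAction = new InputAction(binding: "<XRController>{RightHand}/deviceRotation");
        positionAction.Enable();
        rotationAction.Enable();
    }

    void Update()
    {
        Vector3 position = positionAction.ReadValue<Vector3>();
        Quaternion rotation = rotationAction.ReadValue<Quaternion>();
        // use position/rotation here...
    }

    void OnDisable()
    {
        positionAction.Disable();
        rotationAction.Disable();
    }
}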
What in this custom shader made in Unity could possibly create issues or not translate to openGLCore/GLES? Attempting to create a custom shader for an application that I'm building for Magic Leap 1. The shaders purpose is to only render the shadows cast onto the material. The shader works as intended in the unity editor but throws errors on mldb logs and the material turns pink which in my experience means issues with the renderpipeline/shader. I assume the issue is with some syntax or Unity Semantic that isn't easily translated to openGLCore/GLES by Unity's built in HLSLcc which compiles shaders into said language. The following is the shader from top to bottom: Shader "Custom/SecondaryShadowShader" { Properties { [NoScaleOffset] _MainTex("Texture", 2D) = "grey" {} _Color("Main Color", Color) = (1,1,1,1) _ShadowColor("Shadow Color", Color) = (1,1,1,1) _Cutoff("Alpha cutoff", Range(0,1)) = 0.5 } SubShader { Pass { Tags {"LightMode" = "ForwardBase"} CGPROGRAM #pragma vertex vert #pragma fragment frag fixed4 _ShadowColor; #include "UnityCG.cginc" #include "Lighting.cginc" // compile shader into multiple variants, with and without shadows // (we don't care about any lightmaps yet, so skip these variants) #pragma multi_compile_fwdbase nolightmap nodirlightmap nodynlightmap novertexlight #pragma multi_compile_instancing // shadow helper functions and macros #include "AutoLight.cginc" struct appdata { float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float4 vertex : POSITION; UNITY_VERTEX_INPUT_INSTANCE_ID }; struct v2f { float2 uv : TEXCOORD0; SHADOW_COORDS(1) // put shadows data into TEXCOORD1 fixed3 diff : COLOR0; fixed3 ambient : COLOR1; float4 pos : SV_POSITION; UNITY_VERTEX_OUTPUT_STEREO }; v2f vert(appdata v) { v2f o; UNITY_SETUP_INSTANCE_ID(v); UNITY_INITIALIZE_OUTPUT(v2f, o); UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o); o.pos = UnityObjectToClipPos(v.vertex); o.uv = v.texcoord; half3 worldNormal = UnityObjectToWorldNormal(v.normal); half nl = max(0, dot(worldNormal, _WorldSpaceLightPos0.xyz)); o.diff = nl * _LightColor0.rgb; o.ambient = ShadeSH9(half4(worldNormal,1)); // compute shadows data TRANSFER_SHADOW(o) return o; } UNITY_DECLARE_SCREENSPACE_TEXTURE(_MainTex); fixed4 frag(v2f i) : SV_Target { UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i); fixed4 col = UNITY_SAMPLE_SCREENSPACE_TEXTURE(_MainTex, i.uv); //Insert fixed shadow = SHADOW_ATTENUATION(i); clip(-shadow); fixed3 lighting = i.diff * shadow + i.ambient; lighting += _ShadowColor * lighting; col.rgb *= lighting; return col; } ENDCG } UsePass "Legacy Shaders/VertexLit/SHADOWCASTER" } }
11
00
00
00
00
00
How to keep nodes on AR Kit Scene after opening UIImagePicker I bumped into a problem when all my previously added nodes(for example) text disappearing after opening Photo Library or Camera to add new photo-node. Is there any way to fix it? My alerts code: let alert = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet) alert.addAction(UIAlertAction(title: "Add Text", style: .default , handler:{ (UIAlertAction) in self.addTextView.isHidden = false self.inputTextField.becomeFirstResponder() })) alert.addAction(UIAlertAction(title: "Choose from Library", style: .default , handler:{ (UIAlertAction) in alert.dismiss(animated: true, completion: nil) let picker = UIImagePickerController () picker.delegate = self picker.allowsEditing = true picker.sourceType = UIImagePickerControllerSourceType.photoLibrary self.present (picker, animated: true , completion: nil) })) alert.addAction(UIAlertAction(title: "Take a Photo", style: .default , handler:{ (UIAlertAction) in alert.dismiss(animated: true, completion: nil) let picker = UIImagePickerController () picker.delegate = self picker.allowsEditing = true picker.sourceType = UIImagePickerControllerSourceType.camera self.present (picker, animated: true , completion: nil) })) alert.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler:{ (UIAlertAction) in // ACTION })) self.present(alert, animated: true, completion: nil)
11
00
00
00
00
00
ARKit, detect when tracking is available I'm looking for a way to detect when spatial tracking is "working/not working" in ARKit, i.e. when ARKit has enough visual information to start 3D spatial tracking. In other apps I've tried, the user gets prompted to look around with the phone/camera to resume spatial tracking if ARKit doesn't get enough information from the camera. I have even seen apps with a progress bar showing how much more the user needs to move the device around to resume tracking. Would a good way to detect whether tracking is available be to check how many rawFeaturePoints the ARSession's current frame has? E.g. if the current frame has more than, say, 100 rawFeaturePoints, we can assume that spatial tracking is working. Would this be a good approach, or is there built-in functionality or a better way in ARKit to detect whether spatial tracking is working that I don't know about?
00
00
11
11
00
00
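A hedged sketch for the tracking-availability question above: rather than counting rawFeaturePoints, ARKit reports tracking quality directly through the session delegate, which is what the "look around to resume tracking" prompts are normally driven by. Assumes a view controller acting as the ARSession's delegate.

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .normal:
        print("Spatial tracking is working")
    case .limited(let reason):
        // reason is .initializing, .excessiveMotion, .insufficientFeatures, ...
        print("Tracking limited: \(reason)")
    case .notAvailable:
        print("Tracking not available")
    }
}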
AR.js help/trouble using custom AR markers I need some help with using custom AR markers with AR.js. We have been running into some problems getting objects to initialize over the markers, after downloading the .patt file from the custom markers generator page. Everything is being tested client side on a Node.js server, but every time the webcam turns on nothing appears over an image of a python logo. Code below: <html> <head> <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script> <!-- <script src="https://cdn.rawgit.com/jeromeetienne/AR.js/1.5.0/aframe/build/aframe-ar.js"> </script> --> <script src="aframe-ar.js"></script> </head> <body style="margin : 0px; overflow: hidden;"> <a-scene embedded arjs="sourceType: webcam;"> <a-marker preset="custom" type="pattern" url="pattern-marker.patt"> <a-box position="0 0.5 0" material="opacity: 0.5;"></a-box> </a-marker> </a-scene> </body> </html> This conditional is in that aframe-ar.js file that is local to add the custom preset for the python image marker. We are using Google Chrome. else if( _this.data.preset === 'custom' ){ markerParameters.type = 'pattern' markerParameters.patternUrl = _this.data.patternUrl; markerParameters.markersAreaEnabled = false } I have just been using a local Node.js server to test, also I should mention that the default Hiro marker works but the custom image markers don't. If anyone can point me in the right direction there is a reward! Contact me for details. Cheers.
11
00
00
00
00
00
ARCore Tracking Marker I am developing an Android app that will simply track a real-life marker and draw some graphics on top of it. I have just started looking into frameworks... Is ARCore suitable for this, or Vuforia?
00
11
00
00
00
00
the best practice to integrate arFragment (sceneForm) with existing Fragment APP We have been working on adding AR features to our existing APP for a couple of months with limited progress. Very excited to read the recent development from google on sceneForm and arFragment. our current APP consists three Fragments and one of them will need AR features. It looks straight forward to us,so We replaced the Fragment in our APP with arFragment. The build is successful and stopped during running with little information for debugging. any suggestion on the proper steps for us to upgrade from Fragment to arFragment? or maybe I missed the points of arFragment here? in order to show the problem without for you to go through our length code (yet valuable to us), we constructed a dummy project based on the sample project from Google: HelloSceneform. Basically, we changed the static Fragment to dynamic Fragment. Only two files are changed and two files are added, which are attached thereafter. The modified project can be built successfully, but stopped when starting to run. Thank you Peter /////// File modified, HelloSceneformActivity.java: import android.support.v4.app.FragmentTransaction; // private ArFragment arFragment; private ItemOneFragment arFragment; //arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment); arFragment = ItemOneFragment.newInstance(); //Manually displaying the first fragment - one time only FragmentTransaction transaction = getSupportFragmentManager().beginTransaction(); transaction.replace(R.id.frame_layout, arFragment); transaction.commit(); /////// File modified, activity_ux.xml: <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".HelloSceneformActivity"> </FrameLayout> ////// File added fragment_item_one.xml: <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/frame_layout" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".ItemOneFragment"> </FrameLayout> /////// File added, ItemOneragment.java: package com.google.ar.sceneform.samples.hellosceneform; import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import com.google.ar.sceneform.ux.ArFragment; public class ItemOneFragment extends ArFragment { public static ItemOneFragment newInstance() { ItemOneFragment fragment = new ItemOneFragment(); return fragment; } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { return inflater.inflate(R.layout.fragment_item_one, container, false); } }
00
00
00
00
11
00
How to create an AssetBundle at runtime? Is there a tool which converts 3D models to asset bundles, or is it possible to convert them at runtime? I am able to convert them using this: Loading 3d object from external server in Unity3D. But my application allows end users to upload their own 3D models and view them in the application by downloading them at runtime. P.S.: I am very new to Unity.
00
11
00
11
00
00
ARCore with OpenCV I am trying to create an app that uses the ARCore camera and face detection. Does anyone have an example of ARCore running OpenCV object and face detection? Thanks
00
00
11
11
00
00
How to disable fps display in ar js/ three.js? I am new to AR.js and working on a demo as shown in the tutorial. I am trying to remove this fps view on top left corner. Reference Image: My Code is: ////////////////////////////////////////////////////////////////////////////////// // Init ////////////////////////////////////////////////////////////////////////////////// // init renderer var renderer = new THREE.WebGLRenderer({ // antialias : true, alpha: true }); renderer.setClearColor(new THREE.Color('lightgrey'), 0) // renderer.setPixelRatio( 1/2 ); renderer.setSize(window.innerWidth, window.innerHeight); renderer.domElement.style.position = 'absolute' renderer.domElement.style.top = '0px' renderer.domElement.style.left = '0px' document.body.appendChild(renderer.domElement); // array of functions for the rendering loop var onRenderFcts = []; // init scene and camera var scene = new THREE.Scene(); ////////////////////////////////////////////////////////////////////////////////// // Initialize a basic camera ////////////////////////////////////////////////////////////////////////////////// // Create a camera var camera = new THREE.Camera(); scene.add(camera); //////////////////////////////////////////////////////////////////////////////// // handle arToolkitSource //////////////////////////////////////////////////////////////////////////////// var arToolkitSource = new THREEx.ArToolkitSource({ // to read from the webcam sourceType: 'webcam', // to read from an image // sourceType : 'image', // sourceUrl : THREEx.ArToolkitContext.baseURL + '../data/images/img.jpg', // to read from a video // sourceType : 'video', // sourceUrl : THREEx.ArToolkitContext.baseURL + '../data/videos/headtracking.mp4', }) arToolkitSource.init(function onReady() { onResize() }) // handle resize window.addEventListener('resize', function() { onResize() }) function onResize() { arToolkitSource.onResize() arToolkitSource.copySizeTo(renderer.domElement) if (arToolkitContext.arController !== null) { arToolkitSource.copySizeTo(arToolkitContext.arController.canvas) } } //////////////////////////////////////////////////////////////////////////////// // initialize arToolkitContext //////////////////////////////////////////////////////////////////////////////// // create atToolkitContext var arToolkitContext = new THREEx.ArToolkitContext({ cameraParametersUrl: THREEx.ArToolkitContext.baseURL + './assets/images/camera_para.dat', detectionMode: 'mono', maxDetectionRate: 30, canvasWidth: 80 * 3, canvasHeight: 60 * 3, }) // initialize it arToolkitContext.init(function onCompleted() { // copy projection matrix to camera camera.projectionMatrix.copy(arToolkitContext.getProjectionMatrix()); }) // update artoolkit on every frame onRenderFcts.push(function() { if (arToolkitSource.ready === false) return arToolkitContext.update(arToolkitSource.domElement) }) //////////////////////////////////////////////////////////////////////////////// // Create a ArMarkerControls //////////////////////////////////////////////////////////////////////////////// var markerRoot = new THREE.Group scene.add(markerRoot) var artoolkitMarker = new THREEx.ArMarkerControls(arToolkitContext, markerRoot, { type: 'pattern', patternUrl: THREEx.ArToolkitContext.baseURL + './assets/images/patt.hiro' // patternUrl : THREEx.ArToolkitContext.baseURL + '../data/data/patt.kanji' }) // build a smoothedControls var smoothedRoot = new THREE.Group() scene.add(smoothedRoot) var smoothedControls = new THREEx.ArSmoothedControls(smoothedRoot, { lerpPosition: 0.4, lerpQuaternion: 
0.3, lerpScale: 1, }) onRenderFcts.push(function(delta) { smoothedControls.update(markerRoot) }) ////////////////////////////////////////////////////////////////////////////////// // add an object in the scene ////////////////////////////////////////////////////////////////////////////////// var arWorldRoot = smoothedRoot // add a torus knot var geometry = new THREE.CubeGeometry(1, 1, 1); var material = new THREE.MeshNormalMaterial({ transparent: true, opacity: 0.5, side: THREE.DoubleSide }); var mesh = new THREE.Mesh(geometry, material); mesh.position.y = geometry.parameters.height / 2 arWorldRoot.add(mesh); // var geometry = new THREE.TorusKnotGeometry(0.3, 0.1, 64, 16); // var material = new THREE.MeshNormalMaterial(); // var mesh = new THREE.Mesh(geometry, material); // mesh.position.y = 0.5 // arWorldRoot.add(mesh); onRenderFcts.push(function() { // mesh.rotation.x += 0.1 }) ////////////////////////////////////////////////////////////////////////////////// // render the whole thing on the page ////////////////////////////////////////////////////////////////////////////////// var stats = new Stats(); document.body.appendChild(stats.dom); // render the scene onRenderFcts.push(function() { renderer.render(scene, camera); stats.update(); }) // run the rendering loop var lastTimeMsec = null requestAnimationFrame(function animate(nowMsec) { // keep looping requestAnimationFrame(animate); // measure time lastTimeMsec = lastTimeMsec || nowMsec - 1000 / 60 var deltaMsec = Math.min(200, nowMsec - lastTimeMsec) lastTimeMsec = nowMsec // call each update function onRenderFcts.forEach(function(onRenderFct) { onRenderFct(deltaMsec / 1000, nowMsec / 1000) }) }) I am trying to remove the fps display but unable to get it. Adding : Is there any reference or link for the complete tutorial of AR js using three.js? I want to learn it. But, I am not getting a good tutorial.
11
00
00
00
00
00
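A short note on the fps question above: the counter comes from stats.js, not from AR.js or three.js themselves; it is created by the two lines "var stats = new Stats(); document.body.appendChild(stats.dom);" and refreshed by stats.update() inside the render callback. Removing those lines removes the display, or the element can be kept but hidden, as in this sketch:

var stats = new Stats();
stats.dom.style.display = 'none';   // or simply never append stats.dom to document.body

onRenderFcts.push(function () {
  renderer.render(scene, camera);
  // stats.update();  // no longer needed once the display is removed/hidden
});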
How to move and rotate SCNode using ARKit and Gesture Recognizer? I am working on an AR based iOS app using ARKit(SceneKit). I used the Apple sample code https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as base for this. Using this i am able to move or rotate the whole Virtual Object. But i want to select and move/rotate a Child Node in Virtual object using user finger, similar to how we move/rotate the whole Virtual Object itself. I tried the below two links but it is only moving the child node in particular axis but not freely moving anywhere as the user moves the finger. ARKit - Drag a node along a specific axis (not on a plane) Dragging SCNNode in ARKit Using SceneKit Also i tried replacing the Virtual Object which is a SCNReferenceNode with SCNode so that whatever functionality present for existing Virtual Object applies to Child Node as well, it is not working. Can anyone please help me on how to freely move/rotate not only the Virtual Object but also the child node of a Virtual Object? Please find the code i am currently using below, let tapPoint: CGPoint = gesture.location(in: sceneView) let result = sceneView.hitTest(tapPoint, options: nil) if result.count == 0 { return } let scnHitResult: SCNHitTestResult? = result.first movedObject = scnHitResult?.node //.parent?.parent let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane) if !hitResults.isEmpty{ guard let hitResult = hitResults.last else { return } movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y, hitResult.worldTransform.columns.3.z) }
11
00
00
00
00
00
How to measure the dimensions of a 3d object using ARKit or Apple Vision? Using the iPhone camera (and presumably some combination of ARKit, Apple Vision, CoreML/mlmodels, etc), how would you measure the dimensions (width, height, depth) of an object? The object being something small that sits on a desk Using mlmodel, you can train ML to perform object detection of specific objects. That would only allow you to draw a box around your detected object on the 2d screen. I want to be able to use the phone camera to look at and perhaps move around the object to determine the dimensions/actual size of the object. I've read about edge detection or shape detection, but I don't think I need image to image Holistically-Nested Edge Detection. ARKit excels at using the phone's hardware to measure small scale distances accurately enough. One potential method would be to have a known-size reference object (like a quarter) next to the object to compare, but that would introduce complications & hassles. Ideally, I'd like to point the iPhone camera at the small object on the desk, maybe look around (rotate around the object a bit) and have a ballpark set of measurements of the object size & have ARAnchor(s) for the object's actual location.
00
00
11
00
00
00
Android ARCore Sceneform API. How to change textures at runtime? The server has more than 3000 models and each of them has several material colors. I need to load models and textures separately and set the textures depending on the user's choice. How can I change baseColorMap, normalMap, metallicMap and roughnessMap at runtime? After modelRenderable.getMaterial().setTexture("normalMap", normalMap.get()); nothing happens, so I'm doing something wrong. There is no information in the documentation about this.
11
00
00
11
00
00
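A hedged sketch for the Sceneform texture question above. Two assumptions worth checking: the texture has to be built asynchronously with Texture.builder() before it is assigned, and the name passed to setTexture() must match a sampler that the model's generated material (.sfm/.sfb) actually exposes, otherwise the call silently does nothing. R.drawable.new_color is a placeholder resource.

Texture.builder()
        .setSource(context, R.drawable.new_color)
        .setUsage(Texture.Usage.COLOR)
        .build()
        .thenAccept(texture ->
                modelRenderable.getMaterial().setTexture("baseColorMap", texture))
        .exceptionally(throwable -> {
            Log.e("TextureSwap", "Unable to load texture", throwable);
            return null;
        });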
How can I change the height of a cylinder with "a-animation"? I want to increase the height of a cylinder in A-Frame as an animation. How can this be achieved?
00
00
00
11
00
00
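A minimal sketch for the cylinder-height question above: <a-animation> can tween a component property path, so targeting geometry.height works; the values and duration below are arbitrary examples.

<a-cylinder color="#4CC3D9" radius="0.5" height="1">
  <a-animation attribute="geometry.height"
               from="1" to="3"
               dur="2000"
               direction="alternate"
               repeat="indefinite"></a-animation>
</a-cylinder>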
Play large video on Android without lagging I'm developing an AR game. In a particular situation, I want to play a video that the user selected from the phone gallery on part of the scene in Unity (for example, on a Plane). I tested the Unity Video Player, but it lags badly when the video size is more than 100 MB, and the texture doesn't even display; I can only hear the video's sound. What should I do now? Should I write a native Java plugin which streams the video in Java and sets the texture in Unity? Thanks. videoPlayer.playOnAwake = false; videoPlayer.source = VideoSource.Url; videoPlayer.url = url; videoPlayer.Prepare(); //Wait until video is prepared while (!videoPlayer.isPrepared) { Debug.Log("Preparing Video"); yield return null; } //Assign the Texture from Video to RawImage to be displayed image.texture = videoPlayer.texture; //Play Video videoPlayer.Play(); I also assigned an AudioSource in the editor. Everything works fine only if the video size is less than 10 MB.
11
00
00
00
00
11
Oculus Go Developer Mode - Installed apk successfully but it does not show in Oculus I got a Success message after doing adb install of the apk. However, when I go to Library > Unknown Sources, I do not see my app. The XR settings for the project has Oculus as the SDK. What am I missing?
00
00
00
00
11
00
Unity: Speech Recognition is not supported on this machine I have a problem with an Unity project. More specific with a HoloLens Application. I have added the Keyword recognition from the MixedReality-Toolkit for unity. Until now everything worked fine. Today I had to reset my Laptop and install everything new. After the reset everything worked fine, but after activating my Windows 10 - Educational license in order to enable Hyper-V I get now the following Error message: UnityException: Speech recognition is not supported on this machine. UnityEngine.Windows.Speech.PhraseRecognizer.CreateFromKeywords (System.String[] keywords, UnityEngine.Windows.Speech.ConfidenceLevel minimumConfidence) (at C:/buildslave/unity/build/artifacts/generated/Metro/runtime/SpeechBindings.gen.cs:47) UnityEngine.Windows.Speech.KeywordRecognizer..ctor (System.String[] keywords, UnityEngine.Windows.Speech.ConfidenceLevel minimumConfidence) (at C:/buildslave/unity/build/Runtime/Export/Windows/Speech.cs:221) MixedRealityToolkit.InputModule.InputSources.SpeechInputSource.Start () (at Assets/HoloToolkit/InputModule/Scripts/InputSources/SpeechInputSource.cs:72) On other devices (I have tested it with Windows 10 Home and a bootable USB-Stick on the Laptop with Windows 10 Educational) the Voice recognition still works. Does someone know how to solve this error? Edit: Still have this problem. Does somebody found a new solution for this problem?
11
00
00
00
00
00
Sceneform Collisions I'm trying to play a sound and then destroy two objects of two different types when they collide using Sceneform. I see that Sceneform has a collision api (https://developers.google.com/ar/reference/java/com/google/ar/sceneform/collision/package-summary), but I can't figure out how to act on a collision. I've tried extending a collision shape, overriding the shapeIntersection methods, and setting the Collision Shape property for each node, but that doesn't seem to do anything. There doesn't seem to be any sample code, but the documentation mentions collision listeners. So far, I've just been doing checks the brute force way, but I was hoping there was a more efficient way. EDIT: I've been trying to do something like this: public class PassiveNode extends Node{ public PassiveNode() { PassiveCollider passiveCollider = new PassiveCollider(this); passiveCollider.setSize(new Vector3(1, 1, 1)); this.setCollisionShape(passiveCollider); } public class PassiveCollider extends Box { public Node node; // Remeber Node this is attached to public PassiveCollider(Node node) { this.node = node; } } } public class ActiveNode extends Node { private Node node; private Node target; private static final float metersPerSecond = 1F; public ActiveNode(Node target) { node = this; this.target = target; BallCollision ball = new BallCollision(); ball.setSize(new Vector3(1, 1, 1)); this.setCollisionShape(ball); } @Override public void onUpdate(FrameTime frameTime) { super.onUpdate(frameTime); Vector3 currPos = this.getWorldPosition(); Vector3 targetPos = target.getWorldPosition(); Vector3 direction = Vector3.subtract(targetPos, currPos).normalized(); this.setWorldPosition(Vector3.add(currPos, direction.scaled(metersPerSecond * frameTime.getDeltaSeconds()))); } private class BallCollision extends Box { @Override protected boolean boxIntersection(Box box) { if (box instanceof PassiveNode.PassiveCollider) { //Play Sound node.setEnabled(false); ((PassiveNode.PassiveCollider) box).node.setEnabled(false); return true; } return false; } } } Where a PassiveNode lies on a plane and the ActiveNode is "thrown" from the camera to a point on a plane.
11
00
00
00
00
00
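A hedged sketch for the Sceneform collision question above: instead of overriding the shape-intersection methods, Sceneform's Scene exposes overlapTest()/overlapTestAll(), which use the collision shapes already set on the nodes. Calling it from the moving node's onUpdate keeps it to one check per frame; whether it is available depends on the Sceneform version in use.

@Override
public void onUpdate(FrameTime frameTime) {
    super.onUpdate(frameTime);
    // ... move the node toward the target as before ...

    Node overlapped = getScene().overlapTest(this);
    if (overlapped instanceof PassiveNode) {
        // play the sound here, then disable/remove both nodes
        this.setEnabled(false);
        overlapped.setEnabled(false);
    }
}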
Vuforia + Unity build for iPhone X iOS 11.4 black screen I'm using: Xcode 9.4, Unity 2018.1, Vuforia 7.1.31. This problem is strange. I managed to build and run this project using the same Xcode for an iPhone 6 running iOS 11.4. With the same configuration, it only shows me a blank screen on an iPhone X with iOS 11.4. Has anybody had the same experience?
11
00
00
00
00
00
Remove nodes from scene after use (ARSCNView) I am making an app using ARKit to measure between two points. The goal is to be able to measure length, store that value, then measure width and store it. The problem I am having is disposing of the nodes after I get the measurement. Steps so far: 1) Added a button with a restartFunction. This worked to reset the measurements but did not remove the spheres from the scene, and it also made getting the next measurement clunky. 2) Set a limit of > 2 nodes. This functionally works best, but again the spheres just stay floating in the scene. Here is a screenshot of the best result I have had. @objc func handleTap(sender: UITapGestureRecognizer) { let tapLocation = sender.location(in: sceneView) let hitTestResults = sceneView.hitTest(tapLocation, types: .featurePoint) if let result = hitTestResults.first { let position = SCNVector3.positionFrom(matrix: result.worldTransform) let sphere = SphereNode(position: position) sceneView.scene.rootNode.addChildNode(sphere) let tail = nodes.last nodes.append(sphere) if tail != nil { let distance = tail!.position.distance(to: sphere.position) infoLabel.text = String(format: "Size: %.2f inches", distance) if nodes.count > 2 { nodes.removeAll() } } else { nodes.append(sphere) } } } I am new to Swift (and coding in general) and most of my code has come from piecing together tutorials.
11
00
00
00
00
00
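A hedged sketch for the node-removal question above, reusing the question's own SphereNode and distance(to:) helpers: keep references to the placed spheres and call removeFromParentNode() on them once a measurement pair is complete, instead of only clearing the bookkeeping array.

@objc func handleTap(sender: UITapGestureRecognizer) {
    // ... hit test and create sphere as in the original code ...

    if nodes.count >= 2 {
        // the previous measurement is finished: take its spheres out of the scene first
        nodes.forEach { $0.removeFromParentNode() }
        nodes.removeAll()
    }

    sceneView.scene.rootNode.addChildNode(sphere)
    nodes.append(sphere)

    if nodes.count == 2 {
        let distance = nodes[0].position.distance(to: nodes[1].position)
        infoLabel.text = String(format: "Size: %.2f inches", distance)
    }
}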
Does Flutter support virtual or augmented reality? I have to make an app that uses virtual reality, so should I drop the idea of using Flutter?
00
11
00
00
00
00
How can I use my webcam inside an Azure Windows Server virtual machine? I have a Windows Server 2016 VM on Azure and I am trying to do some work in augmented reality using Vuforia and Unity. An essential part of this is being able to use my webcam, but once I am inside the VM it doesn't recognise my integrated webcam. I tried to connect a webcam through USB as well, but this doesn't work either. Is this an impossible task...getting a server instance to recognise the webcams on my laptop, or is it actually possible? Any help would be appreciated.
00
00
00
00
11
00

Dataset Card for "pc"

More Information needed
