Columns: question (string, lengths 11 to 28.2k), answer (string, lengths 26 to 27.7k), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
Are there any tools for Windows like the ones the *nix world has? I am looking for something like Chef or Puppet. I have found CFEngine, but it still looks very *nix-centric. Ideally it would be open source and command-line driven. The idea is to put together an automated infrastructure with Windows-based servers. Our current IT department does not allow non-Windows servers.
Chef is supported on Windows by Opscode. While we don't run Windows for any of our infrastructure, we do have developers who are continually improving our Windows support. We also get community contributions, and most of the early-phase Windows functionality for Chef was contributed by the community.

Important: Opscode now provides an MSI installer for Chef on Windows. This makes it easier than ever to get Chef and Ruby installed on Windows.

While we have a lot of Unix/Linux background across our teams, our intention is that Windows is treated as a first-class citizen. 2012 will be a big year for Chef and Windows. Keep an eye on the Opscode blog for announcements.

The following Chef resources work on Windows: Environment (sets Windows environment variables), User, Group, Mount, File, Gem Package, Remote File, Cookbook File, Template, Service, Ruby Block, Execute. That is, these are resources included in Chef itself. As Chef is extensible with cookbooks, many more resources are added through a variety of Windows-specific cookbooks. Read on for more information.

You can get started with using Chef and Windows here: http://wiki.opscode.com/display/chef/Fast+Start+Guide+for+Windows

Originally, Doug MacEchern wrote some cookbooks to automate a number of things on Windows, too: https://github.com/dougm/site-cookbooks/tree/master/windows

This information and more is available on the Chef Wiki: http://wiki.opscode.com/display/chef/Installation+on+Windows

Update

The following cookbook adds new resources to Chef to manage Windows: http://community.opscode.com/cookbooks/windows It is an update/rewrite of Doug's fine resources from his repository linked above. Documentation is available on the Chef Wiki.

The following cookbook deploys PowerShell and provides a resource to run PowerShell commands/scripts directly in Chef recipes: http://community.opscode.com/cookbooks/powershell Documentation is available in the README.md included in the cookbook tarball.

Additional cookbooks for installing 7-zip and managing IIS and SQL Server have been added. Our "database" cookbook has been extended with a resource/provider for managing SQL Server databases and users (with the tds rubygem). The knife-windows plugin for knife adds functionality for interacting with Windows systems to provision them with a Chef installation.

Update: We have now added File ACL support for Windows to Chef, for all the usual file/directory suspects.
CFEngine
4,910,034
70
I'm in the process of evaluating if and how a CF .NET enterprise application can be ported to run on Android devices. The application on Windows Mobile phones is run in kiosk mode, where the application autostarts in full-screen mode after booting and users are unable to accidentally or willingly access any other parts of the phone. Is it possible on Android to have only one application autostart after booting and prevent users from accidentally (or willingly) accessing any other parts of the Android device?
You can autostart applications on boot by listening for the android.intent.action.BOOT_COMPLETED intent in a BroadcastReceiver and starting your Activity from there. In the Activity you can register yourself as the new default home screen[1] and handle the keys. I think there are some cases that you can't handle without modifying the framework (like long-press on Home to show the currently active applications), though I could be mistaken. But for a prototype that could be sufficient. Have fun tinkering!

[1]:

    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.HOME" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
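To complement the intent-filter above, here is a minimal sketch of the boot receiver part; the class names KioskBootReceiver and KioskActivity are placeholders, not from the original answer, and the receiver must still be declared in the manifest with the RECEIVE_BOOT_COMPLETED permission.

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    // Sketch (assumed names): launches the kiosk activity after boot.
    // Requires <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED"/>
    // and a <receiver> manifest entry for this class filtering on BOOT_COMPLETED.
    public class KioskBootReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
                Intent launch = new Intent(context, KioskActivity.class);
                // Starting an Activity from a non-Activity context needs this flag.
                launch.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                context.startActivity(launch);
            }
        }
    }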
kiosk
2,068,084
121
I need to disable the Home and other system buttons in my Android application. Example: MX Player (see it on Google Play) - you can press the "lock" icon on the player screen and it locks all hardware and software system buttons. It works fine WITHOUT ROOTING. I tested it on some devices with different Android versions. I tried to disassemble the Kids Lock (plugin) but have no clue how it works yet. I need the same solution as the Kids Lock (plugin) for MX Player: disable Home, Back and all other system buttons. Any suggestions?
First off, please think long and hard about whether you really want to disable the Home button or any other button for that matter (e.g. the Back button); this is not something that should be done (at least most of the time, it is bad design). I can speak only for myself, but if I downloaded an app that doesn't let me do something like clicking an OS button, the next thing I would do is uninstall that app and leave a very bad review. I also believe that your app will not be featured on the App Store.

Now... Notice that MX Player is asking for permission to draw on top of other applications. Since you cannot override the Home button on an Android device (at least not in the latest OS versions), MX Player draws itself on top of your launcher when you "lock" the app and click the Home button. For an example that is a bit simpler and more straightforward to understand, see the Facebook Messenger app.

As I was asked to provide some more info about MX Player's Status Bar and Navigation Bar "overriding", I'm editing my answer to include these topics too.

First things first, MX Player is using Immersive Full-Screen Mode (DevBytes Video) on KitKat. Android 4.4 (API Level 19) introduces a new SYSTEM_UI_FLAG_IMMERSIVE flag for setSystemUiVisibility() that lets your app go truly "full screen." This flag, when combined with the SYSTEM_UI_FLAG_HIDE_NAVIGATION and SYSTEM_UI_FLAG_FULLSCREEN flags, hides the navigation and status bars and lets your app capture all touch events on the screen.

When immersive full-screen mode is enabled, your activity continues to receive all touch events. The user can reveal the system bars with an inward swipe along the region where the system bars normally appear. This clears the SYSTEM_UI_FLAG_HIDE_NAVIGATION flag (and the SYSTEM_UI_FLAG_FULLSCREEN flag, if applied) so the system bars become visible. This also triggers your View.OnSystemUiVisibilityChangeListener, if set. However, if you'd like the system bars to automatically hide again after a few moments, you can instead use the SYSTEM_UI_FLAG_IMMERSIVE_STICKY flag. Note that the "sticky" version of the flag doesn't trigger any listeners, as system bars temporarily shown in this mode are in a transient state.

Second: Hiding the Status Bar. Third: Hiding the Navigation Bar. Please note that although immersive full screen is KitKat-only, hiding the Status Bar and Navigation Bar is not. I don't have much to say about the 2nd and 3rd points; you get the idea, and it's a fast read in any case. Just make sure you pay close attention to View.OnSystemUiVisibilityChangeListener.

I added a Gist that explains what I meant; it's not complete and needs some fixing, but you'll get the idea: https://gist.github.com/Epsiloni/8303531
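As a rough illustration of the flags discussed above (this is not MX Player's code, just a minimal sketch for an activity of your own; FullScreenActivity is a placeholder name), sticky immersive mode on KitKat and later could be applied like this:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;

    public class FullScreenActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            hideSystemUi();
        }

        @Override
        public void onWindowFocusChanged(boolean hasFocus) {
            super.onWindowFocusChanged(hasFocus);
            // Re-apply the flags when the window regains focus, otherwise the
            // bars can stay visible after e.g. a dialog is dismissed.
            if (hasFocus) {
                hideSystemUi();
            }
        }

        private void hideSystemUi() {
            // Hide the status and navigation bars; the "sticky" immersive flag keeps
            // touch events flowing to the app and re-hides the bars after a swipe.
            getWindow().getDecorView().setSystemUiVisibility(
                    View.SYSTEM_UI_FLAG_LAYOUT_STABLE
                    | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
                    | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
                    | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
                    | View.SYSTEM_UI_FLAG_FULLSCREEN
                    | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY);
        }
    }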
kiosk
17,549,478
82
We are using Chrome in kiosk mode, and with the recent addition of pinch-zoom support users are accidentally causing the application to zoom. They then think they've broken it and simply walk away, leaving the application (and consequently a 55" touch screen) in a broken state. So far the only thing that has worked is stopping event propagation for touch events with more than 2 points. The issues with that are that we can't do multitouch apps in that case, and if you act fast the browser reacts before the JavaScript does, which in our tests users still trigger by accident. I've tried the meta tags; they do not work. Honestly, I wish I could disable Chrome zooming entirely, but I can't find a way to do that. How can I stop the browser from zooming?
We've had a similar problem: it manifests as the browser zooming but JavaScript receiving no touch event (or sometimes just a single point before zooming starts). We've found these possible (but possibly not long-term) solutions:

1. Disable the pinch/swipe features when using kiosk mode

If these command-line settings remain in Chrome, you can do the following:

    chrome.exe --kiosk --incognito --disable-pinch --overscroll-history-navigation=0

--disable-pinch - disables the pinch-to-zoom functionality
--overscroll-history-navigation=0 - disables the swipe-to-navigate functionality

2. Disable pinch zoom using the Chrome flag chrome://flags/#enable-pinch

Navigate to the URL chrome://flags/#enable-pinch in your browser and disable the feature. The pinch zoom feature is currently experimental but turned on by default, which probably means it will be force-enabled in future versions. If you're in kiosk mode (and control the hardware/software) you could probably toggle this setting upon installation and then prevent Chrome updates going forward. There is already a roadmap ticket for removing this setting at Chromium Issue 304869. The fact that the browser reacts before JavaScript can prevent it is definitely a bug and has been logged at the Chromium bug tracker. Hopefully it will be fixed before the feature is permanently enabled, or fingers crossed they'll leave it as a setting.

3. Disable all touches, whitelist for elements and events matching your app

In all tests that we've conducted, adding preventDefault() to the document stops the zooming (and all other swipe/touch events) in Chrome:

    document.addEventListener('touchstart', function(event){
        event.preventDefault();
    }, {passive: false});

If you attach your touch-based functionality higher up in the DOM, it'll activate before it bubbles to the document's preventDefault() call. In Chrome it is also important to include the eventListenerOptions parameter, because as of Chrome 51 a document-level event listener is set to {passive: true} by default. This disables normal browser features like swipe-to-scroll though; you would probably have to implement those yourself. If it's a full-screen, non-scrollable kiosk app, maybe these features won't be important.
kiosk
22,999,829
53
I am implementing a kiosk-mode application, and I have successfully made the application full-screen without the status bar appearing prior to 4.3, but I am unable to hide the status bar in 4.3 and 4.4, as the status bar appears when we swipe down from the top of the screen. I have tried to make it full screen by specifying the full-screen theme in the manifest, setting window flags (i.e. setFlags), and using setSystemUiVisibility. Possible duplicate, but no concrete solution found: Permanently hide Android Status Bar. Finally, the thing I want is: how do I hide the status bar permanently in an activity in Android 4.3, 4.4, 5 and 6?
We could not prevent the status bar from appearing in full-screen mode on KitKat devices, so we made a hack that still suits the requirement, i.e. block the status bar from expanding. For that to work, the app was not made full screen. We put an overlay over the status bar and consumed all input events, which prevented the status bar from expanding.

Note: customViewGroup is a custom class which extends any layout (FrameLayout, RelativeLayout, etc.) and consumes touch events. To consume touch events, override the onInterceptTouchEvent method of the view group and return true (a sketch of such a class follows this answer).

Updated

    <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>

Code:

    WindowManager manager = ((WindowManager) getApplicationContext()
            .getSystemService(Context.WINDOW_SERVICE));

    WindowManager.LayoutParams localLayoutParams = new WindowManager.LayoutParams();
    localLayoutParams.type = WindowManager.LayoutParams.TYPE_SYSTEM_ERROR;
    localLayoutParams.gravity = Gravity.TOP;
    localLayoutParams.flags =
            WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE |
            // this is to enable the notification to receive touch events
            WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL |
            // draws over the status bar
            WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN;
    localLayoutParams.width = WindowManager.LayoutParams.MATCH_PARENT;
    localLayoutParams.height = (int) (50 * getResources()
            .getDisplayMetrics().scaledDensity);
    localLayoutParams.format = PixelFormat.TRANSPARENT;

    customViewGroup view = new customViewGroup(this);
    manager.addView(view, localLayoutParams);
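The answer only describes customViewGroup without showing it; a minimal sketch of what such a class could look like, under the answer's own description (extending FrameLayout is just one of the layouts mentioned; the lowercase class name is kept from the answer), is:

    import android.content.Context;
    import android.view.MotionEvent;
    import android.widget.FrameLayout;

    // Overlay view that swallows every touch so the status bar underneath
    // can never be dragged open.
    public class customViewGroup extends FrameLayout {

        public customViewGroup(Context context) {
            super(context);
        }

        @Override
        public boolean onInterceptTouchEvent(MotionEvent ev) {
            // Returning true consumes the event instead of passing it on.
            return true;
        }
    }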
kiosk
25,284,233
31
I am developing a kiosk and am now working on the admin side. In order to get to the admin screen, the user needs to tap the screen 5 times within 3 seconds; otherwise, nothing should happen.
Please read the comments in the code; it is quite straightforward.

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.MotionEvent;

    public class MainActivity extends Activity {

        private int tapCount = 0;
        private long tapCounterStartMillis = 0;

        // detect any touch event on the screen (instead of on a specific view)
        @Override
        public boolean onTouchEvent(MotionEvent event) {
            int eventaction = event.getAction();
            if (eventaction == MotionEvent.ACTION_UP) {
                // get the current system milliseconds
                long time = System.currentTimeMillis();

                // if it is the first tap, or more than 3 seconds have passed since
                // the first tap (so it is like a new try), we reset everything
                if (tapCounterStartMillis == 0 || (time - tapCounterStartMillis > 3000)) {
                    tapCounterStartMillis = time;
                    tapCount = 1;
                }
                // it is not the first tap, and it has been less than 3 seconds since the first
                else { // time - tapCounterStartMillis < 3000
                    tapCount++;
                }

                if (tapCount == 5) {
                    // do whatever you need
                }
                return true;
            }
            return false;
        }
    }
kiosk
21,104,263
26
We are looking to print to a POS printer connected to the machine where Apache is running. Due to the design and deployment of the application, printing should be done from the server (it should detect the order and send it to different printers with different print formats: bill, kitchen orders, and so on). For this reason and others (like accessing the application from an iPad, for example), we discarded options like the QZ-Print applet and need to print directly server-side. We searched a lot and found that there is an extension called php-printer, but it seems outdated and only works under Windows. We followed this code (http://mocopat.wordpress.com/2012/01/18/php-direct-printing-printer-dot-matrix-lx-300/):

    $tmpdir = sys_get_temp_dir();       # get the temporary directory to store the file
    $file = tempnam($tmpdir, 'ctk');    # name of the temporary file that will be printed
    $handle = fopen($file, 'w');

    $condensed = Chr(27) . Chr(33) . Chr(4);
    $bold1 = Chr(27) . Chr(69);
    $bold0 = Chr(27) . Chr(70);
    $initialized = chr(27).chr(64);
    $condensed1 = chr(15);
    $condensed0 = chr(18);
    $corte = Chr(27) . Chr(109);

    $Data  = $initialized;
    $Data .= $condensed1;
    $Data .= "==========================\n";
    $Data .= "| ".$bold1."OFIDZ MAJEZTY".$bold0." |\n";
    $Data .= "==========================\n";
    $Data .= "Ofidz Majezty is here\n";
    $Data .= "We Love PHP Indonesia\n";
    $Data .= "We Love PHP Indonesia\n";
    $Data .= "We Love PHP Indonesia\n";
    $Data .= "We Love PHP Indonesia\n";
    $Data .= "We Love PHP Indonesia\n";
    $Data .= "--------------------------\n";
    $Data .= $corte;

    fwrite($handle, $Data);
    fclose($handle);
    copy($file, "//localhost/KoTickets");  # do the printing
    unlink($file);

And it works, but this sends plain text, and we need to send an image (logo) and format a nicer bill. We tried creating a PDF and "sending" it to the printer in the same way, but it just prints blank. I found a library for working with network printers (escpos-php on GitHub), but we need to work with USB printers too, to avoid forcing our customers to change hardware. Any ideas on how to achieve this? Thanks in advance.
Author of escpos-php here. If your printers do support ESC/POS (most thermal receipt printers seem to use some subset of it), then I think the driver will accommodate your use case: USB or network printing, a logo, and some formatting. Some of these are quite recent additions.

USB printing

escpos-php prints to a file pointer. On Linux, you can make the USB printer visible as a file using the usblp driver, and then just fopen() it (USB receipt example, blog post about installing a USB printer on Linux). So printing "Hello world" on a USB printer is only slightly different to printing to a networked printer:

    <?php
    require __DIR__ . '/vendor/autoload.php';
    use Mike42\Escpos\PrintConnectors\FilePrintConnector;
    use Mike42\Escpos\Printer;

    $connector = new FilePrintConnector("/dev/usb/lp0");
    $printer = new Printer($connector);
    $printer -> text("Hello World!\n");
    $printer -> cut();
    $printer -> close();

Or, more like the code you are currently using successfully, you could write to a temp file and copy it:

    <?php
    require __DIR__ . '/vendor/autoload.php';
    use Mike42\Escpos\PrintConnectors\FilePrintConnector;
    use Mike42\Escpos\Printer;

    /* Open file */
    $tmpdir = sys_get_temp_dir();
    $file = tempnam($tmpdir, 'ctk');

    /* Do some printing */
    $connector = new FilePrintConnector($file);
    $printer = new Printer($connector);
    $printer -> text("Hello World!\n");
    $printer -> cut();
    $printer -> close();

    /* Copy it over to the printer */
    copy($file, "//localhost/KoTickets");
    unlink($file);

So in your POS system, you would need a function which returns a file pointer based on your customer configuration and preferred destination. Receipt printers respond quite quickly, but if you have a few iPads making orders, you should wrap operations to each printer with a file lock (flock()) to avoid concurrency-related trouble. Also note that USB support on Windows is un-tested.

Logo & formatting

Once you have figured out how you plan to talk to the printer, you can use the full suite of formatting and image commands. A logo can be printed from a PNG file like so:

    use Mike42\Escpos\EscposImage;
    $logo = EscposImage::load("foo.png");
    $printer -> graphics($logo);

And for formatting, the README.md and the example below should get you started. For most receipts, you only really need:

selectPrintMode() to alter font sizes.
setEmphasis() to toggle bold.
setJustification() to left-align or center some text or images.
cut() after each receipt.

I would also suggest that where you are currently using an example that draws boxes like this:

    =========
    |       |
    =========

you could make use of the characters in IBM Code Page 437, which are designed for drawing boxes and are supported by many printers - just include characters 0xB3 to 0xDA in the output. They aren't perfect, but it looks a lot less "text"-y.

    $box = "\xda".str_repeat("\xc4", 10)."\xbf\n";
    $box .= "\xb3".str_repeat(" ", 10)."\xb3\n";
    $box .= "\xc0".str_repeat("\xc4", 10)."\xd9\n";
    $printer -> textRaw($box);

Full example

The below example is also now included with the driver. I think it looks like a fairly typical store receipt, formatting-wise, and could be easily adapted to your kitchen scenario. (Scanned output omitted here.) The PHP source code to generate it:

    <?php
    require __DIR__ . '/vendor/autoload.php';
    use Mike42\Escpos\Printer;
    use Mike42\Escpos\EscposImage;
    use Mike42\Escpos\PrintConnectors\FilePrintConnector;

    /* Open the printer; this will change depending on how it is connected */
    $connector = new FilePrintConnector("/dev/usb/lp0");
    $printer = new Printer($connector);

    /* Information for the receipt */
    $items = array(
        new item("Example item #1", "4.00"),
        new item("Another thing", "3.50"),
        new item("Something else", "1.00"),
        new item("A final item", "4.45"),
    );
    $subtotal = new item('Subtotal', '12.95');
    $tax = new item('A local tax', '1.30');
    $total = new item('Total', '14.25', true);
    /* Date is kept the same for testing */
    // $date = date('l jS \of F Y h:i:s A');
    $date = "Monday 6th of April 2015 02:56:25 PM";

    /* Start the printer */
    $logo = EscposImage::load("resources/escpos-php.png", false);
    $printer = new Printer($connector);

    /* Print top logo */
    $printer -> setJustification(Printer::JUSTIFY_CENTER);
    $printer -> graphics($logo);

    /* Name of shop */
    $printer -> selectPrintMode(Printer::MODE_DOUBLE_WIDTH);
    $printer -> text("ExampleMart Ltd.\n");
    $printer -> selectPrintMode();
    $printer -> text("Shop No. 42.\n");
    $printer -> feed();

    /* Title of receipt */
    $printer -> setEmphasis(true);
    $printer -> text("SALES INVOICE\n");
    $printer -> setEmphasis(false);

    /* Items */
    $printer -> setJustification(Printer::JUSTIFY_LEFT);
    $printer -> setEmphasis(true);
    $printer -> text(new item('', '$'));
    $printer -> setEmphasis(false);
    foreach ($items as $item) {
        $printer -> text($item);
    }
    $printer -> setEmphasis(true);
    $printer -> text($subtotal);
    $printer -> setEmphasis(false);
    $printer -> feed();

    /* Tax and total */
    $printer -> text($tax);
    $printer -> selectPrintMode(Printer::MODE_DOUBLE_WIDTH);
    $printer -> text($total);
    $printer -> selectPrintMode();

    /* Footer */
    $printer -> feed(2);
    $printer -> setJustification(Printer::JUSTIFY_CENTER);
    $printer -> text("Thank you for shopping at ExampleMart\n");
    $printer -> text("For trading hours, please visit example.com\n");
    $printer -> feed(2);
    $printer -> text($date . "\n");

    /* Cut the receipt and open the cash drawer */
    $printer -> cut();
    $printer -> pulse();
    $printer -> close();

    /* A wrapper to organise item names & prices into columns */
    class item
    {
        private $name;
        private $price;
        private $dollarSign;

        public function __construct($name = '', $price = '', $dollarSign = false)
        {
            $this -> name = $name;
            $this -> price = $price;
            $this -> dollarSign = $dollarSign;
        }

        public function __toString()
        {
            $rightCols = 10;
            $leftCols = 38;
            if ($this -> dollarSign) {
                $leftCols = $leftCols / 2 - $rightCols / 2;
            }
            $left = str_pad($this -> name, $leftCols);
            $sign = ($this -> dollarSign ? '$ ' : '');
            $right = str_pad($sign . $this -> price, $rightCols, ' ', STR_PAD_LEFT);
            return "$left$right\n";
        }
    }
kiosk
25,973,046
24
I am modifying the AOSP source code because my app needs to run in a kiosk environment. I want Android to boot directly into the app. I've excluded Launcher2 from generic_no_telephony.mk and added the app there. Now Android prompts me all the time to choose a default launcher. The two choices available in the pop-up are Home Sample and my app. How can I exclude the Android Home Sample launcher? Or is there another way to set the default launcher in an AOSP build?
Instead of modifying the AOSP make files (which is annoying because then you need to track your changes) it is easier to add a LOCAL_OVERRIDES_PACKAGES line to your app's make file. For instance: LOCAL_OVERRIDES_PACKAGES := Launcher2 Launcher3 added to your Android.mk file will ensure that those packages are not added to any build where this package is added. Following that, you should do a make installclean and then start your build the same way you always make your build. The make installclean is important to remove the packages that are left behind by the previous build. I also just found a nice answer to how to do this in another question, see: How would I make an embedded Android OS with just one app?
kiosk
22,911,156
17
So, I need to build a kiosk type of application for use in an internet cafe. The app needs to load and display some options of things to do. One option is to launch IE to surf. Another option is to play a game. I've been reading that what I probably want to do is replace the Windows shell and have it run my app when the OS loads. I'd also have to disable the Task Manager. This is a multipart question:

1. Can I use .NET to create this?
2. What OS do I have to use? I keep seeing Windows XP Embedded pop up in my reading.
3. Will there be any issues with the app occasionally loading IE?
4. Are there any other tasks that I should be aware of when doing this, other than the Task Manager and replacing the shell?
5. If I can do it in C#, is there anything in particular that I should know about? Maybe my forms have to inherit certain classes, etc.
You should check out Microsoft Windows SteadyState. It has plenty of features and is free to use.

Windows SteadyState Features

Whether you manage computers in a school computer lab or an Internet cafe, a library, or even in your home, Windows SteadyState helps make it easy for you to keep your computers running the way you want them to, no matter who uses them.

Windows Disk Protection – Helps protect the Windows partition, which contains the Windows operating system and other programs, from being modified without administrator approval. Windows SteadyState allows you to set Windows Disk Protection to remove all changes upon restart, to remove changes at a certain date and time, or to not remove changes at all. If you choose to use Windows Disk Protection to remove changes, any changes made by shared users when they are logged on to the computer are removed when the computer is restarted.

User Restrictions and Settings – The user restrictions and settings can help to enhance and simplify the user experience. Restrict user access to programs, settings, Start menu items, and options in Windows. You can also lock shared user accounts to prevent changes from being retained from one session to the next.

User Account Manager – Create and delete user accounts. You can use Windows SteadyState to create user accounts on alternative drives that will retain user data and settings even when Windows Disk Protection is turned on. You can also import and export user settings from one computer to another - saving valuable time and resources.

Computer Restrictions – Control security settings, privacy settings, and more, such as preventing users from creating and storing folders in drive C and from opening Microsoft Office documents from Internet Explorer®.

Schedule Software Updates – Update your shared computer with the latest software and security updates when it is convenient for you and your shared users.

Download: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=d077a52d-93e9-4b02-bd95-9d770ccdb431
kiosk
3,581,059
14
I have a kiosk-mode application which hides all traces of the System UI (notification bar and navigation buttons). On versions of Android pre-Lollipop the following works fine (as root):

    service call activity 42 s16 com.android.systemui

In Lollipop, however, this makes the screen completely black as well as hiding the System UI, so it cannot be used. Does anyone know of a workaround for this? I have tried the Device Owner/Admin solution for screen pinning, but unfortunately this is not acceptable because it does not hide the System UI entirely: it leaves the back button visible when swiping from the bottom of the screen.
If the device is rooted, you could disable the SystemUI:

    pm disable-user com.android.systemui

and then the device-owner method works fine. This method should not be used if the device runs other apps, because if your app crashes, the SystemUI might be disabled and the user can't interact with the device. The device-owner XML:

    <?xml version='1.0' encoding='utf-8' standalone='yes' ?>
    <device-owner package="com.mycompany" name="*mycompany" />
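For reference, a minimal sketch of the device-owner screen-pinning approach mentioned above, assuming your app has already been set as device owner; the class names (KioskActivity, KioskDeviceAdminReceiver) are placeholders and not part of the original answer:

    import android.app.Activity;
    import android.app.admin.DevicePolicyManager;
    import android.content.ComponentName;
    import android.content.Context;
    import android.os.Bundle;

    public class KioskActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            DevicePolicyManager dpm =
                    (DevicePolicyManager) getSystemService(Context.DEVICE_POLICY_SERVICE);
            // KioskDeviceAdminReceiver is a hypothetical DeviceAdminReceiver subclass
            // declared in the manifest as the device admin component.
            ComponentName admin = new ComponentName(this, KioskDeviceAdminReceiver.class);

            // Only works if this app is the device owner; whitelisting the package
            // lets startLockTask() pin silently, without the confirmation dialog.
            if (dpm.isDeviceOwnerApp(getPackageName())) {
                dpm.setLockTaskPackages(admin, new String[] { getPackageName() });
                startLockTask();
            }
        }
    }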
kiosk
27,942,053
14
How can I programmatically enable/disable an android screen reader service such as TalkBack? I am developing a kiosk type application that will be installed on an Android device that will be loaned to visitors while they visit a particular museum. (We are still in the process of determining what device we will use.) The plan is to only allow users to use our app and not have access to the android settings application. However, we'd like to allow users to configure some accessibility settings. When they are finished with the device, we need to restore all the settings to our defaults. The discussion at the link below has many suggesting launching Android's Settings app. But we don't want users accessing many other settings. How to Programmatically Enable/Disable Accessibility Service in Android
Only system apps can enable/disable an accessibility service programmatically. System apps can directly write to the secure settings database to start an accessibility service:

    Settings.Secure.putString(getContentResolver(),
            Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES,
            "com.packagename/com.packagename.componentname");

The following permission is required to write to the secure settings database:

    <uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS" />

For non-system apps, the only way to start an accessibility service is to direct the user to the accessibility settings screen via an intent and let them start the service manually:

    Intent intent = new Intent(Settings.ACTION_ACCESSIBILITY_SETTINGS);
    startActivity(intent);
kiosk
38,360,198
14
I have Chrome opening in kiosk mode - I added the --kiosk flag to the Chrome shortcut, which works as expected. The kiosk allows browsing of our intranet and the internet. I realise I can use JavaScript to redirect pages on our intranet, but what about the internet? We don't want people, for example, browsing to YouTube and then walking away. We would like to have the browser redirect to www.MyDomain.com after x minutes of inactivity. I have tried Kiosk here, which does exactly what we require, but the swipe left/right gestures don't seem to work for page navigation (I have already contacted the developer via GitHub). Any suggestions?
I managed to find an answer to this question on another site. I ended up using a Chrome extension called Idle Reset. Hopefully it helps somebody else.
kiosk
33,284,153
12
I wish to set up what is usually called a kiosk, running Firefox locked down to our own specific home page (and links from there). The base operating system is CentOS 5 (i.e. just like Red Hat Enterprise Linux 5). Ideally I want Firefox to start full screen (and I have installed the full-fullscreen add-on to help with this) and to be locked as such (i.e. F11 does not work). I need to be able to install this system using one or more rpm files. I have tested my full-screen Firefox setup rpm under GNOME, and it works fine - my GNOME desktop is 1024x768, and the selected home page comes up exactly filling the screen - it looks great. However, I do not want to bother with a desktop environment (like GNOME or KDE), just run Firefox as the sole X client program, with a fixed screen size of 1024x768. I have built rpms to install X, configure it to run at 1024x768, and fire up X automatically from an autologin using shell scripts. My main autologin script contains this:

    startx ~/client/xClient.sh -- :1 &

xClient.sh contains this:

    while [ true ]
    do
        firefox
    done

My problem is that Firefox does not come up full screen under this setup. The Firefox window is smaller than the screen, and the top left corner is off the screen - this means the web page gets scrollbars, the top and left of the page do not show, and there is a black area along the bottom and right. Does anyone know the reason for this behaviour? What solutions can you suggest? I suppose, if necessary, I could install GNOME on the machine and then try to lock it down - but it seems silly to add something as complex as GNOME just to get the window to appear the right size and in the right place! Plus there is the extra task of trying to lock GNOME down so the users can't do anything else with the machine. If you think this question should not be on Stack Overflow, please tell me where it should go. (I think writing rpm and shell scripts is programming, but maybe they don't count? If not, sorry!)
You have 2 options. You can install a kiosk plug-in that allows you to start Firefox automatically in full-screen mode (amongst other things); one example would be R-kiosk. Or you can skip Firefox and create a XUL application that does what you want. You can find a sample application here, and you can find full-screen code (not tested) here.
kiosk
9,586,290
11
What exactly does kiosk: true in the BrowserWindow config of a new ElectronJS window do? The documentation just states that the parameter indicates, that the window is in 'kiosk' mode. I was unable to find information on what this means.
Basically, Kiosk mode is a Windows operating system (OS) feature that only allows one application to run. Kiosk mode is a common way to lock down a Windows device when that device is used for a specific task or used in a public setting. So in electron kiosk mode, we'd have the ability to lock down our application to a point that users are restricted to the actions that we want them to perform. Also, the browser would merely act as our canvas with exactly defined capabilities and doesn't get into our way. And this is why you want to use Electron!
kiosk
70,456,451
11
I have a homemade Sinatra application which I intend to host on Heroku. I use foreman and shotgun in development, with the following Procfile:

    web: shotgun config.ru -s thin -o 0.0.0.0 -p $PORT -E $RACK_ENV

It works great in both development and production. But the thing is, I don't want to use shotgun in production since it's too slow. Can we use separate Procfile configurations for dev and prod?
Use multiple Procfiles and specify the -f or --procfile run option to select one. In dev (where Procfile.dev contains your shotgun web process):

    foreman start -f Procfile.dev

In production, foreman start will pick up the normal Procfile. Alternatively, you could create a bin directory in your app with a script that starts the appropriate web server depending on $RACK_ENV (an idea I found in a comment made by the creator of Foreman, so worth considering).
Foreman
11,592,798
78
When I run foreman I get the following:

    > foreman start
    16:47:56 web.1 | started with pid 27122

Only when I stop it (via Ctrl-C) does it show me what is missing:

    ^CSIGINT received
    16:49:26 system | sending SIGTERM to all processes
    16:49:26 web.1  | => Booting Thin
    16:49:26 web.1  | => Rails 3.0.0 application starting in development on http://0.0.0.0:5000
    16:49:26 web.1  | => Call with -d to detach
    16:49:26 web.1  | => Ctrl-C to shutdown server
    16:49:26 web.1  | >> Thin web server (v1.3.1 codename Triple Espresso)
    16:49:26 web.1  | >> Maximum connections set to 1024
    16:49:26 web.1  | >> Listening on 0.0.0.0:5000, CTRL+C to stop
    16:49:26 web.1  | >> Stopping ...
    16:49:26 web.1  | Exiting
    16:49:26 web.1  | >> Stopping ...

How do I fix it?
I've been able to resolve this issue in 2 different ways:

1. From https://github.com/ddollar/foreman/wiki/Missing-Output: if you are not seeing any output from your program, there is a good chance that it is buffering stdout. Ruby buffers stdout by default. To disable this behavior, add this code as early as possible in your program:

    # ruby
    $stdout.sync = true

2. By installing foreman via the Heroku Toolbelt package.

But I still don't know what's happening, nor why these 2 ways resolved the issue…
Foreman
8,717,198
55
Can you comment out lines in a .env file read by foreman?
FWIW, '#' appears to work as a comment character. It at least has the effect of removing unwanted environment declarations. It might be declaring other variables whose names start with a #, but... it still works. E.g.:

    DATABASE_URL=postgres://mgregory:@localhost/mgregory
    #DATABASE_URL=mysql://root:secret@localhost:3306/cm_central

results in Postgres being used by Django when started by foreman with this .env file, which is what I wanted.
Foreman
26,713,508
50
I want to be able to set environment variables in my Django app so that tests can run. For instance, my views rely on several API keys. There are ways to override settings during testing, but I don't want them defined in settings.py, as that is a security issue. I've tried setting these environment variables in my setup function, but that doesn't pass the values to the Django application.

    class MyTests(TestCase):
        def setUp(self):
            os.environ['TEST'] = '123'  # doesn't propagate to app

When I test locally, I simply have an .env file I run with foreman start -e .env web, which supplies os.environ with values. But Django's unittest.TestCase does not have a way (that I know of) to set that. How can I get around this?
test.support.EnvironmentVarGuard is an internal API that might be changed from version to version with breaking (backward-incompatible) changes. In fact, the entire test package is for internal use only. It is explicitly stated on the test package documentation page that it's for internal testing of core libraries and NOT a public API (see links below).

You should use patch.dict() from Python's standard-library unittest.mock. It can be used as a context manager, decorator or class decorator. See the example code below, copied from the official Python documentation:

    import os
    from unittest.mock import patch

    with patch.dict('os.environ', {'newkey': 'newvalue'}):
        print(os.environ['newkey'])    # should print out 'newvalue'
        assert 'newkey' in os.environ  # should be True

    assert 'newkey' not in os.environ  # should be True (the patch is undone on exit)

Update: for those who don't read the documentation thoroughly and might have missed the note, read more test package notes at https://docs.python.org/2/library/test.html or https://docs.python.org/3/library/test.html
Foreman
31,195,183
46
I have been attempting to complete this tutorial, but have run into a problem with the foreman start line. I am using a Windows 7, 64-bit machine and am attempting to do this in the git bash terminal provided by the Heroku Toolbelt. When I enter foreman start I receive:

    sh.exe": /c/Program Files (x86)/Heroku/ruby-1.9.2/bin/foreman: "c:/Program: bad interpreter: No such file or directory

So I tried entering cmd in git bash by typing cmd and then using foreman start (as a comment on one of the answers to this question suggests). This is what that produced:

    Bad file descriptor
    c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0/lib/foreman/engine.rb:377:in `read_nonblock'
    c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0/lib/foreman/engine.rb:377:in `block (2 levels) in watch_for_output'
    c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0/lib/foreman/engine.rb:373:in `loop'
    c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0/lib/foreman/engine.rb:373:in `block in watch_for_output'
    21:06:08 web.1 | exited with code 1
    21:06:08 system | sending SIGKILL to all processes

I have no clue what the second set of errors is trying to tell me, since the file location it seems to claim engine.rb is running from does not even exist on my computer. I have looked at other answers to similar problems; however, I am not receiving similar errors and so do not believe a solution to my problem currently exists.
I had this problem. I fixed it by uninstalling version 0.62 of the foreman gem and installing 0.61:

    gem uninstall foreman
    gem install foreman -v 0.61
Foreman
15,399,637
41
I think this is a little, easy question! I'm using a .env file to keep all my environment variables, and I'm using foreman. Unfortunately, these environment variables are not being loaded when running the Rails console (rails c), so I'm now loading them manually after starting the console, which is not the best way. I'd like to know if there is any better way to do this.
About a year ago, the "run" command was added to foreman ref: https://github.com/ddollar/foreman/pull/121 You can use it as follow: foreman run rails console or foreman run rake db:migrate
Foreman
15,370,814
34
I have this simple Procfile:

    web: myapp

myapp is in the path, but the process's home directory should be ./directory/. How can I specify in the Procfile where the process is to be started? https://github.com/ddollar/foreman/pull/101 doesn't help, because it assumes that this working directory should be the same for every process specified by the Procfile.
The shell is the answer. It's as simple as:

    web: sh -c 'cd ./directory/ && exec appname'
Foreman
13,284,310
27
I installed Redis this afternoon and it caused a few errors, so I uninstalled it, but this error persists when I launch the app with foreman start. Any ideas on a fix?

    foreman start
    22:46:26 web.1 | started with pid 1727
    22:46:26 web.1 | 2013-05-25 22:46:26 [1727] [INFO] Starting gunicorn 0.17.4
    22:46:26 web.1 | 2013-05-25 22:46:26 [1727] [ERROR] Connection in use: ('0.0.0.0', 5000)
Just type sudo fuser -k 5000/tcp. This will kill all processes associated with port 5000.
Foreman
16,756,624
22
I am writing a web app in JavaScript using Node.js. I use Foreman, but I don't want to manually restart the server every time I change my code. Can I tell Foreman to reload the entire web app before handling an HTTP request (i.e. restart the node process)?
Here's an adjusted version of Pendlepants solution. Foreman looks for a .env file to read environment variables. Rather than adding a wrapper, you can just have Foreman switch what command it uses to start things up.

In .env:

    WEB=node app.js

In dev.env:

    WEB=supervisor app.js

In your Procfile:

    web: $WEB

By default, Foreman will read from .env (in production), but in dev just run this:

    foreman start -e dev.env
Foreman
9,131,496
21
I am trying to export my application to another process management format/system (specifically, upstart). In doing so, I have come across a number of roadblocks, mostly due to lacking documentation. As a non-root user, I ran the following command (as shown here):

    -bash> foreman export upstart /etc/init
    ERROR: Could not create: /etc/init

I "could not create" the directory due to inadequate permissions, so I used sudo:

    -bash> sudo foreman export upstart /etc/init
    Password:
    ERROR: Could not chown /var/log/app to app

I "could not chown... to app" because there is no user named app. Where is app coming from? How should I use foreman to export to upstart?
app is the default for both the name of the app and the name of the user the application should be run as, when the corresponding options (--app and --user) are not used. See the foreman man page for the available options, but note that at the time of this writing the official synopsis did not include [options]:

    foreman export [options] <format> [location]

Example:

    -bash> sudo foreman export --app foo --user bar upstart /etc/init
    Password:
    [foreman export] writing: foo.conf
    [foreman export] writing: foo-web.conf
    [foreman export] writing: foo-web-1.conf
    [foreman export] writing: foo-worker.conf
    [foreman export] writing: foo-worker-1.conf

Result:

    -bash> l /etc/init/
    total 80
    drwxr-xr-x  12 root wheel  408 20 Oct 09:31 .
    drwxr-xr-x  94 root wheel 3196 20 Oct 08:05 ..
    -rw-r--r--   1 root wheel  236 20 Oct 09:31 foo-web-1.conf
    -rw-r--r--   1 root wheel   41 20 Oct 09:31 foo-web.conf
    -rw-r--r--   1 root wheel  220 20 Oct 09:31 foo-worker-1.conf
    -rw-r--r--   1 root wheel   41 20 Oct 09:31 foo-worker.conf
    -rw-r--r--   1 root wheel  315 20 Oct 09:31 foo.conf

    -bash> l /var/log/foo/
    total 0
    drwxr-xr-x   2 bar  wheel   68 20 Oct 09:31 .
    drwxr-xr-x  45 root wheel 1530 20 Oct 09:31 ..
Foreman
12,990,842
19
I'm following the Heroku tutorial for Heroku/Facebook integration (but I suspect this issue has nothing to do with Facebook integration) and I got stuck at the stage where I was supposed to start foreman (I've installed the Heroku Toolbelt for Windows, which includes foreman):

    > foreman start

gives:

    C:/RailsInstaller/Ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/dependency.rb:247:in `to_specs': Could not find foreman (>= 0) amongst [POpen4-0.1.4, Platform-0.4.0, ZenTest-4.6.2, abstract-1.0.0, actionmailer-3.0.11, actionmailer-3.0.9, actionpack-3.0.11, actionpack-3.0.9, activemodel-3.0.11, activemodel-3.0.9, activerecord-3.0.11, activerecord-3.0.9, activerecord-sqlserver-adapter-3.0.15, activeresource-3.0.11, activeresource-3.0.9, activesupport-3.0.11, activesupport-3.0.9, addressable-2.2.6, annotate-2.4.0, arel-2.0.10, autotest-4.4.6, autotest-growl-0.2.16, autotest-rails-pure-4.1.2, autotest-standalone-4.5.8, builder-2.1.2, bundler-1.0.15, diff-lcs-1.1.3, erubis-2.6.6, factory_girl-1.3.3, factory_girl_rails-1.0, faker-0.3.1, gravatar_image_tag-1.0.0.pre2, heroku-2.14.0, i18n-0.5.0, json-1.6.1, launchy-2.0.5, mail-2.2.19, mime-types-1.17.2, mime-types-1.16, nokogiri-1.5.0-x86-mingw32, open4-1.1.0, pg-0.11.0-x86-mingw32, polyglot-0.3.3, polyglot-0.3.1, rack-1.2.4, rack-1.2.3, rack-mount-0.6.14, rack-test-0.5.7, rails-3.0.11, rails-3.0.9, railties-3.0.11, railties-3.0.9, rake-0.9.2.2, rake-0.8.7, rb-readline-0.4.0, rdoc-3.11, rdoc-3.8, rest-client-1.6.7, rspec-2.6.0, rspec-core-2.6.4, rspec-expectations-2.6.0, rspec-mocks-2.6.0, rspec-rails-2.6.1, rubygems-update-1.8.11, rubyzip-0.9.4, rubyzip2-2.0.1, spork-0.9.0.rc8-x86-mingw32, sqlite3-1.3.3-x86-mingw32, sqlite3-ruby-1.3.3, term-ansicolor-1.0.7, thor-0.14.6, tiny_tds-0.4.5-x86-mingw32, treetop-1.4.10, treetop-1.4.9, tzinfo-0.3.31, tzinfo-0.3.29, webrat-0.7.1, will_paginate-3.0.pre2, win32-api-1.4.8-x86-mingw32, win32-open3-0.3.2-x86-mingw32, win32-process-0.6.5, windows-api-0.4.0, windows-pr-1.2.1, zip-2.0.2] (Gem::LoadError)
        from C:/RailsInstaller/Ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/dependency.rb:256:in `to_spec'
        from C:/RailsInstaller/Ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems.rb:1210:in `gem'
        from C:/Program Files (x86)/ruby-1.9.3/bin/foreman:18

Since I'm a complete noob at this, I'm not sure if my question here is a duplicate of Error on 'foreman start' while following the Python/Flask Heroku tutorial (because it's not quite the same error). If so, does anyone have a method for deploying a development environment on Windows (for Heroku, Python, Facebook app)? Or should I use Ubuntu for this? Thanks
Although this question doesn't seem to be of interest to anyone here (5 views in ~2 hours, 0 answers, 0 comments...), I have found the solution and am ready to share it with anyone who encounters it:

1. Install the latest Ruby from rubyinstaller.org (1.9.3-p194). Sometimes there is a collision between installs of the same version; in my case I just uninstalled all versions of Ruby, but if you already have another application that needs an older version then you have to be more careful.

2. Check that your system defaults to this version by invoking ruby -v at the command prompt and getting ruby 1.9.3p194 (2012-04-20) [i386-mingw32] (you may have to close and re-open cmd to pick up the new environment variables).

3. Still in cmd, invoke:

    gem install foreman
    gem install taps

4. Now go to your Procfile app (e.g. your Heroku example app from the tutorial) and execute foreman start. You should see something like this:

    18:23:52 web.1 | started with pid 7212
    18:23:54 web.1 | * Running on http://0.0.0.0:5000/
    18:23:54 web.1 | * Restarting with reloader
Foreman
11,434,287
18
I simply followed the Getting Started with Node.js tutorial from Heroku: https://devcenter.heroku.com/articles/getting-started-with-nodejs#declare-process-types-with-procfile But I get an error at the part "Declare process types with Procfile". My problem is that my cmd (on Windows 7) doesn't find the command "foreman". Any solutions? I downloaded/installed the Heroku Toolbelt; the login works fine, but foreman doesn't.
I had the same problem on Windows 7 64-bit, using git's bash. Here's what I did:

1. Uninstall the Toolbelt, Ruby, and Git using Control Panel's "Programs and Features".

2. Reinstall the Toolbelt to C:\Heroku (see known issue for more info).

3. Add C:\Program Files (x86)\git\bin;C:\Heroku\ruby-1.9.2\bin to the system PATH variable: Control Panel, System, Advanced system settings, Environment Variables..., System variables, variable Path, Edit... (change ruby-1.9.2 if a future version of the Toolbelt includes a newer version of Ruby).

4. Open a git bash window and uninstall foreman version 0.63:

    $ gem uninstall foreman

then install version 0.61 (see here for more info):

    $ gem install foreman -v 0.61

Now foreman worked for me:

    $ foreman start
Foreman
19,078,939
18
I am trying to use foreman to start my Rails app. Unfortunately I have difficulties connecting my IDE for debugging. I read here about using

    Debugger.wait_connection = true
    Debugger.start_remote

to start a remote debugging session, but that does not really work out. Question: is there a way to debug a Rails (3.2) app started by foreman? If so, what is the approach?
If you use several workers with the full Rails environment, you could use the following initializer:

    # Enable debugger with foreman, see https://github.com/ddollar/foreman/issues/58
    if Rails.env.development?
      require 'debugger'
      Debugger.wait_connection = true

      def find_available_port
        server = TCPServer.new(nil, 0)
        server.addr[1]
      ensure
        server.close if server
      end

      port = find_available_port
      puts "Remote debugger on port #{port}"
      Debugger.start_remote(nil, port)
    end

And in foreman's logs you'll be able to find the debugger's ports:

    $ foreman start
    12:48:42 web.1    | started with pid 29916
    12:48:42 worker.1 | started with pid 29921
    12:48:44 web.1    | I, [2012-10-30T12:48:44.810464 #29916]  INFO -- : listening on addr=0.0.0.0:5000 fd=10
    12:48:44 web.1    | I, [2012-10-30T12:48:44.810636 #29916]  INFO -- : Refreshing Gem list
    12:48:47 web.1    | Remote debugger on port 59269
    12:48:48 worker.1 | Remote debugger on port 41301

Now run the debugger using:

    rdebug -c -p [PORT]
Foreman
9,558,576
17
I am trying to deploy a Heroku app. I must be doing something wrong with the Procfile. When I run foreman check I get this error:

    ERROR: no processes defined

I get pretty much the same thing when deploying on Heroku:

    -----> Building runtime environment
    -----> Discovering process types
     !     Push failed: cannot parse Procfile

The Procfile looks like this:

    web: node app.js

What did I miss?

Update: I re-did it all from the start, and it works properly now. I think I may have had an issue with Unix line endings.
Just encounter "Push failed: cannot parse Procfile." on Windows. I can conclude that It IS "Windows-file format" problem, NOT the context of file itself. make sure to create a clean file, maybe use Notepad++ or other advanced editor to check the file type.
Foreman
19,846,342
17
We have rails app that is running some foreman processes with bundle exec foreman start, and have googled a lot of different things, and found that the common suggestion is to set up another background process handler, and export the processes there. So essentially let someone else do foreman's job of managing the processes. My question is how do you simply stop or restart foreman processes, as I don't really want to try to export the processes to another manager. Shouldn't there be a simple: foreman restart Since there is a: foreman start Is there a snippet or some other command that anyone has used to restart these processes? Any help or explanation of the foreman tool would be appreciated.
We used monit to control stopping and starting the foreman processes.
Foreman
18,925,483
16
I have the following Procfile:

    web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
    redis: bundle exec redis-server /usr/local/etc/redis.conf
    worker: bundle exec sidekiq

Running $ foreman start starts up Unicorn, Redis and Sidekiq, but how should I stop them again? Killing Foreman leaves all three up. I can see this using ps:

    $ ps aux | grep redis | grep -v grep
    me 61560   0.0  0.0  2506784   1740 s000  S+    9:36am   0:01.28 redis-server /usr/local/etc/redis.conf
    $ ps aux | grep sidekiq | grep -v grep
    me 61561   0.0  1.0  2683796 173284 s000  S+    9:36am   0:14.18 sidekiq 2.17.0 pathways [0 of 25 busy]
    $ ps aux | grep unicorn | grep -v grep
    me 61616   0.0  0.2  2615284  28312 s000  S+    9:37am   0:00.06 unicorn worker[2] -p 5000 -c ./config/unicorn.rb
    me 61615   0.0  0.2  2615284  27920 s000  S+    9:37am   0:00.06 unicorn worker[1] -p 5000 -c ./config/unicorn.rb
    me 61614   0.0  0.2  2615284  27772 s000  S+    9:37am   0:00.06 unicorn worker[0] -p 5000 -c ./config/unicorn.rb
    me 61559   0.0  1.0  2615284 160988 s000  S+    9:36am   0:09.87 unicorn master -p 5000 -c ./config/unicorn.rb

So obviously I can manually kill each process, but how can I kill them all at once? It doesn't seem like Foreman supports this.
To kill them all with a one-liner:

    $ kill $(ps aux | grep -E 'redis|sidekiq|unicorn' | grep -v grep | awk '{print $2}')
Foreman
20,190,152
14
I know for a fact that Flask, in debug mode, will detect changes to .py source code files and will reload them when new requests come in. I used to see this in my app all the time. Change a little text in an @app.route decorated section in my views.py file, and I could see the changes in the browser upon refresh. But all of a sudden (I can't remember what changed), this doesn't seem to work anymore.

Q: Where am I going wrong?

I am running on an OS X 10.9 system with a venv setup using Python 2.7. I use foreman start in my project root to start it up. The app structure is like this:

    [Project Root]
    +-[app]
    | +- __init__.py
    | +- views.py
    | +- ...some other files...
    +-[venv]
    +- config.py
    +- Procfile
    +- run.py

The files look like this:

    # Procfile
    web: gunicorn --log-level=DEBUG run:app

    # config.py contains some app-specific configuration information.

    # run.py
    from app import app

    if __name__ == "__main__":
        app.run(debug = True, port = 5000)

    # __init__.py
    from flask import Flask
    from flask.ext.login import LoginManager
    from flask.ext.sqlalchemy import SQLAlchemy
    from flask.ext.mail import Mail
    import os

    app = Flask(__name__)
    app.config.from_object('config')
    db = SQLAlchemy(app)

    # mail sending
    mail = Mail(app)

    lm = LoginManager()
    lm.init_app(app)
    lm.session_protection = "strong"

    from app import views, models

    # app/views.py
    @app.route('/start-scep')
    def start_scep():
        startMessage = '''\
    <html>
    <header>
    <style>
    body { margin:40px 40px;font-family:Helvetica;}
    h1 { font-size:40px; }
    p { font-size:30px; }
    a { text-decoration:none; }
    </style>
    </header>
    <p>Some text</p>
    </body>
    </html>\
    '''
        response = make_response(startMessage)
        response.headers['Content-Type'] = "text/html"
        print response.headers
        return response
The issue here, as stated in other answers, is that it looks like you moved from python run.py to foreman start, or you changed your Procfile from

    # Procfile
    web: python run.py

to

    # Procfile
    web: gunicorn --log-level=DEBUG run:app

When you run foreman start, it simply runs the commands that you've specified in the Procfile. (I'm going to guess you're working with Heroku, but even if not, this is nice because it will mimic what's going to run on your server/Heroku dyno/whatever.) So now, when you run gunicorn --log-level=DEBUG run:app (via foreman start), you are running your application with gunicorn rather than the built-in web server that comes with Flask.

The run:app argument tells gunicorn to look in run.py for a Flask instance named app, import it, and run it. This is where it gets fun: since run.py is being imported, __name__ == '__main__' is False (see more on that here), and so app.run(debug = True, port = 5000) is never called. This is what you want (at least in a setting that's available publicly), because the web server that's built into Flask and used when app.run() is called has some pretty serious security vulnerabilities.

The --log-level=DEBUG may also be a bit deceiving since it uses the word "DEBUG", but it's only telling gunicorn which logging statements to print and which to ignore (check out the Python docs on logging).

The solution is to run python run.py when running the app locally and working/debugging on it, and only run foreman start when you want to mimic a production environment. Also, since gunicorn only needs to import the app object, you could remove some ambiguity and change your Procfile to

    # Procfile
    web: gunicorn --log-level=DEBUG app:app

You could also look into Flask-Script, which has a built-in command python manage.py runserver that runs the built-in Flask web server in debug mode.
Foreman
23,400,599
14
I am having trouble getting my dynos to run multiple delayed_job worker processes. My Procfile looks like this:

    worker: bundle exec script/delayed_job -n 3 start

and my delayed_job script is the default provided by the gem:

    #!/usr/bin/env ruby

    require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
    require 'delayed/command'
    Delayed::Command.new(ARGV).daemonize

When I try to run this either locally or on a Heroku dyno, it exits silently and I can't tell what is going on.

    foreman start
    16:09:09 worker.1 | started with pid 75417
    16:09:15 worker.1 | exited with code 0
    16:09:15 system   | sending SIGTERM to all processes
    SIGTERM received

Any help with how to debug the issue, or suggestions about other ways to run multiple workers on a single dyno, would be greatly appreciated.
You can use foreman to start multiple processes on the same dyno. First, add foreman to your Gemfile. Then add a worker line to your Procfile:

    worker: bundle exec foreman start -f Procfile.workers

Create a new file called Procfile.workers which contains:

    dj_worker: bundle exec rake jobs:work
    dj_worker: bundle exec rake jobs:work
    dj_worker: bundle exec rake jobs:work

That will start 3 delayed_job workers on your worker dyno.
Foreman
24,792,399
12
binding.pry does not work (console input is not available) if I start the server with the bin/dev command. It only works with the bin/rails s command. I understand it has something to do with foreman and Procfile.dev, but I don't know how. Is this a bug, or is it supposed to be like this?
With bin/dev, the Procfile.dev file is run with foreman. The pry issue is caused by the CSS and JS watchers: these just listen to changes in your CSS and JS files. What you can do is remove the web: unset PORT && bin/rails server command from your Procfile.dev, so it will only have the CSS and JS watchers and look like this: js: yarn build --watch css: yarn build:css --watch Now you'll have to open two terminals, one with bin/rails s and the other with foreman start -f Procfile.dev. This way your pry works in the server terminal as normal and the watchers keep watching as normal.
Foreman
72,532,475
12
I want the foreman gem to use the PORT value provided in my development env file instead of using its own values. My file setup is shown below: A bash script to start foreman: foreman start -e development.env The development.env file content: PORT=3000 The Procfile content: web: bundle exec rails server thin -p $PORT -e $RAILS_ENV $1 The dev server ends up starting on port 5000. I know I can start foreman with -p 3000 to force it to use that port. But that defeats the purpose of the env file. Any suggestions?
I know this is an old post but it took me a while to figure out, so I might as well add a note here. Foreman increments the PORT based on where you define the service in the Procfile. Say our PORT environment variable is set to 3000. In our first Procfile example Puma will run on PORT 3000: web: bundle exec puma -q -p $PORT worker: bundle exec rake jobs:work But in our second Procfile it will run on PORT 3100, as the PORT variable is used on the second line. worker: bundle exec rake jobs:work web: bundle exec puma -q -p $PORT Foreman assigns each process type its own block of 100 ports, in Procfile order, which prevents different processes from trying to take the same port.
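To make the assignment concrete, here is an illustrative layout (assuming PORT=3000 with the first Procfile above, and that you scale web to two instances):
web.1    -> 3000
web.2    -> 3001
worker.1 -> 3100
Each instance within a process type gets the next consecutive port, while each process type starts 100 ports apart.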
Foreman
9,804,184
11
I have the following Rake task: namespace :foreman do task :dev do `foreman start -f Procfile.dev` end end desc "Run Foreman using Procfile.dev" task :foreman => 'foreman:dev' The foreman command works fine from the shell; however, when I run rake foreman I get the following error: /Users/me/.gem/ruby/2.0.0/gems/bundler-1.5.2/lib/bundler/rubygems_integration.rb:240:in `block in replace_gem': foreman is not part of the bundle. Add it to Gemfile. (Gem::LoadError) from /Users/me/.gem/ruby/2.0.0/bin/foreman:22:in `<main>' Foreman specifically states: Ruby users should take care not to install foreman in their project's Gemfile So how can I get this task to run?
If you must make it work via rake, try changing the shell-out via backtick to use a hard-coded path to the system-wide foreman binary: `/global/path/to/foreman start -f Procfile.dev` You just need to use 'which' or 'locate' or a similar tool to determine the path that works outside your bundler context. If you are using rbenv, then this might be sufficient: $ rbenv which rake /home/name/.rbenv/versions/1.9.3-p448/bin/rake I hope that helps you move forward.
Foreman
27,189,450
11
Is there a way to download and install heroku toolbelt components individually, or at least without the bundled git? Heroku Toolbelt comes with git bundled in. Last time I downloaded it and installed it, it overwrote my existing git installation. Heroku Toolbelt bundles an older version of git and I require at least 1.7.10. Is there a way to just install heroku and foreman? It seems a little weird that there isn't such an option, considering most Heroku users are developers likely to have git already.
It's just Foreman, Git, and the Heroku CLI client. If you already have Git and Foreman, you can just install the CLI from the command line, wget -qO- https://toolbelt.heroku.com/install.sh | sh The Windows installer offers the same options.
Foreman
12,322,473
10
We are trying to install a couple of Python packages without internet access. For example: python-keystoneclient. For that we have the packages downloaded from https://pypi.python.org/pypi/python-keystoneclient/1.7.1 and kept on the server. However, while installing the tar.gz and .whl packages, the installation looks for the dependent packages to be installed first. Since there is no internet connection on the server, the installation fails. For example, for python-keystoneclient we have the following dependent packages: stevedore (>=1.5.0) six (>=1.9.0) requests (>=2.5.2) PrettyTable (<0.8,>=0.7) oslo.utils (>=2.0.0) oslo.serialization (>=1.4.0) oslo.i18n (>=1.5.0) oslo.config (>=2.3.0) netaddr (!=0.7.16,>=0.7.12) debtcollector (>=0.3.0) iso8601 (>=0.1.9) Babel (>=1.3) argparse pbr (<2.0,>=1.6) When I try to install the packages one by one from the above list, each one again looks for nested dependencies. Is there any way to list ALL the dependent packages needed to install a Python module like python-keystoneclient?
This is how I handle this case: On the machine where I have access to Internet: mkdir keystone-deps pip download python-keystoneclient -d "/home/aviuser/keystone-deps" tar cvfz keystone-deps.tgz keystone-deps Then move the tar file to the destination machine that does not have Internet access and perform the following: tar xvfz keystone-deps.tgz cd keystone-deps pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index You may need to add --no-deps to the command as follows: pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index --no-deps
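If you are working from a requirements file rather than a single package, the same approach should carry over (requirements.txt and the deps directory here are just placeholders):
pip download -r requirements.txt -d ./deps
tar cvfz deps.tgz deps
Then, on the machine without Internet access:
tar xvfz deps.tgz
pip install --no-index --find-links=./deps -r requirements.txt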
OpenStack
36,725,843
98
Are there any differences between Docker images and Virtual Machine images? Apart from the image formats, I couldn't find any info on this anywhere. Please comment on things like image size, instance creation time, capture time, etc. Thanks!
These are some differences between a Docker image and a VM image which I could list out: 1. Snapshot process is faster in Docker than VMs We generally start with a base image, and then make our changes, and commit those changes using docker, and it creates an image. This image contains only the differences from the base. When we want to run our image, we also need the base, and it layers our image on top of the base using a layered file system. The file system merges the different layers together and we get what we want, and we just need to run it. Since docker typically builds on top of ready-made images from a registry, we rarely have to "snapshot" the whole OS ourselves. This ability of Docker to snapshot the OS into a common image also makes it easy to deploy on other docker hosts. 2. Startup time is less for Docker than VMs A virtual machine usually takes minutes to start, but containers take seconds, and sometimes even less than a second. 3. Docker images have more portability Docker images are composed of layers. When we pull or transfer an image, only the layers we don't already have in cache are retrieved. That means that if we use multiple images based on the same base Operating System, the base layer is created or retrieved only once. VM images don't have this flexibility. 4. Docker provides versioning of images We can use the docker commit command. We can specify two flags: -m and -a. The -m flag allows us to specify a commit message, much like we would with a commit on a version control system: $ sudo docker commit -m "Added json gem" -a "Kate Smith" 0b2616b0e5a8 ouruser/sinatra:v2 4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c 5. Docker images do not have states In Docker terminology, a read-only layer is called an image. An image never changes. Since Docker uses a Union File System, the processes think the whole file system is mounted read-write. But all the changes go to the top-most writeable layer, and underneath, the original file in the read-only image is unchanged. Since images don't change, images do not have state. 6. VMs are hardware-centric and docker containers are application-centric Let's say we have a container image that is 1GB in size. If we wanted to use a full VM, we would need 1GB times the number of VMs we want. With Docker containers we can share the bulk of that 1GB, and if we have 1000 containers we still might only need a little over 1GB of space for the containers' OS, assuming they are all running the same OS image. 7. Supported image formats Docker images: bare. The image does not have a container or metadata envelope. ovf. The OVF container format. aki. An Amazon kernel image. ari. An Amazon ramdisk image. ami. An Amazon machine image. VM images: raw. An unstructured disk image format; if you have a file without an extension it is possibly a raw format vhd. The VHD disk format, a common disk format used by virtual machine monitors from VMware, Xen, Microsoft, VirtualBox, and others vmdk. Common disk format supported by many common virtual machine monitors vdi. Supported by VirtualBox virtual machine monitor and the QEMU emulator iso. An archive format for the data contents of an optical disc, such as CD-ROM. qcow2. Supported by the QEMU emulator that can expand dynamically and supports Copy on Write aki. An Amazon kernel image. ari. An Amazon ramdisk image. ami. An Amazon machine image.
OpenStack
29,096,967
28
I'm really trying to understand what goes on under the hood of Keystone regarding the relationships among endpoints, regions, tenants, services, users and roles. I've tried to find the related documents but, sadly, failed. Could anybody give any pointers or explanations?
Keystone is the identity management service for OpenStack. Essentially its role is to grant tokens to users, be they people, services, or anything at all. If you make an API query anywhere in OpenStack, Keystone's API is how it is determined whether you are allowed to make that API query. Let's work our way up from the ground. Users. Users in Keystone today are generally people. There isn't enough fine grained ACL support at this moment to really call many of the users in OpenStack a 'service' account in a traditional sense. But there is a service account that is used as a backhaul connection to the Keystone API as part of the OpenStack infrastructure itself. We'll avoid delving into that anomalous user. When a user authenticates to Keystone (you hit the OS_AUTH_URL to talk to Keystone, usually port 5000 of the Keystone API box), the user says, "Hey, I am user X, I have password Y, and I belong to tenant Z". X can be a username or user id (the unique UUID of the user). Y is a password, but you can authenticate with a token as well. Z is a tenant name or tenant id (the unique UUID of the tenant). In past Keystone APIs you didn't NEED to specify a tenant name, but your token wouldn't be very useful if you didn't, as the token wouldn't be associated with your tenant and you would then be denied any ACLs on that tenant. So... a user is a fairly obvious thing. A password is a fairly obvious thing. But what's a tenant? Well, a tenant is also known as a project. In fact, there have been repeated attempts to make the name be either tenant or project, but as a result of an inability to stick to just one term they both mean the same thing. As far as the API is concerned a project IS a tenant. So if you log into Horizon you will see a drop down for your projects. Each project corresponds to a tenant id. Your tokens are associated with a specific tenant id as well. So you may need several tokens for a user if you intend to work on several tenants the user is attached to. Now, say you add a user to the tenant id of admin. Does that user get admin privileges? The answer is no. That's where roles come into play. While the user in the admin tenant may have access to admin virtual machines and quotas for spinning up virtual machines, that user wouldn't be able to do things like query Keystone for a user list. But if you add an admin role to that user, they will be endowed with the ACL rights to act as an admin in the Keystone API, and other APIs. So think of a tenant as a sort of resource group, and roles as an ACL set. Regions are more like ways to geographically group physical resources in the OpenStack infrastructure environment. Say you have two segmented data centers. You might put one in region A of your OpenStack environment and another in region B. Regions, in terms of their usefulness, are quickly evolving, especially with the introduction of cells and domains in more recent OpenStack releases. You probably don't need to be a master of this knowledge unless you intend to be architecting large clouds. Keystone provides one last useful thing: the catalog. The Keystone catalog is kind of like the phone book for the OpenStack APIs. Whenever you use a command line client, like when you might call nova list to list your instances, nova first authenticates to Keystone and gets you a token to use the API, but it also immediately asks the Keystone catalog for a list of API endpoints. For Keystone, Cinder, Nova, Glance, Swift... etc.
Nova will really only use the nova-api endpoint, though depending on your query you may use the Keystone administrative API endpoint... we'll get back to that. But essentially the catalog is a canonical source of information for where APIs are in the world. That way you only ever need to tell a client where the public API endpoint of Keystone is, and it can figure out the rest from the catalog. Now, I've made reference to the public API and the administrative API for Keystone. Yep, Keystone has two APIs... sort of. It runs an API on port 5000 and another one up in the 35000 range (35357 by default). The 5000 is the public port. This is where you do things like find the catalog and ask for a token so you can talk to other APIs. It's very simple, and somewhat hardened. The administrative API would be used for things like changing a user's password, or adding a new role to a user. Pretty straightforward?
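To make the authentication flow described above concrete, here is a rough sketch of a v2.0 token request (the host name and credentials are placeholders, and the exact request body differs for the v3 API):
curl -s -X POST http://keystone.example.com:5000/v2.0/tokens -H 'Content-Type: application/json' -d '{"auth": {"tenantName": "Z", "passwordCredentials": {"username": "X", "password": "Y"}}}'
The response contains both the token and the service catalog discussed above.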
OpenStack
19,004,503
20
I had to install OpenStack using the devstack infrastructure for experiments with Open vSwitch, and found this in the logs: /usr/lib/python2.7/site-packages/setuptools/dist.py:298: UserWarning: The version specified ('2014.2.2.dev5.gb329598') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details. I googled and found PEP440, but I wonder how serious this warning is.
Each Python package can specify its own version. Among other things, PEP440 says that a version specification should be stored in the __version__ attribute of the module, that it should be a string, and that it should consist of a major version number, minor version number and build number separated by dots (e.g. '2.7.8'), give or take a couple of other optional variations. In one of the packages you are installing, the developers appear to have broken these recommendations by using the suffix '.gb329598'. The warning says that this may confuse certain package managers (setuptools and friends) in some circumstances. It seems PEP440 does allow arbitrary "local version labels" to be appended to a version specifier, but these must be affixed with a '+', not a '.'.
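If you want to check a version string against PEP 440 yourself, one quick way (assuming the packaging library is installed, e.g. via pip install packaging) is:
from packaging.version import Version, InvalidVersion

for candidate in ['2014.2.2.dev5', '2014.2.2.dev5+gb329598', '2014.2.2.dev5.gb329598']:
    try:
        Version(candidate)
        print(candidate + ' is valid')
    except InvalidVersion:
        print(candidate + ' is invalid')
The first two are accepted (the second uses a proper '+' local version label); the third, with the '.' suffix from your warning, is rejected.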
OpenStack
27,493,792
20
I'm having a problem with Python generators while working with the Openstack Swift client library. The problem at hand is that I am trying to retrieve a large string of data from a specific url (about 7MB), chunk the string into smaller bits, and send a generator class back, with each iteration holding a chunked bit of the string. in the test suite, this is just a string that's sent to a monkeypatched class of the swift client for processing. The code in the monkeypatched class looks like this: def monkeypatch_class(name, bases, namespace): '''Guido's monkeypatch metaclass.''' assert len(bases) == 1, "Exactly one base class required" base = bases[0] for name, value in namespace.iteritems(): if name != "__metaclass__": setattr(base, name, value) return base And in the test suite: from swiftclient import client import StringIO import utils class Connection(client.Connection): __metaclass__ = monkeypatch_class def get_object(self, path, obj, resp_chunk_size=None, ...): contents = None headers = {} # retrieve content from path and store it in 'contents' ... if resp_chunk_size is not None: # stream the string into chunks def _object_body(): stream = StringIO.StringIO(contents) buf = stream.read(resp_chunk_size) while buf: yield buf buf = stream.read(resp_chunk_size) contents = _object_body() return headers, contents After returning the generator object, it was called by a stream function in the storage class: class SwiftStorage(Storage): def get_content(self, path, chunk_size=None): path = self._init_path(path) try: _, obj = self._connection.get_object( self._container, path, resp_chunk_size=chunk_size) return obj except Exception: raise IOError("Could not get content: {}".format(path)) def stream_read(self, path): try: return self.get_content(path, chunk_size=self.buffer_size) except Exception: raise OSError( "Could not read content from stream: {}".format(path)) And finally, in my test suite: def test_stream(self): filename = self.gen_random_string() # test 7MB content = self.gen_random_string(7 * 1024 * 1024) self._storage.stream_write(filename, io) io.close() # test read / write data = '' for buf in self._storage.stream_read(filename): data += buf self.assertEqual(content, data, "stream read failed. output: {}".format(data)) The output ends up with this: ====================================================================== FAIL: test_stream (test_swift_storage.TestSwiftStorage) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/bacongobbler/git/github.com/bacongobbler/docker-registry/test/test_local_storage.py", line 46, in test_stream "stream read failed. output: {}".format(data)) AssertionError: stream read failed. output: <generator object _object_body at 0x2a6bd20> I tried isolating this with a simple python script that follows the same flow as the code above, which passed without issues: def gen_num(): def _object_body(): for i in range(10000000): yield i return _object_body() def get_num(): return gen_num() def stream_read(): return get_num() def main(): num = 0 for i in stream_read(): num += i print num if __name__ == '__main__': main() Any help with this issue is greatly appreciated :)
In your get_object method, you're assigning the return value of _object_body() to the contents variable. However, that variable is also the one that holds your actual data, and it's used early on in _object_body. The problem is that _object_body is a generator function (it uses yield). Therefore, when you call it, it produces a generator object, but the code of the function doesn't start running until you iterate over that generator. Which means that when the function's code actually starts running (the for loop in _test_stream), it's long after you've reassigned contents = _object_body(). Your stream = StringIO(contents) therefore creates a StringIO object containing the generator object (hence your error message), not the data. Here's a minimal reproduction case that illustrates the problem: def foo(): contents = "Hello!" def bar(): print contents yield 1 # Only create the generator. This line runs none of the code in bar. contents = bar() print "About to start running..." for i in contents: # Now we run the code in bar, but contents is now bound to # the generator object. So this doesn't print "Hello!" pass
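One minimal way to avoid the problem in that sketch, keeping the same structure, is simply to bind the generator to a different name so the closure keeps seeing the original data (this mirrors what you could do in get_object by not reusing contents):
def foo():
    contents = "Hello!"
    def bar():
        print contents   # still prints the string, because contents was never rebound
        yield 1
    body = bar()         # bind the generator to a new name instead of reusing contents
    print "About to start running..."
    for i in body:
        pass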
OpenStack
20,429,971
19
I am searching for options that enable dynamic cloud-based NVIDIA GPU virtualization similar to the way AWS assigns GPUs for Cluster GPU Instances. My project is working on standing up an internal cloud. One requirement is the ability to allocate GPUs to virtual-machines/instances for server-side CUDA processing. USC appears to be working on OpenStack enhancements to support this but it isn't ready yet. This would be exactly what I am looking for if it were fully functional in OpenStack. NVIDIA VGX seems to only support allocation of GPUs to USMs, which is strictly remote-desktop GPU virtualization. If I am wrong, and VGX does enable server-side CUDA computing from virtual-machines/instances then please let me know.
"dynamic cloud-based NVIDIA GPU virtualization similar to the way AWS assigns GPUs for Cluster GPU Instances." AWS does not really allocate GPUs dynamically: Each GPU Cluster Compute has 2 fixed GPUs. All other servers (including the regular Cluster Compute) don't have any GPUs. I.e. they don't have an API where you can say "GPU or not", it's fixed to the box type, which uses fixed hardware. The pass-thru mode on Xen was made specifically for your use case: Passing hardware on thru from the Host to the Guest. It's not 'dynamic' by default, but you could write some code that chooses one of the guests to get each card on the host.
OpenStack
14,505,941
15
Does devstack completely install openstack? I read somewhere that devStack is not and has never been intended to be a general OpenStack installer. So what does devstack actually install? Is there any other scripted method available to completely install openstack(grizzly release) or I need to follow the manual installation steps given on openstack website?
Devstack does completely install OpenStack from git, for lesser values of "completely" anyway. Devstack is the version of OpenStack used in Jenkins gate testing by developers committing code to the OpenStack project. Devstack, as the name suggests, is specifically for developing for OpenStack; as such its existence is ephemeral. In short, after running stack.sh the resulting (probably) functioning OpenStack is set up... but upon reboot it will not come back up. There are no upstart, systemd or init.d scripts for restarting services. There is no high availability, no backups, no configuration management. And following the latest git releases in the development branch of OpenStack can be a great way to discover just how unstable OpenStack is before a feature freeze. There are several Vagrant recipes in the world for deploying OpenStack, and openstack-puppet is a Puppet recipe for deploying OpenStack. Chef also maintains an OpenStack recipe. Grizzly is a bit old now; Havana is the current stable release. https://github.com/stackforge/puppet-openstack http://docs.opscode.com/openstack.html http://cloudarchitectmusings.com/2013/12/01/deploy-openstack-havana-on-your-laptop-using-vagrant-and-chef/ And Ubuntu even maintains a system called MAAS and Juju for deploying OpenStack super quickly on their OS. https://help.ubuntu.com/community/UbuntuCloudInfrastructure http://www.youtube.com/watch?v=mspwQfoYQks So there are lots of ways to install OpenStack. However, most folks pushing a production cloud use some form of configuration management system; that way they can deploy compute nodes automatically and recover systems quickly. Also check out OpenStack on OpenStack: https://wiki.openstack.org/wiki/TripleO
OpenStack
21,729,860
14
My question is similar to this git hub post: https://github.com/hashicorp/terraform/issues/745 It is also related to another stack exchange post of mine: Terraform stalls while trying to get IP addresses of multiple instances? I am trying to bootstrap several servers and there are several commands I need to run on my instances that require the IP addresses of all the other instances. However I cannot access the variables that hold the IP addresses of my newly created instances until they are created. So when I try to run a provisioner "remote-exec" block like this: provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl", "echo ${openstack_compute_instance_v2.consul.0.network.0.fixed_ip_v4}", "echo ${openstack_compute_instance_v2.consul.1.network.1.fixed_ip_v4}", "echo ${openstack_compute_instance_v2.consul.2.network.2.fixed_ip_v4}" ] } Nothing happens because all the instances are waiting for all the other instances to finish being created and so nothing is created in the first place. So I need a way for my resources to be created and then run my provisioner "remote-exec" block commands after they are created and terraform can access the IP addresses of all my instances.
The solution is to create a resource "null_resource" "nameYouWant" { } and then run your commands inside that. They will run after the initial resources are created: resource "aws_instance" "consul" { count = 3 ami = "ami-ce5a9fa3" instance_type = "t2.micro" key_name = "ansible_aws" tags { Name = "consul" } } resource "null_resource" "configure-consul-ips" { count = 3 connection { user = "ubuntu" private_key="${file("/home/ubuntu/.ssh/id_rsa")}" agent = true timeout = "3m" } provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl", "sudo echo '${join("\n", aws_instance.consul.*.private_ip)}' > /home/ubuntu/test.txt" ] } } Also see the answer here: Terraform stalls while trying to get IP addresses of multiple instances? Thank you so much @ydaetskcor for the answer
OpenStack
37,865,979
12
I have no experience in OpenStack and would appreciate anyone who can help and guide me with this issue. I'm installing OpenStack in a virtual environment (Ubuntu 12.04) and this came out: git clone git://git.openstack.org/openstack/requirements.git /opt/stack/requirements Cloning into '/opt/stack/requirements'... fatal: unable to connect to git.openstack.org: git.openstack.org[0: 192.237.223.224]: errno=Connection refused git.openstack.org[1: 2001:4800:7813:516:3bc3:d7f6:ff04:aacb]: errno=Network is unreachable
I had the same problem, the git protocol is blocked in my testing environment. The solution is to modify the sourcerc file in the devstack installation folder to use https instead of git. You have to look for that line and change it. This file is also known as the local.conf file. Default setting in sourcerc file: GIT_BASE=${GIT_BASE:-git://git.openstack.org} Modified setting that should bypass git restrictions: GIT_BASE=${GIT_BASE:-https://git.openstack.org} Simply add this modified line to the local/localrc section of your local.conf file in the DevStack directory and it should use the HTTPS protocol instead of the Git protocol! More info on the local.conf file here - http://devstack.org/configuration.html
OpenStack
20,390,267
11
I am looking into the Python shade module in order to automate some tasks using our OpenStack installation. This page instructs: Create a configuration file to store your user name, password, project_name in ~/.config/openstack/clouds.yml. I had a close look, but I couldn't find any information on how to provide credentials in a different way, for example as parameters to some objects that I could create within Python code. Long story short: is that even possible? Or does this requirement immediately force me "off shade" and onto the OpenStack Python SDK instead?
I am not a Python expert, but after some searching into how other OpenStack client modules do it, maybe the following could work (example code from your link, with just a bit of enhancement): from shade import * auth_data = { # URL to the Keystone API endpoint. 'auth_url': 'url', # User credentials. 'user_domain_name': ... } to later do this: cloud = openstack_cloud(cloud='your-cloud', **auth_data)
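For completeness, a slightly fuller sketch might look like the following; every value is a placeholder and the exact keyword names depend on how your cloud's Keystone is configured, so treat this as an assumption to adapt rather than a recipe:
import shade

cloud = shade.openstack_cloud(
    auth_url='https://keystone.example.com:5000/v3',   # placeholder
    username='myuser',                                 # placeholder
    password='secret',                                 # placeholder
    project_name='myproject',                          # placeholder
    user_domain_name='Default',
    project_domain_name='Default',
)

for server in cloud.list_servers():
    print(server.name)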
OpenStack
42,222,387
11
I am dealing with creating an AWS API Gateway. I am trying to create a CloudWatch Log group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem with the REST API creation. My issue is in converting restApi.id, which is of type pulumi.Output, to string. I have tried these 2 versions which are proposed in their PR#2496 const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`); const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}` Here is the code where it is used: const cloudWatchLogGroup = new aws.cloudwatch.LogGroup( `API-Gateway-Execution-Logs_${restApiId}/${stageName}`, {}, ); stageName is just a string. I have also tried to apply again like const restApiIdStrign = restApiId.apply((v) => v); I always get this error from pulumi up: aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported. Please help me convert Output to string.
@Cameron answered the naming question; I want to answer your question in the title. It's not possible to convert an Output<string> to string, or any Output<T> to T. Output<T> is a container for a future value T which may not be resolved even after the program execution is over. For example, your restApiId is generated by AWS at deployment time, so if you run your program in preview, there's no value for restApiId yet. Output<T> is like a Promise<T> which will eventually be resolved, potentially after some resources are created in the cloud. Therefore, the only operations with Output<T> are: Convert it to another Output<U> with apply(f), where f: T -> U Assign it to an Input<T> to pass it to another resource constructor Export it from the stack Any value manipulation has to happen within an apply call.
Pulumi
62,561,660
18
I don't see any options in the documentation on how to delete imported resources from my stack. If I try to remove the resource's reference from my code I get the following error when running pulumi up: error: Preview failed: refusing to delete protected resource 'urn:pulumi:dev::my-cloud-infrastructure::aws:iam/instanceProfile:InstanceProfile::EC2CodeDeploy'
As answered in the Pulumi Slack community channel, one can use the command: pulumi state delete <urn> This will remove the reference from your state file but not from aws. Also, if the resource is protected you'll first have to unprotect it or run the above command with the flag --force.
Pulumi
66,162,196
16
I'm building a macOS app via Xcode. Every time I build, I get the log output: Metal API Validation Enabled To my knowledge my app is not using any Metal features. I'm not using hardware-accelerated 3D graphics or shaders or video game features or anything like that. Why is Xcode printing Metal API log output? Is Metal being used in my app? Can I or should I disable it? How can I disable this "Metal API Validation Enabled" log message?
Toggle Metal API Validation via your Xcode Scheme: Scheme > Edit Scheme... > Run > Diagnostics > Metal API Validation. It's a checkbox, so the possible options are Enabled or Disabled. Disabling sets the key enableGPUValidationMode = 1 in your .xcscheme file. After disabling, Xcode no longer logs the "Metal API Validation Enabled" log message. Note: In Xcode 11 and below, the option appears in the "Options" tab of the Scheme Editor (instead of the "Diagnostics" tab).
Metal³
60,645,401
40
Task I would like to capture a real-world texture and apply it to a reconstructed mesh produced with a help of LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. A texture must be made from fixed Point-of-View, for example, from center of a room. However, it would be an ideal solution if we could apply an environmentTexturing data, collected as a cube-map texture in a scene. Look at 3D Scanner App. It's a reference app allowing us to export a model with its texture. I need to capture a texture with one iteration. I do not need to update it in a realtime. I realize that changing PoV leads to a wrong texture's perception, in other words, distortion of a texture. Also I realize that there's a dynamic tesselation in RealityKit and there's an automatic texture mipmapping (texture's resolution depends on a distance it captured from). import RealityKit import ARKit import Metal import ModelIO class ViewController: UIViewController, ARSessionDelegate { @IBOutlet var arView: ARView! override func viewDidLoad() { super.viewDidLoad() arView.session.delegate = self arView.debugOptions.insert(.showSceneUnderstanding) let config = ARWorldTrackingConfiguration() config.sceneReconstruction = .mesh config.environmentTexturing = .automatic arView.session.run(config) } } Question How to capture and apply a real world texture to a reconstructed 3D mesh?
Object Reconstruction On 10 October 2023, Apple released the iOS Reality Composer 1.6 app, which is capable of capturing a real world model's mesh with texture in real time using the LiDAR scanning process. But at the moment there's still no native programmatic API for that (we are all looking forward to it). Also, there's a methodology that allows developers to create textured models from a series of shots. Photogrammetry Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. At the output we get a USDZ model with a UV-mapped hi-res texture. To implement the Object Capture API you need macOS 12+ and Xcode 13+. To create a USDZ model from a series of shots, submit all taken images to RealityKit's PhotogrammetrySession. Here's a code snippet that sheds some light on this process: import RealityKit import Combine let pathToImages = URL(fileURLWithPath: "/path/to/my/images/") let url = URL(fileURLWithPath: "model.usdz") var request = PhotogrammetrySession.Request.modelFile(url: url, detail: .medium) var configuration = PhotogrammetrySession.Configuration() configuration.sampleOverlap = .normal configuration.sampleOrdering = .unordered configuration.featureSensitivity = .normal configuration.isObjectMaskingEnabled = false guard let session = try? PhotogrammetrySession(input: pathToImages, configuration: configuration) else { return 
} var subscriptions = Set<AnyCancellable>() session.output.receive(on: DispatchQueue.global()) .sink(receiveCompletion: { _ in // errors }, receiveValue: { _ in // output }) .store(in: &subscriptions) session.process(requests: [request]) You can reconstruct USD and OBJ models with their corresponding UV-mapped textures.
Metal³
63,793,918
32
I want to set a MTLTexture object as the environment map of a scene, as it seems to be possible according to the documentation. I can set the environment map to be a UIImage with the following code: let roomImage = UIImage(named: "room") scene.lightingEnvironment.contents = roomImage This works and I see the reflection of the image on my metallic objects. I tried converting the image to a MTLTexture and setting it as the environment map with the following code: let roomImage = UIImage(named: "room") let loader = MTKTextureLoader(device: MTLCreateSystemDefaultDevice()!) let envMap = try? loader.newTexture(cgImage: (roomImage?.cgImage)!, options: nil) scene.lightingEnvironment.contents = envMap However this does not work and I end up with a blank environment map with no reflection on my objects. Also, instead of setting the options as nil, I tried setting the MTKTextureLoader.Option.textureUsage key with every possible value it can get, but that didn't work either. Edit: You can have a look at the example project in this repo and use it to reproduce this use case.
Lighting SCN Environment with an MTK texture Using Xcode 13.3.1 on macOS 12.3.1 for iOS 15.4 app. The trick is, the environment lighting requires a cube texture, not a flat image. Create 6 square images for MetalKit cube texture in Xcode Assets folder create Cube Texture Set place textures to their corresponding slots mirror images horizontally and vertically, if needed Paste the code: import ARKit import MetalKit class ViewController: UIViewController { @IBOutlet var sceneView: ARSCNView! override func viewDidLoad() { super.viewDidLoad() let scene = SCNScene() let imageName = "CubeTextureSet" let textureLoader = MTKTextureLoader(device: sceneView.device!) let environmentMap = try! textureLoader.newTexture(name: imageName, scaleFactor: 2, bundle: .main, options: nil) let daeScene = SCNScene(named: "art.scnassets/testCube.dae")! let model = daeScene.rootNode.childNode(withName: "polyCube", recursively: true)! scene.lightingEnvironment.contents = environmentMap scene.lightingEnvironment.intensity = 2.5 scene.background.contents = environmentMap sceneView.scene = scene sceneView.allowsCameraControl = true scene.rootNode.addChildNode(model) } } Apply metallic materials to models. Now MTL environment lighting is On. If you need a procedural skybox texture – use MDLSkyCubeTexture class. Also, this post may be useful for you.
Metal³
47,739,214
31
I'm creating a MTLTexture from CVImageBuffers (from camera and players) using CVMetalTextureCacheCreateTextureFromImage to get a CVMetalTexture and then CVMetalTextureGetTexture to get the MTLTexture. The problem I'm seeing is that when I later render the texture using Metal, I occasionally see video frames rendered out of order (visually it stutters back and forth in time), presumably because CoreVideo is modifying the underlying CVImageBuffer storage and the MTLTexture is just pointing there. Is there any way to make CoreVideo not touch that buffer and use another one from its pool until I release the MTLTexture object? My current workaround is blitting the texture using a MTLBlitCommandEncoder but since I just need to hold on to the texture for ~30 milliseconds that seems unnecessary.
I recently ran into this exact same issue. The problem is that the MTLTexture is not valid unless its owning CVMetalTextureRef is still alive. You must keep a reference to the CVMetalTextureRef the entire time you're using the MTLTexture (all the way until the end of the current rendering cycle).
Metal³
43,550,769
21
I am trying to check out the new samples for the new Metal API for iOS. When I download the code and open it in the Xcode 6 beta, I get the following error message: QuartzCore/CAMetalLayer.h file not found Do I need to add some other files, or am I missing something else? The Metal API should be available in OS X 10.9.3. Is there any need to upgrade to the Yosemite 10.10 beta to run these examples?
The reason behind this error is that Metal only works on a device with an A7 or later chip; the simulator will not work for this.
Metal³
24,145,101
20
What is the most efficient way to capture frames from a MTKView? If possible, I would like to save a .mov file from the frames in realtime. Is it possible to render into an AVPlayer frame or something? It is currently drawing with this code (based on @warrenm PerformanceShaders project): func draw(in view: MTKView) { _ = inflightSemaphore.wait(timeout: DispatchTime.distantFuture) updateBuffers() let commandBuffer = commandQueue.makeCommandBuffer() commandBuffer.addCompletedHandler{ [weak self] commandBuffer in if let strongSelf = self { strongSelf.inflightSemaphore.signal() } } // Dispatch the current kernel to perform the selected image filter selectedKernel.encode(commandBuffer: commandBuffer, sourceTexture: kernelSourceTexture!, destinationTexture: kernelDestTexture!) if let renderPassDescriptor = view.currentRenderPassDescriptor, let currentDrawable = view.currentDrawable { let clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1) renderPassDescriptor.colorAttachments[0].clearColor = clearColor let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) renderEncoder.label = "Main pass" renderEncoder.pushDebugGroup("Draw textured square") renderEncoder.setFrontFacing(.counterClockwise) renderEncoder.setCullMode(.back) renderEncoder.setRenderPipelineState(pipelineState) renderEncoder.setVertexBuffer(vertexBuffer, offset: MBEVertexDataSize * bufferIndex, at: 0) renderEncoder.setVertexBuffer(uniformBuffer, offset: MBEUniformDataSize * bufferIndex , at: 1) renderEncoder.setFragmentTexture(kernelDestTexture, at: 0) renderEncoder.setFragmentSamplerState(sampler, at: 0) renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4) renderEncoder.popDebugGroup() renderEncoder.endEncoding() commandBuffer.present(currentDrawable) } bufferIndex = (bufferIndex + 1) % MBEMaxInflightBuffers commandBuffer.commit() }
Here's a small class that performs the essential functions of writing out a movie file that captures the contents of a Metal view: class MetalVideoRecorder { var isRecording = false var recordingStartTime = TimeInterval(0) private var assetWriter: AVAssetWriter private var assetWriterVideoInput: AVAssetWriterInput private var assetWriterPixelBufferInput: AVAssetWriterInputPixelBufferAdaptor init?(outputURL url: URL, size: CGSize) { do { assetWriter = try AVAssetWriter(outputURL: url, fileType: .m4v) } catch { return nil } let outputSettings: [String: Any] = [ AVVideoCodecKey : AVVideoCodecType.h264, AVVideoWidthKey : size.width, AVVideoHeightKey : size.height ] assetWriterVideoInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings) assetWriterVideoInput.expectsMediaDataInRealTime = true let sourcePixelBufferAttributes: [String: Any] = [ kCVPixelBufferPixelFormatTypeKey as String : kCVPixelFormatType_32BGRA, kCVPixelBufferWidthKey as String : size.width, kCVPixelBufferHeightKey as String : size.height ] assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterVideoInput, sourcePixelBufferAttributes: sourcePixelBufferAttributes) assetWriter.add(assetWriterVideoInput) } func startRecording() { assetWriter.startWriting() assetWriter.startSession(atSourceTime: .zero) recordingStartTime = CACurrentMediaTime() isRecording = true } func endRecording(_ completionHandler: @escaping () -> ()) { isRecording = false assetWriterVideoInput.markAsFinished() assetWriter.finishWriting(completionHandler: completionHandler) } func writeFrame(forTexture texture: MTLTexture) { if !isRecording { return } while !assetWriterVideoInput.isReadyForMoreMediaData {} guard let pixelBufferPool = assetWriterPixelBufferInput.pixelBufferPool else { print("Pixel buffer asset writer input did not have a pixel buffer pool available; cannot retrieve frame") return } var maybePixelBuffer: CVPixelBuffer? = nil let status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &maybePixelBuffer) if status != kCVReturnSuccess { print("Could not get pixel buffer from asset writer input; dropping frame...") return } guard let pixelBuffer = maybePixelBuffer else { return } CVPixelBufferLockBaseAddress(pixelBuffer, []) let pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer)! // Use the bytes per row value from the pixel buffer since its stride may be rounded up to be 16-byte aligned let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer) let region = MTLRegionMake2D(0, 0, texture.width, texture.height) texture.getBytes(pixelBufferBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0) let frameTime = CACurrentMediaTime() - recordingStartTime let presentationTime = CMTimeMakeWithSeconds(frameTime, preferredTimescale: 240) assetWriterPixelBufferInput.append(pixelBuffer, withPresentationTime: presentationTime) CVPixelBufferUnlockBaseAddress(pixelBuffer, []) } } After initializing one of these and calling startRecording(), you can add a scheduled handler to the command buffer containing your rendering commands and call writeFrame (after you end encoding, but before presenting the drawable or committing the buffer): let texture = currentDrawable.texture commandBuffer.addCompletedHandler { commandBuffer in self.recorder.writeFrame(forTexture: texture) } When you're done recording, just call endRecording, and the video file will be finalized and closed. Caveats: This class assumes the source texture to be of the default format, .bgra8Unorm. 
If it isn't, you'll get crashes or corruption. If necessary, convert the texture with a compute or fragment shader, or use Accelerate. This class also assumes that the texture is the same size as the video frame. If this isn't the case (if the drawable size changes, or your screen autorotates), the output will be corrupted and you may see crashes. Mitigate this by scaling or cropping the source texture as your application requires.
Metal³
43,838,089
20
I've been learning Metal for iOS / OSX, and I began by following a Ray Wenderlich tutorial. This tutorial works fine but it makes no mention of MTLVertexAttributeDescriptors. Now that I'm developing my own app, I'm getting weird glitches and I'm wondering if the fact that I don't use MTLVertexAttributeDescriptors may be related to the problem. What difference do they make? I've been able to make a variety of shaders with varying vertex structures and I never even knew about these things. I know you use them to describe the layout of vertex components for use in a shader. For example a shader might use this structure for vertices, and it would be set up in a vertex descriptor in the function below. typedef struct { float3 position [[attribute(T_VertexAttributePosition)]]; float2 texCoord [[attribute(T_VertexAttributeTexcoord)]]; } Vertex; class func buildMetalVertexDescriptor() -> MTLVertexDescriptor { let mtlVertexDescriptor = MTLVertexDescriptor() mtlVertexDescriptor.attributes[T_VertexAttribute.position.rawValue].format = MTLVertexFormat.float3 mtlVertexDescriptor.attributes[T_VertexAttribute.position.rawValue].offset = 0 mtlVertexDescriptor.attributes[T_VertexAttribute.position.rawValue].bufferIndex = T_BufferIndex.meshPositions.rawValue mtlVertexDescriptor.attributes[T_VertexAttribute.texcoord.rawValue].format = MTLVertexFormat.float2 mtlVertexDescriptor.attributes[T_VertexAttribute.texcoord.rawValue].offset = 0 mtlVertexDescriptor.attributes[T_VertexAttribute.texcoord.rawValue].bufferIndex = T_BufferIndex.meshGenerics.rawValue mtlVertexDescriptor.layouts[T_BufferIndex.meshPositions.rawValue].stride = 12 mtlVertexDescriptor.layouts[T_BufferIndex.meshPositions.rawValue].stepRate = 1 mtlVertexDescriptor.layouts[T_BufferIndex.meshPositions.rawValue].stepFunction = MTLVertexStepFunction.perVertex mtlVertexDescriptor.layouts[T_BufferIndex.meshGenerics.rawValue].stride = 8 mtlVertexDescriptor.layouts[T_BufferIndex.meshGenerics.rawValue].stepRate = 1 mtlVertexDescriptor.layouts[T_BufferIndex.meshGenerics.rawValue].stepFunction = MTLVertexStepFunction.perVertex return mtlVertexDescriptor } But even without the MTLVertexDescriptor setup, the shader can already access the vertex buffer and the position / texCoord components of vertices in the array. Just by setting the vertex buffer, the shader has access to all of the components. So what good does the descriptor do?
There are, of course, multiple ways of doing things. The vertex descriptor is only used for one of them. For example, a vertex function might be declared like this: vertex MyVertexOut vertex_func(device const float3 *positions [[buffer(0)]], device const float2 *texCoords [[buffer(1)]], uint vid [[vertex_id]]) { // use positions[vid] and texCoords[vid] to fill in and return a MyVertexOut structure } This dictates that the vertex attributes be supplied in separate buffers, each of a specific layout. You could also do: struct MyVertexIn { float3 position; float2 texCoord; }; vertex MyVertexOut vertex_func(device const MyVertexIn *vertexes [[buffer(0)]], uint vid [[vertex_id]]) { // use vertexes[vid].position and vertexes[vid].texCoord to fill in and return a MyVertexOut structure } This dictates that the vertex attributes be supplied in a single buffer of structs matching the layout of MyVertexIn. Neither of the above require or make use of the vertex descriptor. It's completely irrelevant. However, you can also do this: struct MyVertexIn { float3 position [[attribute(0)]]; float2 texCoord [[attribute(1)]]; }; vertex MyVertexOut vertex_func(MyVertexIn vertex [[stage_in]]) { // use vertex.position and vertex.texCoord to fill in and return a MyVertexOut structure } Note the use of the attribute(n) and stage_in attributes. This does not dictate how the vertex attributes are supplied. Rather, the vertex descriptor describes a mapping from one or more buffers to the vertex attributes. The mapping can also perform conversions and expansions. For example, the shader code above specifies that the position field is a float3 but the buffers may contain (and be described as containing) half3 values (or various other types) and Metal will do the conversion automatically. The same shader can be used with different vertex descriptors and, thus, different distribution of vertex attributes across buffers. That provides flexibility for different scenarios, some where the vertex attributes are separated out into different buffers (similar to the first example I gave) and others where they're interleaved in the same buffer (similar to the second example). Etc. If you don't need that flexibility and the extra level of abstraction, then you don't need to deal with vertex descriptors. They're there for those who do need them.
Metal³
47,044,663
20
I am trying to create a framework that works with the Metal API (iOS). I am pretty new to this platform and I would like to know how to build the framework to work with .metal files (I am building a static lib, not a dynamic one). Should they be part of the .a file, or resource files in the framework bundle? Or is there another way to do that? Thanks. Update: For those who tackle this - I ended up following warrenm's first suggested option: converting the .metal file into a string and calling newLibraryWithSource:options:error:. Although it is not the best for performance, it allowed me to ship only one framework file, without additional resources to import. That could be useful to whoever is creating a framework that uses Metal, ARKit, etc. with shader files.
There are many ways to provide Metal shaders with a static library, all with different tradeoffs. I'll try to enumerate them here. 1) Transform your .metal files into static strings that are baked into your static library. This is probably the worst option. The idea is that you preprocess your Metal shader code into strings which are included as string literals in your static library. You would then use the newLibraryWithSource:options:error: API (or its asynchronous sibling) to turn the source into an MTLLibrary and retrieve the functions. This requires you to devise a process for doing the .metal-to-string conversion, and you lose the benefit of shader pre-compilation, making the resulting application slower. 2) Ship .metal files alongside your static library and require library users to add them to their app target All things considered, this is a decent option, though it places more of a burden on your users and exposes your Metal shader source (if that's a concern). Code in your static library can use the "default library" (newDefaultLibrary), since the code will be compiled automatically by Xcode into the app's default.metallib, which is embedded in the app bundle as a resource. 3) Ship a .metallib file alongside your static library This is a good middle ground between ease-of-use, performance, and security (since it doesn't expose your shader source, only its IR). Basically, you can create a "Metal Library" target in your project, into which you put your shader code. This will produce a .metallib file, which you can ship along with your static library and have your user embed as a resource in their app target. Your static library can load the .metallib at runtime with the newLibraryWithData:error: or newLibraryWithURL:error: API. Since your shaders will be pre-compiled, creating libraries will be faster, and you'll keep the benefit of compile-time diagnostics.
Metal³
46,742,403
19
I have an image that I generate programmatically and I want to send this image as a texture to a compute shader. The way I generate this image is that I calculate each of the RGBA components as UInt8 values, and combine them into a UInt32 and store it in the buffer of the image. I do this with the following piece of code: guard let cgContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: RGBA32.bitmapInfo) else { print("Unable to create CGContext") return } guard let buffer = cgContext.data else { print("Unable to create textures") return } let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height) let heightFloat = Float(height) let widthFloat = Float(width) for i in 0 ..< height { let latitude = Float(i + 1) / heightFloat for j in 0 ..< width { let longitude = Float(j + 1) / widthFloat let x = UInt8(((sin(longitude * Float.pi * 2) * cos(latitude * Float.pi) + 1) / 2) * 255) let y = UInt8(((sin(longitude * Float.pi * 2) * sin(latitude * Float.pi) + 1) / 2) * 255) let z = UInt8(((cos(latitude * Float.pi) + 1) / 2) * 255) let offset = width * i + j pixelBuffer[offset] = RGBA32(red: x, green: y, blue: z, alpha: 255) } } let coordinateConversionImage = cgContext.makeImage() where RGBA32 is a little struct that does the shifting and creating the UInt32 value. This image turns out fine as I can convert it to UIImage and save it to my photos library. The problem arises when I try to send this image as a texture to a compute shader. Below is my shader code: kernel void updateEnvironmentMap(texture2d<uint, access::read> currentFrameTexture [[texture(0)]], texture2d<uint, access::read> coordinateConversionTexture [[texture(1)]], texture2d<uint, access::write> environmentMap [[texture(2)]] uint2 gid [[thread_position_in_grid]]) { const uint4 pixel = {255, 127, 63, 255}; environmentMap.write(pixel, gid); } The problem with this code is that the type of my textures is uint, which is 32-bits, and I want to generate 32-bit pixels the same way I do on the CPU, by appending 4 8-bit values. However, I can't seem to do that on Metal as there is no byte type that I can just append together and make up a uint32. So, my question is, what is the correct way to handle 2D textures and set 32-bit pixels on a Metal compute shader? Bonus question: Also, I've seen example shader codes with texture2d<float, access::read> as the input texture type. I'm assuming it represents a value between 0.0 and 1.0 but what advantage that does that have over an unsigned int with values between 0 and 255? Edit: To clarify, the output texture of the shader, environmentMap, has the exact same properties (width, height, pixelFormat, etc.) as the input textures. Why I think this is counter intuitive is that we are setting a uint4 as a pixel, which means it's composed of 4 32-bit values, whereas each pixel should be 32-bits. With this current code, {255, 127, 63, 255} has the exact same result as {2550, 127, 63, 255}, meaning the values somehow get clamped between 0-255 before being written to the output texture. But this is extremely counter-intuitive.
There's a bit more magic at play than you seem to be familiar with, so I'll try to elucidate. First of all, by design, there is a loose connection between the storage format of textures in Metal and the type you get when you read/sample. You can have a texture in .bgra8Unorm format that, when sampled through a texture bound as texture2d<float, access::sample> will give you a float4 with its components in RGBA order. The conversion from those packed bytes to the float vector with swizzled components follows well-documented conversion rules as specified in the Metal Shading Language Specification. It is also the case that, when writing to a texture whose storage is (for example) 8 bits per component, values will be clamped to fit in the underlying storage format. This is further affected by whether or not the texture is a norm type: if the format contains norm, the values are interpreted as if they specified a value between 0 and 1. Otherwise, the values you read are not normalized. An example: if a texture is .bgra8Unorm and a given pixel contains the byte values [0, 64, 128, 255], then when read in a shader that requests float components, you will get [0.5, 0.25, 0, 1.0] when you sample it. By contrast, if the format is .rgba8Uint, you will get [0, 64, 128, 255]. The storage format of the texture has a prevailing effect on how its contents get interpreted upon sampling. I assume that the pixel format of your texture is something like .rgba8Unorm. If that's the case, you can achieve what you want by writing your kernel like this: kernel void updateEnvironmentMap(texture2d<float, access::read> currentFrameTexture [[texture(0)]], texture2d<float, access::read> coordinateConversionTexture [[texture(1)]], texture2d<float, access::write> environmentMap [[texture(2)]] uint2 gid [[thread_position_in_grid]]) { const float4 pixel(255, 127, 63, 255); environmentMap.write(pixel * (1 / 255.0), gid); } By contrast, if your texture has a format of .rgba8Uint, you'll get the same effect by writing it like this: kernel void updateEnvironmentMap(texture2d<float, access::read> currentFrameTexture [[texture(0)]], texture2d<float, access::read> coordinateConversionTexture [[texture(1)]], texture2d<float, access::write> environmentMap [[texture(2)]] uint2 gid [[thread_position_in_grid]]) { const float4 pixel(255, 127, 63, 255); environmentMap.write(pixel, gid); } I understand that this is a toy example, but I hope that with the foregoing information, you can figure out how to correctly store and sample values to achieve what you want.
Metal³
47,738,441
18
import UIKit import Metal import QuartzCore class ViewController: UIViewController { var device: MTLDevice! = nil var metalLayer: CAMetalLayer! = nil override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. device = MTLCreateSystemDefaultDevice() metalLayer = CAMetalLayer() // 1 metalLayer.device = device // 2 metalLayer.pixelFormat = .BGRA8Unorm // 3 metalLayer.framebufferOnly = true // 4 metalLayer.frame = view.layer.frame // 5 view.layer.addSublayer(metalLayer) // 6 } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } } When I have this in my ViewController.swift, I get the error "Use of undeclared type CAMetalLayer" even though I've imported Metal and QuartzCore. How can I get this code to work?
UPDATE: Simulator support is coming this year (2019).
Pre Xcode 11/iOS 13: Metal code doesn't compile on the Simulator. Try compiling for a device.
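If you need the same code base to build for both, one option (a sketch, not from the original answer) is to branch on the simulator at compile time and fall back at runtime when no Metal device exists:

#if targetEnvironment(simulator)
// Pre-Xcode 11 simulators: no Metal, so use a stub / OpenGL ES path here.
#else
if let device = MTLCreateSystemDefaultDevice() {
    // Real device (or a Metal-capable simulator on Xcode 11+): proceed with Metal.
} else {
    // No Metal support at runtime.
}
#endif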
Metal³
32,917,630
17
I tried using Metal in a simple app but when I call the device.newDefaultLibrary() function then I get an error in runtime: /BuildRoot/Library/Caches/com.apple.xbs/Sources/Metal/Metal-56.7/Framework/MTLLibrary.mm:1842: failed assertion `Metal default library not found' Has anyone any idea what cloud be the problem? I followed this tutorial. The code is a little old but with tiny changes it work. Here is my ViewController code: import UIKit import Metal import QuartzCore class ViewController: UIViewController { //11A var device: MTLDevice! = nil //11B var metalLayer: CAMetalLayer! = nil //11C let vertexData:[Float] = [ 0.0, 1.0, 0.0, -1.0, -1.0, 0.0, 1.0, -1.0, 0.0] var vertexBuffer: MTLBuffer! = nil //11F var pipelineState: MTLRenderPipelineState! = nil //11G var commandQueue: MTLCommandQueue! = nil //12A var timer: CADisplayLink! = nil override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. //11A device = MTLCreateSystemDefaultDevice() //11B metalLayer = CAMetalLayer() // 1 metalLayer.device = device // 2 metalLayer.pixelFormat = .BGRA8Unorm // 3 metalLayer.framebufferOnly = true // 4 metalLayer.frame = view.layer.frame // 5 view.layer.addSublayer(metalLayer) // 6 //11C let dataSize = vertexData.count * sizeofValue(vertexData[0]) // 1 vertexBuffer = device.newBufferWithBytes(vertexData, length: dataSize, options: MTLResourceOptions.CPUCacheModeDefaultCache) // 2 //11F // 1 let defaultLibrary = device.newDefaultLibrary() //The error is generating here let fragmentProgram = defaultLibrary!.newFunctionWithName("basic_fragment") let vertexProgram = defaultLibrary!.newFunctionWithName("basic_vertex") // 2 let pipelineStateDescriptor = MTLRenderPipelineDescriptor() pipelineStateDescriptor.vertexFunction = vertexProgram pipelineStateDescriptor.fragmentFunction = fragmentProgram pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm // 3 do { try pipelineState = device.newRenderPipelineStateWithDescriptor(pipelineStateDescriptor) } catch _ { print("Failed to create pipeline state, error") } //11G commandQueue = device.newCommandQueue() //12A timer = CADisplayLink(target: self, selector: Selector("gameloop")) timer.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSDefaultRunLoopMode) } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } //MARK: Custom Methodes //12A func render() { //12C let commandBuffer = commandQueue.commandBuffer() //12B let drawable = metalLayer.nextDrawable() let renderPassDescriptor = MTLRenderPassDescriptor() renderPassDescriptor.colorAttachments[0].texture = drawable!.texture renderPassDescriptor.colorAttachments[0].loadAction = .Clear renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 104.0/255.0, blue: 5.0/255.0, alpha: 1.0) //12D let renderEncoderOpt = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor) renderEncoderOpt.setRenderPipelineState(pipelineState) renderEncoderOpt.setVertexBuffer(vertexBuffer, offset: 0, atIndex: 0) renderEncoderOpt.drawPrimitives(.Triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1) renderEncoderOpt.endEncoding() //12E commandBuffer.presentDrawable(drawable!) commandBuffer.commit() } func gameloop() { autoreleasepool { self.render() } } } I use an iPhone 5s device with iOS 9.3 for testing.
The default library is only included in your app when you have at least one .metal file in your app target's Compile Sources build phase. I assume you've followed the steps of the tutorial where you created the Metal shader source file and added the vertex and fragment functions, so you simply need to use the + icon in the build phases setting to add that file to your compilation phase:
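A defensive way to catch this at runtime (using the modern Swift API names, shown here as a sketch rather than the tutorial's exact code) is to treat a missing default library as a configuration error:

guard let library = device.makeDefaultLibrary(),                      // nil if no .metal file is compiled into the target
      let vertexProgram = library.makeFunction(name: "basic_vertex"),
      let fragmentProgram = library.makeFunction(name: "basic_fragment") else {
    fatalError("Default Metal library or shader functions missing; check that Shaders.metal is listed under Compile Sources")
}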
Metal³
36,204,360
16
I'm doing realtime video processing on iOS at 120 fps and want to first preprocess image on GPU (downsample, convert color, etc. that are not fast enough on CPU) and later postprocess frame on CPU using OpenCV. What's the fastest way to share camera feed between GPU and CPU using Metal? In other words the pipe would look like: CMSampleBufferRef -> MTLTexture or MTLBuffer -> OpenCV Mat I'm converting CMSampleBufferRef -> MTLTexture the following way CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // textureRGBA { size_t width = CVPixelBufferGetWidth(pixelBuffer); size_t height = CVPixelBufferGetHeight(pixelBuffer); MTLPixelFormat pixelFormat = MTLPixelFormatBGRA8Unorm; CVMetalTextureRef texture = NULL; CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &texture); if(status == kCVReturnSuccess) { textureBGRA = CVMetalTextureGetTexture(texture); CFRelease(texture); } } After my metal shader is finised I convert MTLTexture to OpenCV cv::Mat image; ... CGSize imageSize = CGSizeMake(drawable.texture.width, drawable.texture.height); int imageByteCount = int(imageSize.width * imageSize.height * 4); int mbytesPerRow = 4 * int(imageSize.width); MTLRegion region = MTLRegionMake2D(0, 0, int(imageSize.width), int(imageSize.height)); CGSize resSize = CGSizeMake(drawable.texture.width, drawable.texture.height); [drawable.texture getBytes:image.data bytesPerRow:mbytesPerRow fromRegion:region mipmapLevel:0]; Some observations: 1) Unfortunately MTLTexture.getBytes seems expensive (copying data from GPU to CPU?) and takes around 5ms on my iphone 5S which is too much when processing at ~100fps 2) I noticed some people use MTLBuffer instead of MTLTexture with the following method: metalDevice.newBufferWithLength(byteCount, options: .StorageModeShared) (see: Memory write performance - GPU CPU Shared Memory) However CMSampleBufferRef and accompanying CVPixelBufferRef is managed by CoreVideo is guess.
The fastest way to do this is to use a MTLTexture backed by a MTLBuffer; it is a special kind of MTLTexture that shares memory with a MTLBuffer. However, your C processing (openCV) will be running a frame or two behind, this is unavoidable as you need to submit the commands to the GPU (encoding) and the GPU needs to render it, if you use waitUntilCompleted to make sure the GPU is finished that just chews up the CPU and is wasteful. So the process would be: first you create the MTLBuffer then you use the MTLBuffer method "newTextureWithDescriptor:offset:bytesPerRow:" to create the special MTLTexture. You need to create the special MTLTexture beforehand (as an instance variable), then you need to setup up a standard rendering pipeline (faster than using compute shaders) that will take the MTLTexture created from the CMSampleBufferRef and pass this into your special MTLTexture, in that pass you can downscale and do any colour conversion as necessary in one pass. Then you submit the command buffer to the gpu, in a subsequent pass you can just call [theMTLbuffer contents] to grab the pointer to the bytes that back your special MTLTexture for use in openCV. Any technique that forces a halt in the CPU/GPU behaviour will never be efficient as half the time will be spent waiting i.e. the CPU waits for the GPU to finish and the GPU has to wait also for the next encodings (when the GPU is working you want the CPU to be encoding the next frame and doing any openCV work rather than waiting for the GPU to finish). Also, when people normally refer to real-time processing they usually are referring to some processing with real-time feedback (visual), all modern iOS devices from the 4s and above have a 60Hz screen refresh rate, so any feedback presented faster than that is pointless but if you need 2 frames (at 120Hz) to make 1 (at 60Hz) then you have to have a custom timer or modify CADisplayLink.
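A rough Swift sketch of the buffer-backed texture described above (names like width and height are assumed from the surrounding code, and bytesPerRow may need rounding up to the device's linear-texture alignment):

let bytesPerRow = width * 4                                  // BGRA8, one byte per channel
let backingBuffer = device.makeBuffer(length: bytesPerRow * height,
                                      options: .storageModeShared)!

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.storageMode = backingBuffer.storageMode
descriptor.usage = [.renderTarget, .shaderRead]

// The texture aliases the buffer's memory: once the GPU has finished the pass
// that renders into it, backingBuffer.contents() points at the pixels for OpenCV.
let sharedTexture = backingBuffer.makeTexture(descriptor: descriptor,
                                              offset: 0,
                                              bytesPerRow: bytesPerRow)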
Metal³
37,639,271
16
When working with Metal, I find there's a bewildering number of types and it's not always clear to me which type I should be using in which context. In Apple's Metal Shading Language Specification, there's a pretty clear table of which types are supported within a Metal shader file. However, there's plenty of sample code available that seems to use additional types that are part of SIMD. On the macOS (Objective-C) side of things, the Metal types are not available but the SIMD ones are and I'm not sure which ones I'm supposed to be used. For example: In the Metal Spec, there's float2 that is described as a "vector" data type representing two floating components. On the app side, the following all seem to be used or represented in some capacity: float2, which is typedef ::simd_float2 float2 in vector_types.h Noted: "In C or Objective-C, this type is available as simd_float2." vector_float2, which is typedef simd_float2 vector_float2 Noted: "This type is deprecated; you should use simd_float2 or simd::float2 instead" simd_float2, which is typedef __attribute__((__ext_vector_type__(2))) float simd_float2 ::simd_float2 and simd::float2 ? A similar situation exists for matrix types: matrix_float4x4, simd_float4x4, ::simd_float4x4 and float4x4, Could someone please shed some light on why there are so many typedefs with seemingly overlapping functionality? If you were writing a new application today (2018) in Objective-C / Objective-C++, which type should you use to represent two floating values (x/y) and which type for matrix transforms that can be shared between app code and Metal?
The types with vector_ and matrix_ prefixes have been deprecated in favor of those with the simd_ prefix, so the general guidance (using float4 as an example) would be: In C code, use the simd_float4 type. (You have to include the prefix unless you provide your own typedef, since C doesn't have namespaces.) Same for Objective-C. In C++ code, use the simd::float4 type, which you can shorten to float4 by using namespace simd;. Same for Objective-C++. In Metal code, use the float4 type, since float4 is a fundamental type in the Metal Shading Language [1]. In Swift code, use the float4 type, since the simd_ types are typealiased to shorter names. Update: In Swift 5, float4 and related types have been deprecated in favor of SIMD4<Float> and related types. These types are all fundamentally equivalent, and all have the same size and alignment characteristics so you can use them across languages. That is, in fact, one of the design goals of the simd framework. I'll leave a discussion of packed types to another day, since you didn't ask. [1] Metal is an unusual case since it defines float4 in the global namespace, then imports it into the metal namespace, which is also exported as the simd namespace. It additionally aliases float4 as vector_float4. So, you can use any of the above names for this vector type (except simd_float4). Prefer float4.
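As a small illustration (a sketch; uniformBuffer here is a hypothetical MTLBuffer), the same simd types can be declared in Swift and copied straight into a buffer that a Metal shader reads as float4x4/float4, since sizes and alignments agree across the languages:

import simd

struct Uniforms {
    var modelViewProjection: simd_float4x4   // float4x4 on the Metal side
    var tint: SIMD4<Float>                   // float4 on the Metal side
}

var uniforms = Uniforms(modelViewProjection: matrix_identity_float4x4,
                        tint: SIMD4<Float>(1, 0, 0, 1))

uniformBuffer.contents().copyMemory(from: &uniforms, byteCount: MemoryLayout<Uniforms>.stride)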
Metal³
51,790,490
16
On 18th May 2022, PyTorch announced support for GPU-accelerated PyTorch training on Mac. I followed the following process to set up PyTorch on my Macbook Air M1 (using miniconda). conda create -n torch-nightly python=3.8 $ conda activate torch-nightly $ pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu I am trying to execute a script from Udacity's Deep Learning Course available here. The script moves the models to GPU using the following code: G.cuda() D.cuda() However, this will not work on M1 chips, since there is no CUDA. If we want to move models to M1 GPU and our tensors to M1 GPU, and train entirely on M1 GPU, what should we be doing? If Relevant: G and D are Discriminator and Generators for GAN's. class Discriminator(nn.Module): def __init__(self, conv_dim=32): super(Discriminator, self).__init__() self.conv_dim = conv_dim # complete init function self.cv1 = conv(in_channels=3, out_channels=conv_dim, kernel_size=4, stride=2, padding=1, batch_norm=False) # 32*32*3 -> 16*16*32 self.cv2 = conv(in_channels=conv_dim, out_channels=conv_dim*2, kernel_size=4, stride=2, padding=1, batch_norm=True) # 16*16*32 -> 8*8*64 self.cv3 = conv(in_channels=conv_dim*2, out_channels=conv_dim*4, kernel_size=4, stride=2, padding=1, batch_norm=True) # 8*8*64 -> 4*4*128 self.fc1 = nn.Linear(in_features = 4*4*conv_dim*4, out_features = 1, bias=True) def forward(self, x): # complete forward function out = F.leaky_relu(self.cv1(x), 0.2) out = F.leaky_relu(self.cv2(x), 0.2) out = F.leaky_relu(self.cv3(x), 0.2) out = out.view(-1, 4*4*conv_dim*4) out = self.fc1(out) return out D = Discriminator(conv_dim) class Generator(nn.Module): def __init__(self, z_size, conv_dim=32): super(Generator, self).__init__() self.conv_dim = conv_dim self.z_size = z_size # complete init function self.fc1 = nn.Linear(in_features = z_size, out_features = 4*4*conv_dim*4) self.dc1 = deconv(in_channels = conv_dim*4, out_channels = conv_dim*2, kernel_size=4, stride=2, padding=1, batch_norm=True) self.dc2 = deconv(in_channels = conv_dim*2, out_channels = conv_dim, kernel_size=4, stride=2, padding=1, batch_norm=True) self.dc3 = deconv(in_channels = conv_dim, out_channels = 3, kernel_size=4, stride=2, padding=1, batch_norm=False) def forward(self, x): # complete forward function x = self.fc1(x) x = x.view(-1, conv_dim*4, 4, 4) x = F.relu(self.dc1(x)) x = F.relu(self.dc2(x)) x = F.tanh(self.dc3(x)) return x G = Generator(z_size=z_size, conv_dim=conv_dim)
This is what I used:

if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    G.to(mps_device)
    D.to(mps_device)

Similarly, for all tensors that I want to move to the M1 GPU, I used:

tensor_ = tensor_.to(mps_device)

Some operations are not yet implemented using MPS, and we might need to set a few environment variables to use CPU fallback instead. One error that I faced while executing the script was:

# NotImplementedError: The operator 'aten::_slow_conv2d_forward' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

To solve it, I set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1:

conda env config vars set PYTORCH_ENABLE_MPS_FALLBACK=1
conda activate <test-env>

References:
https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
https://pytorch.org/docs/master/notes/mps.html
https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#setting-environment-variables
Metal³
72,416,726
15
I'd like to build a dissolve in effect for a Scenekit game. I've been looking into shader modifiers since they seem to be the most light weight and haven't had any luck in replicating this effect: Is it possible to use shader modifiers to create this effect? How would you go about implementing one?
You can get pretty close to the intended effect with a fragment shader modifier. The basic approach is as follows: Sample from a noise texture If the noise sample is below a certain threshold (which I call "revealage"), discard it, making it fully transparent Otherwise, if the fragment is close to the edge, replace its color with your preferred edge color (or gradient) Apply bloom to make the edges glow Here's the shader modifier code for doing this: #pragma arguments float revealage; texture2d<float, access::sample> noiseTexture; #pragma transparent #pragma body const float edgeWidth = 0.02; const float edgeBrightness = 2; const float3 innerColor = float3(0.4, 0.8, 1); const float3 outerColor = float3(0, 0.5, 1); const float noiseScale = 3; constexpr sampler noiseSampler(filter::linear, address::repeat); float2 noiseCoords = noiseScale * _surface.ambientTexcoord; float noiseValue = noiseTexture.sample(noiseSampler, noiseCoords).r; if (noiseValue > revealage) { discard_fragment(); } float edgeDist = revealage - noiseValue; if (edgeDist < edgeWidth) { float t = edgeDist / edgeWidth; float3 edgeColor = edgeBrightness * mix(outerColor, innerColor, t); _output.color.rgb = edgeColor; } Notice that the revealage parameter is exposed as a material parameter, since you might want to animate it. There are other internal constants, such as edge width and noise scale that can be fine-tuned to get the desired effect with your content. Different noise textures produce different dissolve effects, so you can experiment with that as well. I just used this multioctave value noise image: Load the image as a UIImage or NSImage and set it on the material property that gets exposed as noiseTexture: material.setValue(SCNMaterialProperty(contents: noiseImage), forKey: "noiseTexture") You'll need to add bloom as a post-process to get that glowy, e-wire effect. In SceneKit, this is as simple as enabling the HDR pipeline and setting some parameters: let camera = SCNCamera() camera.wantsHDR = true camera.bloomThreshold = 0.8 camera.bloomIntensity = 2 camera.bloomBlurRadius = 16.0 camera.wantsExposureAdaptation = false All of the numeric parameters will potentially need to be tuned to your content. To keep things tidy, I prefer to keep shader modifiers in their own text files (I named mine "dissolve.fragment.txt"). Here's how to load some modifier code and attach it to a material. let modifierURL = Bundle.main.url(forResource: "dissolve.fragment", withExtension: "txt")! let modifierString = try! String(contentsOf: modifierURL) material.shaderModifiers = [ SCNShaderModifierEntryPoint.fragment : modifierString ] And finally, to animate the effect, you can use a CABasicAnimation wrapped with a SCNAnimation: let revealAnimation = CABasicAnimation(keyPath: "revealage") revealAnimation.timingFunction = CAMediaTimingFunction(name: .linear) revealAnimation.duration = 2.5 revealAnimation.fromValue = 0.0 revealAnimation.toValue = 1.0 let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation) material.addAnimation(scnRevealAnimation, forKey: "Reveal")
Metal³
54,562,128
14
I'm having trouble rendering semitransparent sprites in Metal. I have read this question, and this question, and this one, and this thread on Apple's forums, and several more, but can't quite get it to work, so please read on before marking this question as a duplicate. My reference texture has four rows and four columns. The rows are fully-saturated red, green, blue and black, respectively. The columns vary in opacity from 100% opaque to 25% opaque (1, 0.75, 0.5, 0.25 alpha, in that order). On Pixelmator (where I created it), it looks like this: If I insert a fully opaque white background before exporting it, it will look like this: ...However, when I texture-map it onto a quad in Metal, and render that after clearing the background to opaque white (255, 255, 255, 255), I get this: ...which is clearly darker than it should be in the non-opaque fragments (the bright white behind should "bleed through"). Implementation Details I imported the png file into Xcode as a texture asset in my app's asset catalog, and at runtime, I load it using MTKTextureLoader. The .SRGB option doesn't seem to make a difference. The shader code is not doing anything fancy as far as I can tell, but for reference: #include <metal_stdlib> using namespace metal; struct Constants { float4x4 modelViewProjection; }; struct VertexIn { float4 position [[ attribute(0) ]]; float2 texCoords [[ attribute(1) ]]; }; struct VertexOut { float4 position [[position]]; float2 texCoords; }; vertex VertexOut sprite_vertex_transform(device VertexIn *vertices [[buffer(0)]], constant Constants &uniforms [[buffer(1)]], uint vertexId [[vertex_id]]) { float4 modelPosition = vertices[vertexId].position; VertexOut out; out.position = uniforms.modelViewProjection * modelPosition; out.texCoords = vertices[vertexId].texCoords; return out; } fragment float4 sprite_fragment_textured(VertexOut fragmentIn [[stage_in]], texture2d<float, access::sample> tex2d [[texture(0)]], constant Constants &uniforms [[buffer(1)]], sampler sampler2d [[sampler(0)]]) { float4 surfaceColor = tex2d.sample(sampler2d, fragmentIn.texCoords); return surfaceColor; } On the app side, I am using the following (pretty standard) blend factors and operations on my render pass descriptor: descriptor.colorAttachments[0].rgbBlendOperation = .add descriptor.colorAttachments[0].alphaBlendOperation = .add descriptor.colorAttachments[0].sourceRGBBlendFactor = .one descriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha descriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha (I have tried changing the sourceRGBBlendFactor from .one to .sourceAlpha makes it a bit darker.) If I render the image on a red background (255, 0, 0, 255) instead, I get this: Notice how the top row gets gradually darker towards the right. It should be the same color all along since it is blending two colors that have the same RGB component (255, 0, 0). I have stripped my app to its bare minimum and put a demo project on Github; The full Metal setup can be seen in the repository's source code. Perhaps there's something I didn't mention that is causing this, but can't quite figure out what... Edit: As suggested by @KenThomases in the comments, I changed the value of the MTKView property colorPixelFormat from the default of .bgra8Unorm to bgra8Unorm_srgb, and set the colorSpace property to the same as view.window?.colorSpace?.cgColorSpace. 
Now, the semitransparent fragments look much less dark, but still not the expected color: (The top row should be completely 'invisible' against the red background, left to right.) Addendum I came up across Apple's docs on using the Shader Debugger, so I decided to take a look at what happens in the fragment shader when my app draws one of the top-right fragments of the sprite (which is suposed to be fully-saturated red at 25% opacity). Interestingly enough, the value returned from the fragment shader (to which alpha blending will be then applied, based on the color buffer's current color and the blend factors/functions) is [0.314, 0.0, 0.0, 0.596]: This RGBA value seems to be completely unaffected by whether MTKTextureLoader.Option.SRGB is true, false, or absent. Notice that the red component (0.314) and the alpha component (0.596) are not equal, although (if I'm not mistaken) they should be, for a fully-saturated red with premultiplied alpha. I guess this means I've narrowed my issue down to the texture loading stage...? Perhaps I should abandon the convenient MTKTextureLoader and get my hands dirty...?
Well, it turns out the problem was indeed in the texture loading stage, but not in any piece of code that I could possibly tweak (at least not if sticking to MTKTextureLoader). It seems that I needed to introduce some changes to the Attributes Inspector of my asset catalog in Xcode (but at least now I get to tag my original question with Xcode: one step closer to the bronze badge!).

Specifically, I had to change the texture set's Interpretation attribute from the default option of "Colors" to "Colors (non-premultiplied)":

Clearly, these asset catalog texture sets were designed with more traditional texture image formats in mind, such as e.g. TGA, and not PNG (which is officially non-premultiplied, as per the specification). I somehow expected that MTKTextureLoader would be smart enough to do this for me at load time. Evidently, it is not a piece of information that can be reliably read from (e.g.) a PNG file's metadata/header.

Now, my reference texture is rendered in all its bright glory:

As a final, more rigorous test, I can confirm that all 4 colors "disappear" over an equivalent RGB background, regardless of the texels' opacities:
Metal³
55,604,226
14
In Metal, what coordinate system is used inside the shader (in and out)? And when we render to a texture, is it the same? With the z-buffer also? Are there any inconsistencies? Finally, what are the differences between Metal, OpenGL and DirectX?
Metal Coordinate Systems

Metal defines several standard coordinate systems to represent transformed graphics data at different stages along the rendering pipeline.

1) NDC (Normalized Device Coordinates): these coordinates are used by developers to construct their geometry and transform it in the vertex shader via model and view matrices. Point(-1, -1) in NDC is located at the bottom-left corner (Y up).

2) Framebuffer coordinates (viewport coordinates): when we write into an attachment, read from an attachment, or copy/blit between attachments, we use framebuffer coordinates to specify the location. The origin (0, 0) is located at the top-left corner (Y down).

3) Texture coordinates: when we upload a texture into memory or sample from a texture, we use texture coordinates. The origin (0, 0) is located at the top-left corner (Y down).

D3D12 and Metal
NDC: +Y is up. Point(-1, -1) is at the bottom-left corner.
Framebuffer coordinates: +Y is down. Origin (0, 0) is at the top-left corner.
Texture coordinates: +Y is down. Origin (0, 0) is at the top-left corner.

OpenGL, OpenGL ES and WebGL
NDC: +Y is up. Point(-1, -1) is at the bottom-left corner.
Framebuffer coordinates: +Y is up. Origin (0, 0) is at the bottom-left corner.
Texture coordinates: +Y is up. Origin (0, 0) is at the bottom-left corner.

Vulkan
NDC: +Y is down. Point(-1, -1) is at the top-left corner.
Framebuffer coordinates: +Y is down. Origin (0, 0) is at the top-left corner.
Texture coordinates: +Y is up. Origin (0, 0) is at the bottom-left corner.
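A small Swift helper (a sketch, not part of the original answer) that captures the practical consequence of the first two Metal systems: converting a normalized point from Metal's Y-up NDC to its Y-down texture/framebuffer space.

import simd

func textureCoordinate(fromNDC ndc: SIMD2<Float>) -> SIMD2<Float> {
    let uv = ndc * 0.5 + 0.5              // [-1, 1] -> [0, 1]
    return SIMD2<Float>(uv.x, 1.0 - uv.y) // flip Y: NDC is Y-up, textures are Y-down
}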
Metal³
58,702,023
14
It has been a while since I have used Xcode, but since Apple revealed Xcode 6 and the new Metal API I had to check it out. They have released examples, e.g. a converted version of their Basic3D example. I am having problems making them compile and run, though. I had to add paths to the Quartz and Metal frameworks for compilation and linking to work - I thought that should have worked out of the box with built-in libraries? Also, the Metal shader compilation step fails; it cannot find the Metal compiler. Where in Xcode is that path set up? If I ignore the shader compilation step, I get an error that the application will not run on any of the simulators. Can Metal-based 3D applications only run on actual hardware and not in simulator mode?
According to a staff member on the Apple dev forums, Metal does not run in the simulator. See: https://devforums.apple.com/message/971605#971605
If you look at other samples/app templates, there is no need to explicitly link any framework; indeed, everything runs out of the box.
Metal³
24,046,125
13
Usually, I use the below code to identify the iOS version of the device. if ([[UIDevice currentDevice].systemVersion floatValue] >= 8.0) In a similar way, I'm trying to find Metal support for the device. Metal is supported for Apple devices with the A7 (or better) GPU and iOS 8.0. This is the way I expect my code to work: if (MetalSupported == true) { // metal programming } else { // opengles2 programming } How do I get the value for the Boolean variable MetalSupported ?
It's good that you're looking for something specific to Metal — generally, iOS version checks and hardware name checks are fragile, because they rely on your app knowing all of the OS versions and devices that could ever run it. If Apple were to go back and release an iOS 7.x version that added Metal support (okay, seems unlikely), or a device that supports Metal but isn't one of the hardware names you're looking at (seems much more likely), you'd be stuck having to track all of those things down and update your app to manage them. Anyway, the best way to check whether the device you're running on is Metal enough for your awesome graphics code? Just try to get a MTLDevice object: id<MTLDevice> device = MTLCreateSystemDefaultDevice(); if (device) { // ready to rock 🤘 } else { // back to OpenGL } Note that just testing for the presence of a Metal framework class doesn't help — those classes are there on any device running iOS 8 (all the way back to iPhone 4s & iPad 2), regardless of whether that device has a Metal-capable GPU. In Simulator, Metal is supported as of iOS 13 / tvOS 13 when running on macOS 10.15. Use the same strategy: call MTLCreateSystemDefaultDevice(). If it returns an object then your simulator code is running in an environment where the simulator is hardware-accelerated. If it returns nil then you're running on an older simulator or in an environment where Metal is not available.
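The same check in Swift, for reference (a minimal sketch):

import Metal

if let device = MTLCreateSystemDefaultDevice() {
    // ready to rock; keep the device around for buffers, textures and pipelines
} else {
    // back to OpenGL ES
}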
Metal³
29,790,663
13
I have a MTLTexture containing 16bit unsigned integers (MTLPixelFormatR16Uint). The values range from about 7000 to 20000, with 0 being used as a 'nodata' value, which is why it is skipped in the code below. I'd like to find the minimum and maximum values so I can rescale these values between 0-255. Ultimately I'll be looking to base the minimum and maximum values on a histogram of the data (it has some outliers), but for now I'm stuck on simply extracting the min/max. I can read the data from the GPU to CPU and pull the min/max values out but would prefer to perform this task on the GPU. First attempt The command encoder is dispatched with 16x16 threads per thread group, the number of thread groups is based on the texture size (eg; width = textureWidth / 16, height = textureHeight / 16). typedef struct { atomic_uint min; atomic_uint max; } BandMinMax; kernel void minMax(texture2d<ushort, access::read> band1 [[texture(0)]], device BandMinMax &out [[buffer(0)]], uint2 gid [[thread_position_in_grid]]) { ushort value = band1.read(gid).r; if (value != 0) { uint currentMin = atomic_load_explicit(&out.min, memory_order_relaxed); uint currentMax = atomic_load_explicit(&out.max, memory_order_relaxed); if (value > currentMax) { atomic_store_explicit(&out.max, value, memory_order_relaxed); } if (value < currentMin) { atomic_store_explicit(&out.min, value, memory_order_relaxed); } } } From this I get a minimum and maximum value, but for the same dataset the min and max will often return different values. Fairly certain this is the min and max from a single thread when there are multiple threads running. Second attempt Building on the previous attempt, this time I'm storing the individual min/max values from each thread, all 256 (16x16). kernel void minMax(texture2d<ushort, access::read> band1 [[texture(0)]], device BandMinMax *out [[buffer(0)]], uint2 gid [[thread_position_in_grid]], uint tid [[ thread_index_in_threadgroup ]]) { ushort value = band1.read(gid).r; if (value != 0) { uint currentMin = atomic_load_explicit(&out[tid].min, memory_order_relaxed); uint currentMax = atomic_load_explicit(&out[tid].max, memory_order_relaxed); if (value > currentMax) { atomic_store_explicit(&out[tid].max, value, memory_order_relaxed); } if (value < currentMin) { atomic_store_explicit(&out[tid].min, value, memory_order_relaxed); } } } This returns an array containing 256 sets of min/max values. From these I guess I could find the lowest of the minimum values, but this seems like a poor approach. Would appreciate a pointer in the right direction, thanks!
The Metal Shading Language has atomic compare-and-swap functions you can use to compare the existing value at a memory location with a value, and replace the value at that location if they don't compare equal. With these, you can create a set of atomic compare-and-replace-if-[greater|less]-than operations: static void atomic_uint_exchange_if_less_than(volatile device atomic_uint *current, uint candidate) { uint val; do { val = *((device uint *)current); } while ((candidate < val || val == 0) && !atomic_compare_exchange_weak_explicit(current, &val, candidate, memory_order_relaxed, memory_order_relaxed)); } static void atomic_uint_exchange_if_greater_than(volatile device atomic_uint *current, uint candidate) { uint val; do { val = *((device uint *)current); } while (candidate > val && !atomic_compare_exchange_weak_explicit(current, &val, candidate, memory_order_relaxed, memory_order_relaxed)); } To apply these, you might create a buffer that contains one interleaved min, max pair per threadgroup. Then, in the kernel function, read from the texture and conditionally write the min and max values: kernel void min_max_per_threadgroup(texture2d<ushort, access::read> texture [[texture(0)]], device uint *mapBuffer [[buffer(0)]], uint2 tpig [[thread_position_in_grid]], uint2 tgpig [[threadgroup_position_in_grid]], uint2 tgpg [[threadgroups_per_grid]]) { ushort val = texture.read(tpig).r; device atomic_uint *atomicBuffer = (device atomic_uint *)mapBuffer; atomic_uint_exchange_if_less_than(atomicBuffer + ((tgpig[1] * tgpg[0] + tgpig[0]) * 2), val); atomic_uint_exchange_if_greater_than(atomicBuffer + ((tgpig[1] * tgpg[0] + tgpig[0]) * 2) + 1, val); } Finally, run a separate kernel to reduce over this buffer and collect the final min, max values across the entire texture: kernel void min_max_reduce(constant uint *mapBuffer [[buffer(0)]], device uint *reduceBuffer [[buffer(1)]], uint2 tpig [[thread_position_in_grid]]) { uint minv = mapBuffer[tpig[0] * 2]; uint maxv = mapBuffer[tpig[0] * 2 + 1]; device atomic_uint *atomicBuffer = (device atomic_uint *)reduceBuffer; atomic_uint_exchange_if_less_than(atomicBuffer, minv); atomic_uint_exchange_if_greater_than(atomicBuffer + 1, maxv); } Of course, you can only reduce over the total allowed thread execution width of the device (~256), so you may need to do the reduction in multiple passes, with each one reducing the size of the data to be operated on by a factor of the maximum thread execution width. Disclaimer: This may not be the best technique, but it does appear to be correct in my limited testing of an OS X implementation. It was marginally faster than a naive CPU implementation on a 256x256 texture on Intel Iris Pro, but substantially slower on an Nvidia GT 750M (because of dispatch overhead).
Metal³
36,663,645
13
In Metal what is the difference between a packed_float4 and a float4?
This information is from here.

float4 has an alignment of 16 bytes. This means that the memory address of such a type (e.g. 0x12345670) will be divisible by 16 (aka the last hexadecimal digit is 0).

packed_float4 on the other hand has an alignment of 4 bytes. The last digit of the address will be 0, 4, 8 or c.

This does matter when you create custom structs. Say you want a struct with 2 normal floats and 1 float4/packed_float4:

struct A {
    float x, y;
    float4 z;
};

struct B {
    float x, y;
    packed_float4 z;
};

For A: The alignment of float4 has to be 16 and since float4 has to be after the normal floats, there is going to be 8 bytes of empty space between y and z. Here is what A looks like in memory:

Address | 0x200 | 0x204 | 0x208 | 0x20c | 0x210 | 0x214 | 0x218 | 0x21c |
Content |   x   |   y   |   -   |   -   |  z1   |  z2   |  z3   |  z4   |

(z has to be 16-byte aligned, hence it starts at 0x210.)

For B: The alignment of packed_float4 is 4, the same as float, so it can follow right after the floats in any case:

Address | 0x200 | 0x204 | 0x208 | 0x20c | 0x210 | 0x214 |
Content |   x   |   y   |  z1   |  z2   |  z3   |  z4   |

As you can see, A takes up 32 bytes whereas B only uses 24 bytes. When you have an array of those structs, A will take up 8 more bytes for every element. So for passing around a lot of data, the latter is preferred.

The reason you need float4 at all is because the GPU can't handle 4-byte-aligned packed_float4s; you won't be able to return a packed_float4 in a shader. This is because of performance, I assume.

One last thing: when you declare the Swift version of a struct:

struct S {
    let x, y: Float
    let z: (Float, Float, Float, Float)
}

This struct will be equal to B in Metal and not A. A tuple is like a packed_floatN.

All of this also applies to other vector types such as packed_float3, packed_short2, etc.
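These layout differences can be checked from the Swift side too; a quick sketch (values as of current Swift/simd):

import simd

print(MemoryLayout<simd_float4>.alignment)                   // 16, matches Metal's float4
print(MemoryLayout<simd_float4>.stride)                      // 16
print(MemoryLayout<(Float, Float, Float, Float)>.alignment)  // 4, a tuple behaves like packed_float4
print(MemoryLayout<(Float, Float, Float, Float)>.stride)     // 16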
Metal³
38,773,807
13
Is it possible to import or include metal file into another metal file? Say I have a metal file with all the math functions and I will only include or import it if it is needed in my metal project. Is it possible? I tried: #include "sdf.metal" and I got error: metallib: Multiply defined symbols _Z4vmaxDv2_f Command/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/usr/bin/metallib failed with exit code 1 Update: Here are both my shader files: SDF.metal: #ifndef MYAPP_METAL_CONSTANTS #define MYAPP_METAL_CONSTANTS #include <metal_stdlib> namespace metal { float kk(float2 v) { return max(v.x, v.y); } float kkk(float3 v) { return max(max(v.x, v.y), v.z); } } #endif And Shaders.metal: #include <metal_stdlib> #include "SDF.metal" using namespace metal; float fBoxCheap(float3 p, float3 b) { //cheap box return kkk(abs(p) - b); } float map( float3 p ) { float box2 = fBoxCheap(p-float3(0.0,3.0,0.0),float3(4.0,3.0,1.0)); return box2; } float3 getNormal( float3 p ) { float3 e = float3( 0.001, 0.00, 0.00 ); float deltaX = map( p + e.xyy ) - map( p - e.xyy ); float deltaY = map( p + e.yxy ) - map( p - e.yxy ); float deltaZ = map( p + e.yyx ) - map( p - e.yyx ); return normalize( float3( deltaX, deltaY, deltaZ ) ); } float trace( float3 origin, float3 direction, thread float3 &p ) { float totalDistanceTraveled = 0.0; for( int i=0; i <64; ++i) { p = origin + direction * totalDistanceTraveled; float distanceFromPointOnRayToClosestObjectInScene = map( p ); totalDistanceTraveled += distanceFromPointOnRayToClosestObjectInScene; if( distanceFromPointOnRayToClosestObjectInScene < 0.0001 ) { break; } if( totalDistanceTraveled > 10000.0 ) { totalDistanceTraveled = 0.0000; break; } } return totalDistanceTraveled; } float3 calculateLighting(float3 pointOnSurface, float3 surfaceNormal, float3 lightPosition, float3 cameraPosition) { float3 fromPointToLight = normalize(lightPosition - pointOnSurface); float diffuseStrength = clamp( dot( surfaceNormal, fromPointToLight ), 0.0, 1.0 ); float3 diffuseColor = diffuseStrength * float3( 1.0, 0.0, 0.0 ); float3 reflectedLightVector = normalize( reflect( -fromPointToLight, surfaceNormal ) ); float3 fromPointToCamera = normalize( cameraPosition - pointOnSurface ); float specularStrength = pow( clamp( dot(reflectedLightVector, fromPointToCamera), 0.0, 1.0 ), 10.0 ); // Ensure that there is no specular lighting when there is no diffuse lighting. specularStrength = min( diffuseStrength, specularStrength ); float3 specularColor = specularStrength * float3( 1.0 ); float3 finalColor = diffuseColor + specularColor; return finalColor; } kernel void compute(texture2d<float, access::write> output [[texture(0)]], constant float &timer [[buffer(1)]], constant float &mousex [[buffer(2)]], constant float &mousey [[buffer(3)]], uint2 gid [[thread_position_in_grid]]) { int width = output.get_width(); int height = output.get_height(); float2 uv = float2(gid) / float2(width, height); uv = uv * 2.0 - 1.0; // scale proportionately. 
if(width > height) uv.x *= float(width)/float(height); if(width < height) uv.y *= float(height)/float(width); float posx = mousex * 2.0 - 1.0; float posy = mousey * 2.0 - 1.0; float3 cameraPosition = float3( posx * 0.01,posy * 0.01, -10.0 ); float3 cameraDirection = normalize( float3( uv.x, uv.y, 1.0) ); float3 pointOnSurface; float distanceToClosestPointInScene = trace( cameraPosition, cameraDirection, pointOnSurface ); float3 finalColor = float3(1.0); if( distanceToClosestPointInScene > 0.0 ) { float3 lightPosition = float3( 5.0, 2.0, -10.0 ); float3 surfaceNormal = getNormal( pointOnSurface ); finalColor = calculateLighting( pointOnSurface, surfaceNormal, lightPosition, cameraPosition ); } output.write(float4(float3(finalColor), 1), gid); } Update2: and my MetalView.swift: import MetalKit public class MetalView: MTKView, NSWindowDelegate { var queue: MTLCommandQueue! = nil var cps: MTLComputePipelineState! = nil var timer: Float = 0 var timerBuffer: MTLBuffer! var mousexBuffer: MTLBuffer! var mouseyBuffer: MTLBuffer! var pos: NSPoint! var floatx: Float! var floaty: Float! required public init(coder: NSCoder) { super.init(coder: coder) self.framebufferOnly = false device = MTLCreateSystemDefaultDevice() registerShaders() } override public func drawRect(dirtyRect: NSRect) { super.drawRect(dirtyRect) if let drawable = currentDrawable { let command_buffer = queue.commandBuffer() let command_encoder = command_buffer.computeCommandEncoder() command_encoder.setComputePipelineState(cps) command_encoder.setTexture(drawable.texture, atIndex: 0) command_encoder.setBuffer(timerBuffer, offset: 0, atIndex: 1) command_encoder.setBuffer(mousexBuffer, offset: 0, atIndex: 2) command_encoder.setBuffer(mouseyBuffer, offset: 0, atIndex: 3) update() let threadGroupCount = MTLSizeMake(8, 8, 1) let threadGroups = MTLSizeMake(drawable.texture.width / threadGroupCount.width, drawable.texture.height / threadGroupCount.height, 1) command_encoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount) command_encoder.endEncoding() command_buffer.presentDrawable(drawable) command_buffer.commit() } } func registerShaders() { queue = device!.newCommandQueue() do { let library = device!.newDefaultLibrary()! let kernel = library.newFunctionWithName("compute")! timerBuffer = device!.newBufferWithLength(sizeof(Float), options: []) mousexBuffer = device!.newBufferWithLength(sizeof(Float), options: []) mouseyBuffer = device!.newBufferWithLength(sizeof(Float), options: []) cps = try device!.newComputePipelineStateWithFunction(kernel) } catch let e { Swift.print("\(e)") } } func update() { timer += 0.01 var bufferPointer = timerBuffer.contents() memcpy(bufferPointer, &timer, sizeof(Float)) bufferPointer = mousexBuffer.contents() memcpy(bufferPointer, &floatx, sizeof(NSPoint)) bufferPointer = mouseyBuffer.contents() memcpy(bufferPointer, &floaty, sizeof(NSPoint)) } override public func mouseDragged(event: NSEvent) { pos = convertPointToLayer(convertPoint(event.locationInWindow, fromView: nil)) let scale = layer!.contentsScale pos.x *= scale pos.y *= scale floatx = Float(pos.x) floaty = Float(pos.y) debugPrint("Hello",pos.x,pos.y) } } Update 3 After implement as per KickimusButticus's solution, the shader did compile. However I have another error:
Your setup is incorrect (EDIT: And so was my setup in my other answer and the previous version of this answer.) You can use a header just like in C++ (Metal is based on C++11, after all...). All you need one is more file, I'll call it SDF.h. The file includes function prototype declarations without a namespace declaration. And you need to #include it after the using namespace metal; declaration in your other files. Make sure the header file is not a .metal file and that it is not in the Compile Sources list in your Build Phases. If the header is being treated as a compiled source, that's most likely what's causing the CompilerError. SDF.h: // SDFHeaders.metal #ifndef SDF_HEADERS #define SDF_HEADERS float kk(float2 v); float kkk(float3 v); #endif SDF.metal: #include <metal_stdlib> using namespace metal; #include "SDF.h" float kk(float2 v) { return max(v.x, v.y); } float kkk(float3 v) { return max(max(v.x, v.y), v.z); } Shaders.metal: Here is where you use the functions after including SDF.h. // Shaders.metal #include <metal_stdlib> using namespace metal; #include "SDF.h" float fBoxCheap(float3 p, float3 b) { //cheap box return kkk(abs(p) - b); } // ... And of course, build after cleaning. Good luck!
Metal³
39,283,565
13
I got this passthrough vertex shader I used from Apple's sample code: vertex VertexIO vertexPassThrough(device packed_float4 *pPosition [[ buffer(0) ]], device packed_float2 *pTexCoords [[ buffer(1) ]], uint vid [[ vertex_id ]]) { VertexIO outVertex; outVertex.position = pPosition[vid]; outVertex.textureCoord = pTexCoords[vid]; return outVertex; } This worked in Swift 4/Xcode 10/iOS 12. Now I with Swift 5/Xcode 11/iOS 13, I get this warning: writable resources in non-void vertex function
You need to ensure the shader can only read from those buffers, so you need to change the declaration to const device: vertex VertexIO vertexPassThrough(const device packed_float4 *pPosition [[ buffer(0) ]], const device packed_float2 *pTexCoords [[ buffer(1) ]], uint vid [[ vertex_id ]]) { ... }
Metal³
57,692,571
13
I want to pass a float to my metal shader. I cannot figure out how. Here is my shader: vertex float4 model_vertex(unsigned int iid[[instance_id]] constant float angle) { float number = float(iid) / 64.0; return float4(number * sin(angle), number * cos(angle), 0.0, 1.0); } Now I want to pass it to the shader: let renderPassDescriptor = MTLRenderPassDescriptor() let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor) // ... let angle: Float = 0.5 renderEncoder.setUniform1(angle) // What do I do here? How do I pass the single float value?
Also, in OS X 10.11+ and iOS 9+ you can use:
public func setVertexBytes(bytes: UnsafePointer<Void>, length: Int, atIndex index: Int)
which is documented to be better than creating an MTLBuffer if you're only using the buffer once (and your data is less than 4K long).
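In current Swift the same call is spelled setVertexBytes(_:length:index:); a minimal sketch of passing the angle this way (buffer index 2 is chosen arbitrarily here and must match the [[buffer(2)]] attribute on the shader parameter):

var angle: Float = 0.5
renderEncoder.setVertexBytes(&angle, length: MemoryLayout<Float>.size, index: 2)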
Metal³
26,298,840
12
This is an absolute beginner question. Background: I’m not really a game developer, but I’m trying to learn the basics of low-level 3D programming, because it’s a fun and interesting topic. I’ve picked Apple’s Metal as the graphics framework. I know about SceneKit and other higher level frameworks, but I’m intentionally trying to learn the low level bits. Unfortunately I’m way out of my depth, and there seems to be very little beginner-oriented Metal resources on the web. By reading the Apple documentation and following the tutorials I could find, I’ve managed to implement a simple vertex shader and a fragment shader and draw an actual 3D model on the screen. Now I’m trying to draw a second a model, but I’m kind of stuck, because I’m not really sure what’s really the best way to go about it. Do I… Use a single vertex buffer and index buffer for all of my models, and tell the MTLRenderCommandEncoder the offsets when rendering the individual models? Have a separate vertex buffer / index buffer for each model? Would such an approach scale? Something else? TL;DR: What is the recommended way to store the vertex data of multiple models in Metal (or any other 3D framework)?
There is no one recommended way. When you're working at such a low level as Metal, there are many possibilities, and the one you pick depends heavily on the situation and what performance characteristics you want/need to optimize for. If you're just playing around with intro projects, most of these decisions are irrelevant, because the performance issues won't bite until you scale up to a "real" project. Typically, game engines use one buffer (or set of vertex/index buffers) per model, especially if each model requires different render states (e.g. shaders, bound textures). This means that when new models are introduced to the scene or old ones no longer needed, the requisite resources can be loaded into / removed from GPU memory (by way of creating / destroying MTL objects). The main use case for doing multiple draws out of (different parts of) the same buffer is when you're mutating the buffer. For example, on frame n you're using the first 1KB of a buffer to draw with, while at the same time you're computing / streaming in new vertex data and writing it to the second 1KB of the buffer... then for frame n + 1 you switch which parts of the buffer are being used for what.
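For illustration, a sketch of the single-shared-buffer option from the question; the model bookkeeping fields (vertexOffset, indexOffset, indexCount) are hypothetical, not from any particular engine:

for model in models {
    encoder.setVertexBuffer(sharedVertexBuffer, offset: model.vertexOffset, index: 0)
    encoder.drawIndexedPrimitives(type: .triangle,
                                  indexCount: model.indexCount,
                                  indexType: .uint16,
                                  indexBuffer: sharedIndexBuffer,
                                  indexBufferOffset: model.indexOffset)
}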
Metal³
34,485,259
12
I am trying to compute sum of large array in parallel with metal swift. Is there a god way to do it? My plane was that I divide my array to sub arrays, compute sum of one sub arrays in parallel and then when parallel computation is finished compute sum of sub sums. for example if I have array = [a0,....an] I divide array in sub arrays : array_1 = [a_0,...a_i], array_2 = [a_i+1,...a_2i], .... array_n/i = [a_n-1, ... a_n] sums for this arrays is computed in parallel and I get sum_1, sum_2, sum_3, ... sum_n/1 at the end just compute sum of sub sums. I create application which run my metal shader, but some things I don't understand quite. var array:[[Float]] = [[1,2,3], [4,5,6], [7,8,9]] // get device let device: MTLDevice! = MTLCreateSystemDefaultDevice() // get library let defaultLibrary:MTLLibrary! = device.newDefaultLibrary() // queue let commandQueue:MTLCommandQueue! = device.newCommandQueue() // function let kernerFunction: MTLFunction! = defaultLibrary.newFunctionWithName("calculateSum") // pipeline with function let pipelineState: MTLComputePipelineState! = try device.newComputePipelineStateWithFunction(kernerFunction) // buffer for function let commandBuffer:MTLCommandBuffer! = commandQueue.commandBuffer() // encode function let commandEncoder:MTLComputeCommandEncoder = commandBuffer.computeCommandEncoder() // add function to encode commandEncoder.setComputePipelineState(pipelineState) // options let resourceOption = MTLResourceOptions() let arrayBiteLength = array.count * array[0].count * sizeofValue(array[0][0]) let arrayBuffer = device.newBufferWithBytes(&array, length: arrayBiteLength, options: resourceOption) commandEncoder.setBuffer(arrayBuffer, offset: 0, atIndex: 0) var result:[Float] = [0,0,0] let resultBiteLenght = sizeofValue(result[0]) let resultBuffer = device.newBufferWithBytes(&result, length: resultBiteLenght, options: resourceOption) commandEncoder.setBuffer(resultBuffer, offset: 0, atIndex: 1) let threadGroupSize = MTLSize(width: 1, height: 1, depth: 1) let threadGroups = MTLSize(width: (array.count), height: 1, depth: 1) commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupSize) commandEncoder.endEncoding() commandBuffer.commit() commandBuffer.waitUntilCompleted() let data = NSData(bytesNoCopy: resultBuffer.contents(), length: sizeof(Float), freeWhenDone: false) data.getBytes(&result, length: result.count * sizeof(Float)) print(result) is my Swift code, my shader is : kernel void calculateSum(const device float *inFloat [[buffer(0)]], device float *result [[buffer(1)]], uint id [[ thread_position_in_grid ]]) { float * f = inFloat[id]; float sum = 0; for (int i = 0 ; i < 3 ; ++i) { sum = sum + f[i]; } result = sum; } I don't know how to defined that inFloat is array of array. I don't know exactly what is threadGroupSize and threadGroups. I don't know what is device and uint in shader properties. Is this right approach?
I took the time to create a fully working example of this problem with Metal. The explanation is in the comments: let count = 10_000_000 let elementsPerSum = 10_000 // Data type, has to be the same as in the shader typealias DataType = CInt let device = MTLCreateSystemDefaultDevice()! let library = self.library(device: device) let parsum = library.makeFunction(name: "parsum")! let pipeline = try! device.makeComputePipelineState(function: parsum) // Our data, randomly generated: var data = (0..<count).map{ _ in DataType(arc4random_uniform(100)) } var dataCount = CUnsignedInt(count) var elementsPerSumC = CUnsignedInt(elementsPerSum) // Number of individual results = count / elementsPerSum (rounded up): let resultsCount = (count + elementsPerSum - 1) / elementsPerSum // Our data in a buffer (copied): let dataBuffer = device.makeBuffer(bytes: &data, length: MemoryLayout<DataType>.stride * count, options: [])! // A buffer for individual results (zero initialized) let resultsBuffer = device.makeBuffer(length: MemoryLayout<DataType>.stride * resultsCount, options: [])! // Our results in convenient form to compute the actual result later: let pointer = resultsBuffer.contents().bindMemory(to: DataType.self, capacity: resultsCount) let results = UnsafeBufferPointer<DataType>(start: pointer, count: resultsCount) let queue = device.makeCommandQueue()! let cmds = queue.makeCommandBuffer()! let encoder = cmds.makeComputeCommandEncoder()! encoder.setComputePipelineState(pipeline) encoder.setBuffer(dataBuffer, offset: 0, index: 0) encoder.setBytes(&dataCount, length: MemoryLayout<CUnsignedInt>.size, index: 1) encoder.setBuffer(resultsBuffer, offset: 0, index: 2) encoder.setBytes(&elementsPerSumC, length: MemoryLayout<CUnsignedInt>.size, index: 3) // We have to calculate the sum `resultCount` times => amount of threadgroups is `resultsCount` / `threadExecutionWidth` (rounded up) because each threadgroup will process `threadExecutionWidth` threads let threadgroupsPerGrid = MTLSize(width: (resultsCount + pipeline.threadExecutionWidth - 1) / pipeline.threadExecutionWidth, height: 1, depth: 1) // Here we set that each threadgroup should process `threadExecutionWidth` threads, the only important thing for performance is that this number is a multiple of `threadExecutionWidth` (here 1 times) let threadsPerThreadgroup = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1) encoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup) encoder.endEncoding() var start, end : UInt64 var result : DataType = 0 start = mach_absolute_time() cmds.commit() cmds.waitUntilCompleted() for elem in results { result += elem } end = mach_absolute_time() print("Metal result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))") result = 0 start = mach_absolute_time() data.withUnsafeBufferPointer { buffer in for elem in buffer { result += elem } } end = mach_absolute_time() print("CPU result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))") I used my Mac to test it, but it should work just fine on iOS. Output: Metal result: 494936505, time: 0.024611456 CPU result: 494936505, time: 0.163341018 The Metal version is about 7 times faster. I'm sure you can get more speed if you implement something like divide-and-conquer with cutoff or whatever.
Metal³
38,164,634
12
I have a Metal texture, I want to access its data from Swift by making it a float4 array (so that I can access each pixel 4 color components). I discovered this method of MTLTexture : getBytes(UnsafeMutableRawPointer, bytesPerRow: Int, bytesPerImage: Int, from: MTLRegion, mipmapLevel: Int, slice: Int) I don't know at all how to use UnsafeMutableRawPointer, how it works, and how to get the data back into a simple Swift array. My first attempt was to create a pointer and allocate enough space like that, but I don't even know if that's what I should do: var pointer = UnsafeMutableRawPointer.allocate(bytes: myTextureSizeInBytes, alignedTo: 0) Then I have no idea at all of how to get this data back into a standard Swift array... Thank you.
First, let's assume you have a UnsafeRawPointer and a length: let ptr: UnsafeRawPointer = ... let length: Int = ... Now you want to convert that to an [float4]. First, you can convert your UnsafeRawPointer to a typed pointer by binding it to a type: let float4Ptr = ptr.bindMemory(to: float4.self, capacity: length) Now you can convert that to a typed buffer pointer: let float4Buffer = UnsafeBufferPointer(start: float4Ptr, count: length) And since a buffer is a collection, you can initialize an array with it: let output = Array(float4Buffer) For much more on working with UnsafeRawPointer, see SE-0138, SE-0107, and the UnsafeRawPointer Migration Guide.
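Putting the pieces together with MTLTexture.getBytes, a hedged sketch (this assumes the texture really stores one float4 per pixel, i.e. .rgba32Float, and that its storage mode lets the CPU read it; SIMD4<Float> is the modern Swift name for float4):

let count = texture.width * texture.height
let bytesPerRow = texture.width * MemoryLayout<SIMD4<Float>>.stride
var pixels = [SIMD4<Float>](repeating: SIMD4<Float>(repeating: 0), count: count)

pixels.withUnsafeMutableBytes { raw in
    texture.getBytes(raw.baseAddress!,
                     bytesPerRow: bytesPerRow,
                     from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                     mipmapLevel: 0)
}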
Metal³
41,574,498
12
Whenever I build a project that includes a metal shader to an x86_64 target (iOS simulator), I get a dependency analysis warning: warning: no rule to process file '[File Path]/Shaders.metal' of type sourcecode.metal for architecture x86_64 I know this isn't a huge issue but I like to keep my projects free from warnings when I build, so that when a real issue does arise, I actually notice the yellow warning triangle. Any quick way to get Xcode to ignore metal files for simulator targets?
You can resolve this by precompiling your .metal file into a Metal library during the build step, and removing the .metal source code from your app target. Remove .metal file from target Select your .metal file in the project navigator, and uncheck the target that is giving you the warning. Metal library compile script Create a bash script called CompileMetalLib.sh in your project, alongside your .metal file, with contents like this: xcrun -sdk iphoneos metal -c MyShader.metal -o MyShader.air xcrun -sdk iphoneos metallib MyShader.air -o MyShader.metallib rm MyShader.air Make sure to give it executable permissions by running chmod +x CompileMetalLib.sh. MyShader.air is the intermediate compile step, and MyShader.metallib is the fully compiled metal library. Read all about compiling a Metal file here If you're compiling for OS X, change iphoneos to macosx. Run compile script during build Now you'll want to trigger this script in your Build Phases. Add a New Run Script Phase to your target. The contents should look like this: cd ${SRCROOT}/path/to/folder/containing/yourshader ./CompileMetalLib.sh It's important to drag this step so that it happens before the Copy Bundle Resources step. Change your code to use your compiled Metal library You'll now load the compiled Metal library from your app bundle. Here's some pseudo-Swift: let metalLibPath = Bundle.main.path(forResource: "MyShader", ofType: "metallib") let myLibrary = try metalDevice.makeLibrary(filepath: metalLibPath) Result You are manually compiling your .metal file with an external script, and copying the compiled library into your bundle resources. Your code loads this library. Now that you don't have any .metal files in your target, the simulator will no longer throw a warning about not being able to compile for x86_64.
Metal³
46,254,475
12
I am going through the Metal iOS Swift example trying to understand the triple buffering practice they suggest. This is shown inside the demo for the uniform animations. As I understand it, aligned memory simply starts at a specific increment that is a multiple of some byte amount that the device prefers. My confusion is this line of code // The 256 byte aligned size of our uniform structure let alignedUniformsSize = (MemoryLayout<Uniforms>.size & ~0xFF) + 0x100 which they use to find the aligned size, in bytes, of the Uniforms struct. I am confused about why there are bitwise operations here; I am really not sure what they do. If it helps, this aligned size is used to create a buffer like this. I am fairly sure that buffer allocates byte-aligned memory automatically and is henceforth used as the memory storage location for the uniforms. let buffer = self.device.makeBuffer(length:alignedUniformsSize * 3, options:[MTLResourceOptions.storageModeShared]) So essentially, rather than going through the trouble of allocating byte-aligned memory yourself, they let Metal do it for them. Is there any reason that the strategy they used when they wrote let alignedUniformsSize = would not work for other types such as Int or Float etc.?
Let's talk first about why you'd want aligned buffers, then we can talk about the bitwise arithmetic. Our goal is to allocate a Metal buffer that can store three (triple-buffered) copies of our uniforms (so that we can write to one part of the buffer while the GPU reads from another). In order to read from each of these three copies, we supply an offset when binding the buffer, something like currentBufferIndex * uniformsSize. Certain Metal devices require these offsets to be multiples of 256, so we instead need to use something like currentBufferIndex * alignedUniformsSize as our offset. How do we "round up" an integer to the next highest multiple of 256? We can do it by dropping the lowest 8 bits of the "unaligned" size, effectively rounding down, then adding 256, which gets us the next highest multiple. The rounding down part is achieved by bitwise ANDing with the 1's complement (~) of 255, which (in 32-bit) is 0xFFFFFF00. The rounding up is done by just adding 0x100, which is 256. Interestingly, if the base size is already aligned, this technique spuriously rounds up anyway (e.g., from 256 to 512). For the cost of an integer divide, you can avoid this waste: let alignedUniformsSize = ((MemoryLayout<Uniforms>.size + 255) / 256) * 256
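To make the purpose of the alignment concrete, here is a rough sketch of how the aligned size typically gets used each frame. The Uniforms struct here is only a stand-in, and names like bindUniforms are not from Apple's template:
import Metal
import simd

struct Uniforms { var modelViewProjectionMatrix: simd_float4x4 }   // stand-in for the template's struct

let maxBuffersInFlight = 3
let alignedUniformsSize = ((MemoryLayout<Uniforms>.size + 255) / 256) * 256

// The backing buffer would be created once, e.g.:
// let buffer = device.makeBuffer(length: alignedUniformsSize * maxBuffersInFlight,
//                                options: .storageModeShared)!

func bindUniforms(_ uniforms: Uniforms,
                  frameIndex: Int,
                  buffer: MTLBuffer,
                  encoder: MTLRenderCommandEncoder) {
    // Write into this frame's slot; the GPU may still be reading the other two slots.
    let offset = alignedUniformsSize * (frameIndex % maxBuffersInFlight)
    buffer.contents()
        .advanced(by: offset)
        .bindMemory(to: Uniforms.self, capacity: 1)
        .pointee = uniforms
    // The whole point of rounding up to 256: this offset must be 256-byte aligned
    // on some GPUs when used with setVertexBuffer.
    encoder.setVertexBuffer(buffer, offset: offset, index: 1)
}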
Metal³
46,431,114
12
I have a Metal fragment shader that returns some transparent colors with an alpha channel, and I'd like to reveal a UIView under the MTKView, but the only background result I get is black and "error noise". MTLRenderPipelineDescriptor: pipelineStateDescriptor.isAlphaToCoverageEnabled = true pipelineStateDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm pipelineStateDescriptor.colorAttachments[0].isBlendingEnabled = true pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha MTLRenderPassDescriptor: renderPassDescriptor.colorAttachments[0].loadAction = .clear renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0) If I change the clear color I can see it under the transparent colors, though if I skip the clear color I see "error noise". Does the clear color alpha channel actually do anything? Does anyone know how to make an MTKView transparent? Update: Here's the magical property to make an MTKView transparent: self.isOpaque = false
If a UIView may have transparent content or otherwise fail to fill itself with opaque drawing, then it should set its opaque (isOpaque) property to false so that it will be properly composited with whatever is behind it. Since MTKView is a subclass of UIView, this applies to it as well.
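In code, the minimal configuration (matching the question's own update) looks something like this, assuming metalView is your MTKView:
metalView.isOpaque = false   // let views behind the MTKView show through
metalView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)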
Metal³
47,643,974
12
What I am Trying to Do I am trying to show filters on a camera feed by using a Metal view: MTKView. I am closely following the method of Apple's sample code - Enhancing Live Video by Leveraging TrueDepth Camera Data (link). What I Have So Far Following code works great (mainly interpreted from above-mentioned sample code) : class MetalObject: NSObject, MTKViewDelegate { private var metalBufferView : MTKView? private var metalDevice = MTLCreateSystemDefaultDevice() private var metalCommandQueue : MTLCommandQueue! private var ciContext : CIContext! private let colorSpace = CGColorSpaceCreateDeviceRGB() private var videoPixelBuffer : CVPixelBuffer? private let syncQueue = DispatchQueue(label: "Preview View Sync Queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem) private var textureWidth : Int = 0 private var textureHeight : Int = 0 private var textureMirroring = false private var sampler : MTLSamplerState! private var renderPipelineState : MTLRenderPipelineState! private var vertexCoordBuffer : MTLBuffer! private var textCoordBuffer : MTLBuffer! private var internalBounds : CGRect! private var textureTranform : CGAffineTransform? private var previewImage : CIImage? init(with frame: CGRect) { super.init() self.metalBufferView = MTKView(frame: frame, device: self.metalDevice) self.metalBufferView!.contentScaleFactor = UIScreen.main.nativeScale self.metalBufferView!.framebufferOnly = true self.metalBufferView!.colorPixelFormat = .bgra8Unorm self.metalBufferView!.isPaused = true self.metalBufferView!.enableSetNeedsDisplay = false self.metalBufferView!.delegate = self self.metalCommandQueue = self.metalDevice!.makeCommandQueue() self.ciContext = CIContext(mtlDevice: self.metalDevice!) //Configure Metal let defaultLibrary = self.metalDevice!.makeDefaultLibrary()! let pipelineDescriptor = MTLRenderPipelineDescriptor() pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm pipelineDescriptor.vertexFunction = defaultLibrary.makeFunction(name: "vertexPassThrough") pipelineDescriptor.fragmentFunction = defaultLibrary.makeFunction(name: "fragmentPassThrough") // To determine how our textures are sampled, we create a sampler descriptor, which // will be used to ask for a sampler state object from our device below. let samplerDescriptor = MTLSamplerDescriptor() samplerDescriptor.sAddressMode = .clampToEdge samplerDescriptor.tAddressMode = .clampToEdge samplerDescriptor.minFilter = .linear samplerDescriptor.magFilter = .linear sampler = self.metalDevice!.makeSamplerState(descriptor: samplerDescriptor) do { renderPipelineState = try self.metalDevice!.makeRenderPipelineState(descriptor: pipelineDescriptor) } catch { fatalError("Unable to create preview Metal view pipeline state. (\(error))") } } final func update (newVideoPixelBuffer: CVPixelBuffer?) 
{ self.syncQueue.async { var filteredImage : CIImage self.videoPixelBuffer = newVideoPixelBuffer //--------- //Core image filters //Strictly CIFilters, chained together //--------- self.previewImage = filteredImage //Ask Metal View to draw self.metalBufferView?.draw() } } //MARK: - Metal View Delegate final func draw(in view: MTKView) { print (Thread.current) guard let drawable = self.metalBufferView!.currentDrawable, let currentRenderPassDescriptor = self.metalBufferView!.currentRenderPassDescriptor, let previewImage = self.previewImage else { return } // create a texture for the CI image to render to let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor( pixelFormat: .bgra8Unorm, width: Int(previewImage.extent.width), height: Int(previewImage.extent.height), mipmapped: false) textureDescriptor.usage = [.shaderWrite, .shaderRead] let texture = self.metalDevice!.makeTexture(descriptor: textureDescriptor)! if texture.width != textureWidth || texture.height != textureHeight || self.metalBufferView!.bounds != internalBounds { setupTransform(width: texture.width, height: texture.height, mirroring: mirroring, rotation: rotation) } // Set up command buffer and encoder guard let commandQueue = self.metalCommandQueue else { print("Failed to create Metal command queue") return } guard let commandBuffer = commandQueue.makeCommandBuffer() else { print("Failed to create Metal command buffer") return } // add rendering of the image to the command buffer ciContext.render(previewImage, to: texture, commandBuffer: commandBuffer, bounds: previewImage.extent, colorSpace: self.colorSpace) guard let commandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor) else { print("Failed to create Metal command encoder") return } // add vertex and fragment shaders to the command buffer commandEncoder.label = "Preview display" commandEncoder.setRenderPipelineState(renderPipelineState!) commandEncoder.setVertexBuffer(vertexCoordBuffer, offset: 0, index: 0) commandEncoder.setVertexBuffer(textCoordBuffer, offset: 0, index: 1) commandEncoder.setFragmentTexture(texture, index: 0) commandEncoder.setFragmentSamplerState(sampler, index: 0) commandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4) commandEncoder.endEncoding() commandBuffer.present(drawable) // Draw to the screen commandBuffer.commit() } final func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) { } } Notes The reason MTKViewDelegate is used instead of subclassing MTKView is that when it was subclassed, the draw call was called on the main thread. With the delegate method shown above, it seems to be a different metal related thread call each loop. Above method seem to give much better performance. Details on CIFilter usage on update method above had to be redacted. All it is a heavy chain of CIFilters stacked. Unfortunately there is no room for any tweaks with these filters. Issue Above code seems to slow down the main thread a lot, causing rest of the app UI to be choppy. For example, scrolling a UIScrollview gets seem to be slow and choppy. Goal Tweak Metal view to ease up on CPU and go easy on the main thread to leave enough juice for rest of the UI. According to the above graphics, preparation of command buffer is all done in CPU until presented and committed(?). Is there a way to offload that from CPU? Any hints, feedback, tips, etc to improve the drawing efficiency would be appreciated.
There are a few things you can do to improve the performance: Render into the view’s drawable directly instead of rendering into a texture and then rendering again to render that texture into the view. Use the newish CIRenderDestination API to defer the actual texture retrieval to the moment the view is actually rendered to (i.e. when Core Image is done). Here’s the draw(in view: MTKView) I’m using in my Core Image project, modified for your case: public func draw(in view: MTKView) { if let currentDrawable = view.currentDrawable, let commandBuffer = self.commandQueue.makeCommandBuffer() { let drawableSize = view.drawableSize // optional: scale the image to fit the view let scaleX = drawableSize.width / image.extent.width let scaleY = drawableSize.height / image.extent.height let scale = min(scaleX, scaleY) let scaledImage = previewImage.transformed(by: CGAffineTransform(scaleX: scale, y: scale)) // optional: center in the view let originX = max(drawableSize.width - scaledImage.extent.size.width, 0) / 2 let originY = max(drawableSize.height - scaledImage.extent.size.height, 0) / 2 let centeredImage = scaledImage.transformed(by: CGAffineTransform(translationX: originX, y: originY)) // create a render destination that allows to lazily fetch the target texture // which allows the encoder to process all CI commands _before_ the texture is actually available; // this gives a nice speed boost because the CPU doesn’t need to wait for the GPU to finish // before starting to encode the next frame let destination = CIRenderDestination(width: Int(drawableSize.width), height: Int(drawableSize.height), pixelFormat: view.colorPixelFormat, commandBuffer: commandBuffer, mtlTextureProvider: { () -> MTLTexture in return currentDrawable.texture }) let task = try! self.context.startTask(toRender: centeredImage, to: destination) // bonus: you can Quick Look the task to see what’s actually scheduled for the GPU commandBuffer.present(currentDrawable) commandBuffer.commit() // optional: you can wait for the task execution and Quick Look the info object to get insights and metrics DispatchQueue.global(qos: .background).async { let info = try! task.waitUntilCompleted() } } } If this is still too slow, you can try setting the priorityRequestLow CIContextOption when creating your CIContext to tell Core Image to render in low priority.
Metal³
55,769,612
12
I'm an Objective-C / Swift software developer (in training), currently with an application in the App Store. But recently I have become really interested in Metal, Apple's new alternative to OpenGL. I'm just not exactly sure how to begin... Apple's documentation is handy, but only if you really know what you are doing in the first place. Are there any good tips for getting started? I really haven't explored the area of 3D game programming, so would you recommend I learn another language first? And if so, which? I'm just looking for good tutorial books or sites that go in depth on the how and why. I like to understand what exactly I'm doing instead of just typing in code, but there are so many languages and beginner's books that I really don't know where to start... Can anyone help me with this?
Metal is a newer graphics API, so if you are new to graphics and 3D game programming you may want to start with OpenGL, especially since there are many textbooks that teach the fundamentals of graphics using OpenGL. As for a web page on OpenGL, one of my favorites is songho. Books on OpenGL ES 2.0 and OpenGL ES 3.0 include sections on programming with OpenGL ES for iOS. If you would specifically like to explore the Metal API itself, there are multiple videos from WWDC 2014 that cover the basics and include demos of how to work with Metal in Xcode. An introductory article that covers the what and why of Metal is the objc.io one, and a more detailed one is metalbyexample. To understand in depth how these graphics APIs work, you need a grasp of GPU architecture. Real-Time Rendering is probably the best book on the subject. These lecture videos from Prof. John Owens at UC Davis also describe the architecture in a clear and concise manner.
Metal³
28,229,199
11
I am trying to let a neural net run on Metal. The basic idea is that of data duplication. Each GPU thread runs one version of the net for random data points. I have written other shaders that work fine. I also tried my code in a C++ command line app. No errors there. There is also no compile error. I used the Apple documentation to convert to Metal C++, since not everything from C++11 is supported. It crashes after it loads the kernel function, when it tries to create the pipeline state with newComputePipelineStateWithFunction on the Metal device. This means there is a problem with the code that isn't caught at compile time. MCVE: kernel void net(const device float *inputsVector [[ buffer(0) ]], // layout of net * uint id [[ thread_position_in_grid ]]) { uint floatSize = sizeof(tempFloat); uint inputsVectorSize = sizeof(inputsVector) / floatSize; float newArray[inputsVectorSize]; float test = inputsVector[id]; newArray[id] = test; } Update It has everything to do with dynamic arrays. Since it fails to create the pipeline state and doesn't crash running the actual shader, it must be a coding issue, not an input issue. Assigning values from a dynamic array to a buffer makes it fail.
The real problem: It is a memory issue! To all the people saying that it was a memory issue, you were right! Here is some pseudo code to illustrate it. Sorry that it is in "Swift" but easier to read. Metal Shaders have a funky way of coming to life. They are first initialised without values to get the memory. It was this step that failed because it relied on a later step: setting the buffer. It all comes down to which values are available when. My understanding of newComputePipelineStateWithFunction was wrong. It is not simply getting the shader function. It is also a tiny step in the initialising process. class MetalShader { // buffers var aBuffer : [Float] var aBufferCount : Int // step One : newComputePipelineStateWithFunction memory init() { // assign shader memory // create memory for one int let aStaticValue : Int // create memory for one int var aNotSoStaticValue : Int // this wil succeed, assigns memory for one int // create memory for 10 floats var aStaticArray : [Float] = [Float](count: aStaticValue, repeatedValue: y) // this will succeed // create memory for x floats var aDynamicArray : [Float] = [Float](count: aBuffer.count, repeatedValue: y) // this will fail var aDynamicArray : [Float] = [Float](count: aBufferCount, repeatedValue: y) // this will fail let tempValue : Float // one float from a loop } // step Two : commandEncoder.setBuffer() assign buffers (buffers) { aBuffer = cpuMemoryBuffer } // step Three : commandEncoder.endEncoding() actual init() { // set shader values let aStaticValue : Int = 0 var aNotSoStaticValue : Int = aBuffer.count var aDynamicArray : [Float] = [Float](count: aBuffer.count, repeatedValue: 1) // this could work, but the app already crashed before getting to this point. } // step Four : commandBuffer.commit() func shaderFunction() { // do stuff for i in 0..<aBuffer.count { let tempValue = aBuffer[i] } } } Fix: I finally realised that buffers are technically dynamic arrays and instead of creating arrays inside the shader, I could also just add more buffers. This obviously works.
Metal³
32,193,726
11
I'm seeing very inconsistent frame rates in the SceneKit starter project. Sometimes it runs constantly at 60 fps (12ms rendering, 6ms metal flush), and sometimes it runs constantly at 40 fps (20ms rendering, 6ms metal flush), no more, no less. The frame rate changes randomly when I reopen the app, and will stay at that frame rate until the next reopen. I tried switching to OpenGL ES, and while it seems to fix it in the starter project, I still see those drops in my real app. The starter project is unmodified (rotating ship), and I'm testing it with Xcode 7.0 and an iPad Mini 4 running iOS 9.0.1. I'm not sure what is causing the problem: SceneKit, iOS, or my device. Edit: Here is a Metal system trace. In the first part it was running at 60fps; in the second part I pressed the home button, reopened the app, and it ran at 40fps. It looks like there are a lot of color load/stores in the second part.
Unfortunately it looks like SceneKit (and SpriteKit) are in evolutionary stages of development, at the expense of those using them. This problem is definitely on all devices, and the following frameworks, that I know of: SceneKit SpriteKit Metal Even using OpenGL instead of Metal in the game frameworks the problem still exists, with no less consistency. It looks to be an attempt by iOS to fix the frame rate at 40fps if iOS determines there's an issue maintaining a steady 60fps. I think the cause of the drop to 40fps is iOS not being very good at interpreting "problems", and doing performance sampling over too short of a period at an unstable point in the app's launch, given many false positives for problems that aren't there once iOS itself has actually settled down and let the app/game run without hindrance. The default template with the jetFighter shouldn't ever have trouble running at 60fps. So it only makes sense that this framerate cap "feature" would become active if the polling by iOS to determine when to cap the game loop at 40fps is done too early in the launch, for too short of a time. This means any interruption in the first few frames of the game causes iOS to cap it at 40fps, pre-emptively thinking the game won't/can't maintain 60fps. Ironically, iOS is probably the cause of the hiccups it's detecting at the launch of the game that cause it to then consider the app unable to maintain a stable 60fps. BUT I AM SPECULATING! This is based on observation, not any known facts on the matter. But it's consistent with what I'm seeing happening and the only reasonable explanation I have so far. The "good news" is iOS is not sampling only once and leaving it. It samples the game spasmodically, and after interruptions like jumping out to the home screen and back into the app. For example: it's possible to cause a resampling of the framerate by iOS, and cause it to jump from 40 to 60, or 60 to 40, simply by starting Quicktime screenCapture whilst your device is connected. Apparently this (and a few other actions) will cause iOS to test the running app for its framerate consistency, again, then iOS adjusts according to its findings, again. And, after an arbitrary amount of time, it scans again. If you leave the JetFighter template running for a while, you'll also see that eventually iOS does another test of the framerate consistency, and often determines that it's now stable enough at 60fps to put it back up to 60fps, despite having initially decided it should only run at 40fps. I say all of this because I've watched a thing called "renderer" in the stats on the device deliberately taking up exactly the right amount of extra time in each gameloop to force 40fps, even when there's nowhere near enough other things going on to make that necessary. It occurs to me that Apple is working on variable frame rate technology as per their statements about the iPad Pro, and the iOS features to support that have been (seemingly) implemented ahead of the release of the screen technology, and badly and oddly testing running apps to determine when to forcibly roll them down to slower frame rates. Given that 40fps is an odd number not equally dividing into the default refresh rate of current devices at their 60fps refresh rate, it's likely the iPad Pro is capable of 120Hz screen refresh if they're so interested in 40fps. When capturing from current iPads, if it's framerate locked at 40fps by iOS I'm seeing a 2:1:2:1:2:1 frame sequence that's how you'd make 40fps on a 60Hz refreshing device. 
That is in no way ideal. You wouldn't ever want to see this on a 60Hz screen because it's visually annoying, even for people with insensitive eyes. Possibly this variable framerate technology permits true 40fps on the new iPhones; I don't know. I haven't seen anything tested yet, but it does seem odd that something likely only truly possible on the iPad Pro is causing this issue on everything at the moment.
Metal³
32,821,033
11
Can I debug my metal-shading-language code using console output (like print in Swift)? If yes, how? If no, are there any other ways to output variables from my .metal file? (Maybe by passing data from the .metal file to my .swift file using the commandEncoder-buffer?) I have unsuccessfully tried to pass a reference to an Int variable (which is in my .swift file) via the commandEncoder to my .metal file. In the .metal file, I assign a value to the int variable, but if I print the Int in my swift file, the assigned value is not there. .swift file: ... var myMetalOutput: Int = 0 ... let printBuffer = device.newBufferWithBytes(&myMetalOutput, length: sizeof(Int), options: MTLResourceOptions.CPUCacheModeDefaultCache) commandEncoder.setBuffer(printBuffer, offset: 0, atIndex: 8) ... commandBuffer.commit() drawable.present() print("myMetalOutput: \(myMetalOutput)") ... .metal file: ... kernel void shader(..., device int &printBuffer [[8]], ...) { ... printBuffer = 123; ... } The console output is always myMetalOutput: 0
Here's a working solution in case somebody needs it: let device = MTLCreateSystemDefaultDevice()! let commandQueue = device.newCommandQueue() let defaultLibrary = device.newDefaultLibrary()! let commandBuffer = commandQueue.commandBuffer() let computeCommandEncoder = commandBuffer.computeCommandEncoder() let program = defaultLibrary.newFunctionWithName("shader") do { let computePipelineFilter = try device.newComputePipelineStateWithFunction(program!) computeCommandEncoder.setComputePipelineState(computePipelineFilter) var resultdata = [Int](count: 1, repeatedValue: 0) let outVectorBuffer = device.newBufferWithBytes(&resultdata, length: sizeofValue(1), options: MTLResourceOptions.CPUCacheModeDefaultCache) computeCommandEncoder.setBuffer(outVectorBuffer, offset: 0, atIndex: 0) let threadsPerGroup = MTLSize(width:1,height:1,depth:1) let numThreadgroups = MTLSize(width:1, height:1, depth:1) computeCommandEncoder.dispatchThreadgroups(numThreadgroups, threadsPerThreadgroup: threadsPerGroup) computeCommandEncoder.endEncoding() commandBuffer.addCompletedHandler {commandBuffer in let data = NSData(bytes: outVectorBuffer.contents(), length: sizeof(NSInteger)) var out: NSInteger = 0 data.getBytes(&out, length: sizeof(NSInteger)) print("data: \(out)") } commandBuffer.commit() } catch { fatalError("newComputePipelineStateWithFunction failed ") } The shader: kernel void shader(device int &printBuffer [[buffer(0)]], uint id [[ thread_position_in_grid ]]) { printBuffer = 123; }
Metal³
35,985,353
11
I'm looking for just a working Metal shader that works in SceneKit with SCNProgram. Can someone show me the correct method declarations/how to hook this up? let program = SCNProgram() program.vertexFunctionName = "myVertex" program.fragmentFunctionName = "myFragment" material.program = program and then the shader //MyShader.metal vertex something myVertex(something) { return something; } fragment float4 myFragment(something) { return something } I'm just looking for the most basic example please.
I clipped out all the 'unnecessary' stuff, this is about as basic as it gets and pretty much what my first Metal shader was. Next I'd start looking into wiring up the other vertex attributes (colour, normals), and maybe do some basic lighting calculations. #include <metal_stdlib> using namespace metal; #include <SceneKit/scn_metal> struct MyNodeBuffer { float4x4 modelTransform; float4x4 modelViewTransform; float4x4 normalTransform; float4x4 modelViewProjectionTransform; }; typedef struct { float3 position [[ attribute(SCNVertexSemanticPosition) ]]; } MyVertexInput; struct SimpleVertex { float4 position [[position]]; }; vertex SimpleVertex myVertex(MyVertexInput in [[ stage_in ]], constant SCNSceneBuffer& scn_frame [[buffer(0)]], constant MyNodeBuffer& scn_node [[buffer(1)]]) { SimpleVertex vert; vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0); return vert; } fragment half4 myFragment(SimpleVertex in [[stage_in]]) { half4 color; color = half4(1.0 ,0.0 ,0.0, 1.0); return color; } Apologies for any typos, edited it down on my phone...
Metal³
37,697,939
11
When trying to use Metal to rapidly draw pixel buffers to the screen from memory, we create MTLBuffer objects using MTLDevice.makeBuffer(bytesNoCopy:..) to allow the GPU to directly read the pixels from memory without having to copy it. Shared memory is really a must-have for achieving good pixel transfer performance. The catch is that makeBuffer requires a page-aligned memory address and a page aligned length. Those requirements are not only in the documentation -- they are also enforced using runtime assertions. The code I am writing has to deal with a variety of incoming resolutions and pixel formats, and occasionally I get unaligned buffers or unaligned lengths. After researching this I discovered a hack that allows me to use shared memory for those instances. Basically what I do is I round the unaligned buffer address down to the nearest page boundary, and use the offset parameter from makeTexture to ensure that the GPU starts reading from the right place. Then I round up length to the nearest page size. Obviously that memory is going to be valid (because allocations can only occur on page boundaries), and I think it's safe to assume the GPU isn't writing to or corrupting that memory. Here is the code I'm using to allocate shared buffers from unaligned buffers: extension MTLDevice { func makeTextureFromUnalignedBuffer(textureDescriptor : MTLTextureDescriptor, bufferPtr : UnsafeMutableRawPointer, bufferLength : UInt, bytesPerRow : Int) -> MTLTexture? { var calculatedBufferLength = bufferLength let pageSize = UInt(getpagesize()) let pageSizeBitmask = UInt(getpagesize()) - 1 let alignedBufferAddr = UnsafeMutableRawPointer(bitPattern: UInt(bitPattern: bufferPtr) & ~pageSizeBitmask) let offset = UInt(bitPattern: bufferPtr) & pageSizeBitmask assert(bytesPerRow % 64 == 0 && offset % 64 == 0, "Supplied bufferPtr and bytesPerRow must be aligned on a 64-byte boundary!") calculatedBufferLength += offset if (calculatedBufferLength & pageSizeBitmask) != 0 { calculatedBufferLength &= ~(pageSize - 1) calculatedBufferLength += pageSize } let buffer = self.makeBuffer(bytesNoCopy: alignedBufferAddr!, length: Int(calculatedBufferLength), options: .storageModeShared, deallocator: nil) return buffer.makeTexture(descriptor: textureDescriptor, offset: Int(offset), bytesPerRow: bytesPerRow) } } I've tested this on numerous different buffers and it seems to work perfectly (only tested on iOS, not on macOS). My question is: Is this approach safe? Any obvious reasons why this wouldn't work? Then again, if it is safe, why were the requirements imposed in the first place? Why isn't the API just doing this for us?
I have submitted an Apple TSI (Technical Support Incident) for this question, and the answer is basically yes, it is safe. Here is the exact response in case anyone is interested: After discussing your approach with engineering we concluded that it was valid and safe. Some noteworthy quotes: “The framework shouldn’t care about the fact that the user doesn’t own the entire page, because it shouldn’t ever read before the offset where the valid data begins.” “It really shouldn’t [care], but in general if the developer can use page-allocators rather than malloc for their incoming images, that would be nice.” As to why the alignment constraints/assertions are in place: “Typically mapping memory you don’t own into another address space is a bit icky, even if it works in practice. This is one reason why we required mapping to be page aligned, because the hardware really is mapping (and gaining write access) to the entire page.”
Metal³
39,951,878
11
I'm recording the screen from my iPhone device to my Mac. As a preview layer, I am collecting sample buffers directly from an AVCaptureVideoDataOutput, from which I'm creating textures and rendering them with Metal. The problem I'm having is that code that worked in macOS prior to 10.13 stopped working after updating to 10.13. Namely, CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(_currentSampleBuffer); if (!imageBuffer) return; CVPixelBufferLockBaseAddress(imageBuffer,0); size_t width = CVPixelBufferGetWidth(imageBuffer); size_t height = CVPixelBufferGetHeight(imageBuffer); CVMetalTextureRef metalTexture = NULL; CVReturn result = CVMetalTextureCacheCreateTextureFromImage(nil, self.textureCache, imageBuffer, nil, self.pixelFormat, width, height, 0, &metalTexture); if (result == kCVReturnSuccess) { self.texture = CVMetalTextureGetTexture(metalTexture); } Returns result = -6660, which translates to a generic kCVReturnError, as can be seen on the official Apple docs, and the metalTexture = NULL. The pixel format I'm using is MTLPixelFormatBGRG422 since the samples coming from the camera are 2vuy. As a workaround to creating metalTexture from sampleBuffer, I am now creating an intermediate NSImage like so: CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(_currentSampleBuffer); NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]]; NSImage *image = [[NSImage alloc] initWithSize:[imageRep size]]; [image addRepresentation:imageRep]; and creating a MTLTexture from that. That is obviously a subpar solution to using CVMetalTextureCacheCreateTextureFromImage directly. Once again, the code in question works perfectly fine in macOS < 10.13, I'd like to know if anyone has similar issues, and if so, do you have any ideas how to overcome this?
I've come across the same issue, the problem was not asking for Metal compatibility when configuring the AVCaptureVideoDataOutput. I guess the system started to check this in macOS 10.13, possibly to apply some optimization when not requested. The solution was to add the kCVPixelBufferMetalCompatibilityKey to the videoSettings property of AVCaptureVideoDataOutput. In Objective-C: outputCapture.videoSettings = @{ /* ... */ (NSString *)kCVPixelBufferMetalCompatibilityKey: @YES }; In Swift: outputCapture.videoSettings = [ /* ... */ kCVPixelBufferMetalCompatibilityKey as String: true ] I think this warrants a radar, to ask Apple to at least print a warning message when this occurs. I'll update this if I get to it.
Metal³
46,549,906
11
If you're using iOS Metal, and you have custom programmatic object models, and (like me) your models stopped working with the advent of Metal2 and iOS 11, then you've probably started looking into how to programmatically generate an MDLMesh. Apple documentation says, "Typically, you obtain meshes by traversing the object hierarchy of a MDLAsset object, but you can also create meshes from your own vertex data or create parametric meshes." Unfortunately, it offers no instructions or sample code. You do quickly find the twin MDLMesh initialization calls, initWithVertexBuffer, and initWithVertexBuffers. Just as quickly you find no sample code or discussion on the web... at least I was not successful finding any. As it is not intuitively obvious to this casual observer how that should be done, code samples are herewith requested.
There are plenty of examples creating an MDLMesh using one of the factory parametric methods for e.g., a cube: [MDLMesh newBoxWithDimensions:... Using the most simple of these, for a "plane" (a rectangle), I generated a 1x1 rect with the minimum number of vertices: MDLMesh *mdlMesh = [MDLMesh newPlaneWithDimensions:(vector_float2){1.0, 1.0} segments:(vector_uint2){1, 1} geometryType:MDLGeometryTypeTriangles allocator:metalAllocator]; I then used the Xcode debugger to investigate what the resulting MDLMesh looked like, as a way to guide my creation of an even simpler object, a programmatic equilateral triangle. The following code works for me. I'm sure folks with more Metal savvy than me can offer better solutions. But this will hopefully get you started, in some semblance of the right direction... So until there is a new factory parametric method for [MDLMesh newEquilateralTriangleWithEdgeLength:... the following code seems to do the trick... static const float equilateralTriangleVertexData[] = { 0.000000, 0.577350, 0.0, -0.500000, -0.288675, 0.0, 0.500000, -0.288675, 0.0, }; static const vector_float3 equilateralTriangleVertexNormalsData[] = { { 0.0, 0.0, 1.0 }, { 0.0, 0.0, 1.0 }, { 0.0, 0.0, 1.0 }, }; static const vector_float2 equilateralTriangleVertexTexData[] = { { 0.50, 1.00 }, { 0.00, 0.00 }, { 1.00, 0.00 }, }; int numVertices = 3; int lenBufferForVertices_position = sizeof(equilateralTriangleVertexData); int lenBufferForVertices_normal = numVertices * sizeof(vector_float3); int lenBufferForVertices_textureCoordinate = numVertices * sizeof(vector_float2); MTKMeshBuffer *mtkMeshBufferForVertices_position = (MTKMeshBuffer *)[metalAllocator newBuffer:lenBufferForVertices_position type:MDLMeshBufferTypeVertex]; MTKMeshBuffer *mtkMeshBufferForVertices_normal = (MTKMeshBuffer *)[metalAllocator newBuffer:lenBufferForVertices_normal type:MDLMeshBufferTypeVertex]; MTKMeshBuffer *mtkMeshBufferForVertices_textureCoordinate = (MTKMeshBuffer *)[metalAllocator newBuffer:lenBufferForVertices_textureCoordinate type:MDLMeshBufferTypeVertex]; // Now fill the Vertex buffers with vertices. 
NSData *nsData_position = [NSData dataWithBytes:equilateralTriangleVertexData length:lenBufferForVertices_position]; NSData *nsData_normal = [NSData dataWithBytes:equilateralTriangleVertexNormalsData length:lenBufferForVertices_normal]; NSData *nsData_textureCoordinate = [NSData dataWithBytes:equilateralTriangleVertexTexData length:lenBufferForVertices_textureCoordinate]; [mtkMeshBufferForVertices_position fillData:nsData_position offset:0]; [mtkMeshBufferForVertices_normal fillData:nsData_normal offset:0]; [mtkMeshBufferForVertices_textureCoordinate fillData:nsData_textureCoordinate offset:0]; NSArray <id<MDLMeshBuffer>> *arrayOfMeshBuffers = [NSArray arrayWithObjects:mtkMeshBufferForVertices_position, mtkMeshBufferForVertices_normal, mtkMeshBufferForVertices_textureCoordinate, nil]; static uint16_t indices[] = { 0, 1, 2, }; int numIndices = 3; int lenBufferForIndices = numIndices * sizeof(uint16_t); MTKMeshBuffer *mtkMeshBufferForIndices = (MTKMeshBuffer *)[metalAllocator newBuffer:lenBufferForIndices type:MDLMeshBufferTypeIndex]; NSData *nsData_indices = [NSData dataWithBytes:indices length:lenBufferForIndices]; [mtkMeshBufferForIndices fillData:nsData_indices offset:0]; MDLScatteringFunction *scatteringFunction = [MDLPhysicallyPlausibleScatteringFunction new]; MDLMaterial *material = [[MDLMaterial alloc] initWithName:@"plausibleMaterial" scatteringFunction:scatteringFunction]; // Not allowed to create an MTKSubmesh directly, so feed an MDLSubmesh to an MDLMesh, and then use that to load an MTKMesh, which makes the MTKSubmesh from it. MDLSubmesh *submesh = [[MDLSubmesh alloc] initWithName:@"summess" // Hackspeke for @"submesh" indexBuffer:mtkMeshBufferForIndices indexCount:numIndices indexType:MDLIndexBitDepthUInt16 geometryType:MDLGeometryTypeTriangles material:material]; NSArray <MDLSubmesh *> *arrayOfSubmeshes = [NSArray arrayWithObjects:submesh, nil]; MDLMesh *mdlMesh = [[MDLMesh alloc] initWithVertexBuffers:arrayOfMeshBuffers vertexCount:numVertices descriptor:mdlVertexDescriptor submeshes:arrayOfSubmeshes];
Metal³
46,804,603
11
Given that the iPhone 6 Plus downscales from 1242x2208 to 1080x1920, and that UIKit is doing that, are there other ways of drawing to the screen that permit absolute (pixel-perfect) drawing without downscaling? I imagine, but am not sure, that OpenGL and Metal can draw pixel-perfect graphics on the iPhone 6 Plus, but I don't know how. I am also a little confused as to what Core Animation's coordinate systems mean, as I've read elsewhere that it might be able to do pixel-perfect drawing regardless of the point system.
For UIKit the non-integral scale factor doesn't usually matter. For OpenGL or Metal, use the new UIScreen nativeScale property to optimally determine the size of your framebuffer or drawable.
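As a rough illustration (metalLayer and view here are placeholders for your own CAMetalLayer-backed setup), the idea is to size the drawable in physical pixels rather than points:
import UIKit

// On an iPhone 6 Plus, scale is 3.0 but nativeScale is about 2.6, so this gives
// the true 1080x1920 framebuffer instead of the 1242x2208 canvas that gets downsampled.
let nativeScale = UIScreen.main.nativeScale
metalLayer.contentsScale = nativeScale
metalLayer.drawableSize = CGSize(width: view.bounds.width * nativeScale,
                                 height: view.bounds.height * nativeScale)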
Metal³
25,839,514
10
I'm interested in moving away from Xcode and manually compiling Metal shaders in a project for a mixed-language application. I have no idea how to do this, though. Xcode hides the details of shader compilation and subsequent loading into the application at runtime (you just call device.newDefaultLibrary()). Is this even possible, or will I have to use runtime shader compilation for my purposes?
Generally, you have three ways to load a shader library in Metal: Use runtime shader compilation from shader source code via the MTLDevice newLibraryWithSource:options:error: or newLibraryWithSource:options:completionHandler: methods. Although purists may shy away from runtime compilation, this option has minimal practical overhead, and so is completely viable. Your primary practical reason for avoiding this option might be to avoid making your shader source code available as part of your application, to protect your IP. Load compiled binary libraries using the MTLDevice newLibraryWithFile:error: or newLibraryWithData:error: methods. Follow the instructions in Using Command Line Utilities to Build a Library to create these individual binary libraries at build time. Let Xcode compile your various *.metal files at build time into the default library available through MTLDevice newDefaultLibrary.
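As a sketch, the three options in Swift (the method names are the Swift counterparts of the selectors above; the paths and source string are placeholders):
import Metal

let device = MTLCreateSystemDefaultDevice()!
do {
    // 1. Runtime compilation from source text.
    let source = try String(contentsOfFile: "Shaders.metal", encoding: .utf8)
    let runtimeLibrary = try device.makeLibrary(source: source, options: nil)

    // 2. A binary library precompiled outside Xcode with `xcrun metal` / `xcrun metallib`.
    let binaryLibrary = try device.makeLibrary(filepath: "Shaders.metallib")

    // 3. The default library Xcode builds from the .metal files in the target.
    let defaultLibrary = device.makeDefaultLibrary()

    print(runtimeLibrary.functionNames, binaryLibrary.functionNames,
          defaultLibrary?.functionNames ?? [])
} catch {
    print("Failed to load a Metal library: \(error)")
}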
Metal³
32,298,719
10
I'm currently trying to draw a graphic that will be animated using Metal in Swift. I have successfully drawn a single frame of my graphic. The graphic is simple, as you can see from this image. What I can't figure out is how to multisample the drawing. There seems to be few references on Metal in general, especially in regards to the Swift syntax. self.metalLayer = CAMetalLayer() self.metalLayer.device = self.device self.metalLayer.pixelFormat = .BGRA8Unorm self.metalLayer.framebufferOnly = true self.metalLayer.frame = self.view.frame self.view.layer.addSublayer(self.metalLayer) self.renderer = SunRenderer(device: self.device, frame: self.view.frame) let defaultLibrary = self.device.newDefaultLibrary() let fragmentProgram = defaultLibrary!.newFunctionWithName("basic_fragment") let vertexProgram = defaultLibrary!.newFunctionWithName("basic_vertex") let pipelineStateDescriptor = MTLRenderPipelineDescriptor() pipelineStateDescriptor.vertexFunction = vertexProgram pipelineStateDescriptor.fragmentFunction = fragmentProgram pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm pipelineStateDescriptor.colorAttachments[0].blendingEnabled = true pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperation.Add pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperation.Add pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactor.SourceAlpha pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactor.SourceAlpha pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactor.OneMinusSourceAlpha pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactor.OneMinusSourceAlpha The question, how do I smooth these edges? UPDATE: So I have implemented a MultiSample texture and set the sampleCount to 4. I don't notice any difference so I suspect I did something wrong. FINAL: So, in the end, it does appear the multisampling works. Initially, I had vertices wrapping these "rays" with a 0 alpha. This is a trick to make smoother edges. With these vertices, multisampling didn't seem to improve the edges. When I reverted back to have 4 vertices per ray, the multi-sampling improved their edges. 
let defaultLibrary = self.device.newDefaultLibrary() let fragmentProgram = defaultLibrary!.newFunctionWithName("basic_fragment") let vertexProgram = defaultLibrary!.newFunctionWithName("basic_vertex") let pipelineStateDescriptor = MTLRenderPipelineDescriptor() pipelineStateDescriptor.vertexFunction = vertexProgram pipelineStateDescriptor.fragmentFunction = fragmentProgram pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm pipelineStateDescriptor.colorAttachments[0].blendingEnabled = true pipelineStateDescriptor.sampleCount = 4 pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperation.Add pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperation.Add pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactor.SourceAlpha pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactor.SourceAlpha pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactor.OneMinusSourceAlpha pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactor.OneMinusSourceAlpha let desc = MTLTextureDescriptor() desc.textureType = MTLTextureType.Type2DMultisample desc.width = Int(self.view.frame.width) desc.height = Int(self.view.frame.height) desc.sampleCount = 4 desc.pixelFormat = .BGRA8Unorm self.sampletex = self.device.newTextureWithDescriptor(desc) // When rendering let renderPassDescriptor = MTLRenderPassDescriptor() renderPassDescriptor.colorAttachments[0].texture = sampletex renderPassDescriptor.colorAttachments[0].resolveTexture = drawable.texture renderPassDescriptor.colorAttachments[0].loadAction = .Clear renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 23/255.0, green: 26/255.0, blue: 31/255.0, alpha: 0.0) renderPassDescriptor.colorAttachments[0].storeAction = .MultisampleResolve let commandBuffer = commandQueue.commandBuffer() let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor) renderEncoder.setRenderPipelineState(pipelineState)
This is substantially simpler with MTKView (just set sampleCount to your desired number of MSAA samples on the view and the pipeline descriptor), but here are the steps for rolling your own. When creating a render pipeline state, set the sampleCount of your render pipeline state descriptor to your multisample count. At startup, and whenever the layer resizes, create a multisample texture with dimensions equal to your layer's drawable size by creating a texture descriptor whose textureType is MTLTextureType2DMultisample and whose sampleCount is your multisample count. If you are using a depth and/or stencil buffer, set these properties on their descriptors as well. When rendering, set the MSAA texture as the texture of the render pass descriptor's primary color attachment, and set the current drawable's texture as the resolveTexture. Set the storeAction of the color attachment to MTLStoreActionMultisampleResolve so that the MSAA texture is resolved into the renderbuffer at the end of the pass. Draw and present as you normally would.
Metal³
36,227,209
10
What's the correct way to generate a MTLTexture backed by a CVPixelBuffer? I have the following code, but it seems to leak: func PixelBufferToMTLTexture(pixelBuffer:CVPixelBuffer) -> MTLTexture { var texture:MTLTexture! let width = CVPixelBufferGetWidth(pixelBuffer) let height = CVPixelBufferGetHeight(pixelBuffer) let format:MTLPixelFormat = .BGRA8Unorm var textureRef : Unmanaged<CVMetalTextureRef>? let status = CVMetalTextureCacheCreateTextureFromImage(nil, videoTextureCache!.takeUnretainedValue(), pixelBuffer, nil, format, width, height, 0, &textureRef) if(status == kCVReturnSuccess) { texture = CVMetalTextureGetTexture(textureRef!.takeUnretainedValue()) } return texture }
Ah, I was missing: textureRef?.release()
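In context, the end of the function from the question would then look roughly like this (the release balances the +1 reference handed back through the Unmanaged result; strictly speaking the CVMetalTexture should be kept alive until the GPU has finished reading the MTLTexture, but this mirrors the fix above):
if(status == kCVReturnSuccess)
{
    texture = CVMetalTextureGetTexture(textureRef!.takeUnretainedValue())
    textureRef?.release()   // the missing call that caused the leak
}
return texture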
Metal³
37,445,052
10
I am new to MetalKit and trying to convert this tutorial from playground back to OSX app: import MetalKit public class MetalView: MTKView { var queue: MTLCommandQueue! = nil var cps: MTLComputePipelineState! = nil required public init(coder: NSCoder) { super.init(coder: coder) device = MTLCreateSystemDefaultDevice() registerShaders() } override public func drawRect(dirtyRect: NSRect) { super.drawRect(dirtyRect) if let drawable = currentDrawable { let command_buffer = queue.commandBuffer() let command_encoder = command_buffer.computeCommandEncoder() command_encoder.setComputePipelineState(cps) command_encoder.setTexture(drawable.texture, atIndex: 0) let threadGroupCount = MTLSizeMake(8, 8, 1) let threadGroups = MTLSizeMake(drawable.texture.width / threadGroupCount.width, drawable.texture.height / threadGroupCount.height, 1) command_encoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount) command_encoder.endEncoding() command_buffer.presentDrawable(drawable) command_buffer.commit() } } func registerShaders() { queue = device!.newCommandQueue() do { let library = device!.newDefaultLibrary()! let kernel = library.newFunctionWithName("compute")! cps = try device!.newComputePipelineStateWithFunction(kernel) } catch let e { Swift.print("\(e)") } } } I got an error at the line: command_encoder.setTexture(drawable.texture, atIndex: 0) failed assertion `frameBufferOnly texture not supported for compute.' How can I resolve this?
If you want to write to a drawable's texture from a compute function, you'll need to tell the MTKView that it should configure its layer not to be framebuffer-only: metalView.framebufferOnly = false With this value set to false, your drawable will give you a texture with the shaderWrite usage flag set, which is required when writing a texture from a shader function.
Metal³
39,206,935
10
With every implementation of a Metal-based ImageView I'm facing the same problem: let targetTexture = currentDrawable?.texture else{ return } Value of type 'MTLDrawable' has no member 'texture' It seems like Apple has changed some Metal API. Here is the full function I'm trying to use: func renderImage() { guard let image = image, let targetTexture = currentDrawable?.texture else{return} let commandBuffer = commandQueue.makeCommandBuffer() let bounds = CGRect(origin: CGPoint.zero, size: drawableSize) let originX = image.extent.origin.x let originY = image.extent.origin.y let scaleX = drawableSize.width / image.extent.width let scaleY = drawableSize.height / image.extent.height let scale = min(scaleX, scaleY) let scaledImage = image .applying(CGAffineTransform(translationX: -originX, y: -originY)) .applying(CGAffineTransform(scaleX: scale, y: scale)) ciContext.render(scaledImage, to: targetTexture, commandBuffer: commandBuffer, bounds: bounds, colorSpace: colorSpace) commandBuffer.present(currentDrawable!) commandBuffer.commit() }
I had the same problem after performing a system and Xcode update. It turns out that during the update process, Xcode switched the build target to the simulator. Once I switched the target back to the device, it all compiled again.
Metal³
41,916,306
10
I'm using an MTKView to draw Metal content. It's configured as follows: mtkView = MTKView(frame: self.view.frame, device: device) mtkView.colorPixelFormat = .bgra8Unorm mtkView.delegate = self mtkView.sampleCount = 4 mtkView.isPaused = true mtkView.enableSetNeedsDisplay = true setFrameSize is overridden to trigger a redisplay. Whenever the view resizes, it scales its old content before it redraws everything. This gives a jittering feeling. I tried setting the contentGravity property of the MTKView's layer to a non-resizing value, but that totally messes up the scale and position of the content. It seems MTKView doesn't want me to fiddle with that parameter. How can I make sure that during a resize the content is always properly redrawn?
In my usage of Metal and MTKView, I tried various combinations of presentsWithTransaction and waitUntilScheduled without success. I still experienced occasional frames of stretched content in between frames of properly rendered content during live resize. Finally, I dropped MTKView altogether and made my own NSView subclass that uses CAMetalLayer and resize looks good now (without any use of presentsWithTransaction or waitUntilScheduled). One key bit is that I needed to set the layer's autoresizingMask to get the displayLayer method to be called every frame during window resize. Here's the header file: #import <Cocoa/Cocoa.h> @interface MyMTLView : NSView<CALayerDelegate> @end Here's the implementation: #import <QuartzCore/CAMetalLayer.h> #import <Metal/Metal.h> @implementation MyMTLView - (id)initWithFrame:(NSRect)frame { if (!(self = [super initWithFrame:frame])) { return self; } // We want to be backed by a CAMetalLayer. self.wantsLayer = YES; // We want to redraw the layer during live window resize. self.layerContentsRedrawPolicy = NSViewLayerContentsRedrawDuringViewResize; // Not strictly necessary, but in case something goes wrong with live window // resize, this layer placement makes it more obvious what's going wrong. self.layerContentsPlacement = NSViewLayerContentsPlacementTopLeft; return self; } - (CALayer*)makeBackingLayer { CAMetalLayer* metalLayer = [CAMetalLayer layer]; metalLayer.device = MTLCreateSystemDefaultDevice(); metalLayer.delegate = self; // *Both* of these properties are crucial to getting displayLayer to be // called during live window resize. metalLayer.autoresizingMask = kCALayerHeightSizable | kCALayerWidthSizable; metalLayer.needsDisplayOnBoundsChange = YES; return metalLayer; } - (CAMetalLayer*)metalLayer { return (CAMetalLayer*)self.layer; } - (void)setFrameSize:(NSSize)newSize { [super setFrameSize:newSize]; self.metalLayer.drawableSize = newSize; } - (void)displayLayer:(CALayer*)layer { // Do drawing with Metal. } @end For reference, I do all my Metal drawing in MTKView's drawRect method.
Metal³
45,375,548
10
In the Metal shading language, what is the exact difference between the read and sample functions for accessing texture pixels, and which one should be used when?
A few differences: You can sample outside the bounds of the texture. But you should not read outside the texture. Sampling can use normalized coordinates (between 0 and 1). Reading always uses pixel coordinates. Samplers can interpolate between pixel values (for example if you're sampling in between two pixels). Reading always gives you the exact pixel value.
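A small throwaway kernel (not from the docs, texture indices are arbitrary) showing the two calls side by side:
#include <metal_stdlib>
using namespace metal;

kernel void readVsSample(texture2d<float, access::sample> tex [[texture(0)]],
                         texture2d<float, access::write>  dst [[texture(1)]],
                         uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= dst.get_width() || gid.y >= dst.get_height()) { return; }

    // read(): integer pixel coordinates, exact texel value, no sampler involved.
    float4 exact = tex.read(gid);

    // sample(): normalized coordinates, filtering and addressing come from the sampler,
    // so coordinates between texels return interpolated values.
    constexpr sampler s(coord::normalized, address::clamp_to_edge, filter::linear);
    float2 uv = (float2(gid) + 0.5) / float2(tex.get_width(), tex.get_height());
    float4 filtered = tex.sample(s, uv);

    dst.write(0.5 * (exact + filtered), gid);
}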
Metal³
49,820,430
10
I'm trying to implement voxel cone tracing in Metal. One of the steps in the algorithm is to voxelize the geometry using a geometry shader. Metal does not have geometry shaders so I was looking into emulating them using a compute shader. I pass in my vertex buffer into the compute shader, do what a geometry shader would normally do, and write the result to an output buffer. I also add a draw command to an indirect buffer. I use the output buffer as the vertex buffer for my vertex shader. This works fine, but I need twice as much memory for my vertices, one for the vertex buffer and one for the output buffer. Is there any way to directly pass the output of the compute shader to the vertex shader without storing it in an intermediate buffer? I don't need to save the contents of the output buffer of the compute shader. I just need to give the results to the vertex shader. Is this possible? Thanks EDIT Essentially, I'm trying to emulate the following shader from glsl: #version 450 layout(triangles) in; layout(triangle_strip, max_vertices = 3) out; layout(location = 0) in vec3 in_position[]; layout(location = 1) in vec3 in_normal[]; layout(location = 2) in vec2 in_uv[]; layout(location = 0) out vec3 out_position; layout(location = 1) out vec3 out_normal; layout(location = 2) out vec2 out_uv; void main() { vec3 p = abs(cross(in_position[1] - in_position[0], in_position[2] - in_position[0])); for (uint i = 0; i < 3; ++i) { out_position = in_position[i]; out_normal = in_normal[i]; out_uv = in_uv[i]; if (p.z > p.x && p.z > p.y) { gl_Position = vec4(out_position.x, out_position.y, 0, 1); } else if (p.x > p.y && p.x > p.z) { gl_Position = vec4(out_position.y, out_position.z, 0, 1); } else { gl_Position = vec4(out_position.x, out_position.z, 0, 1); } EmitVertex(); } EndPrimitive(); } For each triangle, I need to output a triangle with vertices at these new positions instead. The triangle vertices come from a vertex buffer and is drawn using an index buffer. I also plan on adding code that will do conservative rasterization (just increase the size of the triangle by a little bit) but it's not shown here. Currently what I'm doing in the Metal compute shader is using the index buffer to get the vertex, do the same code in the geometry shader above, and outputting the new vertex in another buffer which I then use to draw.
Here's a very speculative possibility, depending on exactly what your geometry shader needs to do. I'm thinking you can do it sort of "backwards" with just a vertex shader and no separate compute shader, at the cost of redundant work on the GPU.

You would do a draw as if you had a buffer of all of the output vertices of the output primitives of the geometry shader. You would not actually have that on hand, though. You would construct a vertex shader that would calculate them in flight.

So, in the app code, calculate the number of output primitives, and therefore the number of output vertices, that would be produced for a given count of input primitives. Do a draw of the output primitive type with that many vertices.

You would not provide a buffer with the output vertex data as input to this draw. You would provide the original index buffer and original vertex buffer as inputs to the vertex shader for that draw. The shader would calculate from the vertex ID which output primitive it's for, and which vertex of that primitive (e.g. for a triangle, vid / 3 and vid % 3, respectively). From the output primitive ID, it would calculate which input primitive would have generated it in the original geometry shader.

The shader would look up the indices for that input primitive from the index buffer and then the vertex data from the vertex buffer. (This would be sensitive to the distinction between a triangle list vs. a triangle strip, for example.) It would apply any pre-geometry-shader vertex shading to that data. Then it would do the part of the geometry computation that contributes to the identified vertex of the identified output primitive. Once it has calculated the output vertex data, you can apply any post-geometry-shader vertex shading(?) that you want. The result is what it would return.

If the geometry shader can produce a variable number of output primitives per input primitive, well, at least you have a maximum number. So, you can draw the maximum potential count of vertices for the maximum potential count of output primitives. The vertex shader can do the computations necessary to figure out if the geometry shader would have, in fact, produced that primitive. If not, the vertex shader can arrange for the whole primitive to be clipped away, either by positioning it outside of the frustum or by using a [[clip_distance]] property of the output vertex data.

This avoids ever storing the generated primitives in a buffer. However, it causes the GPU to do some of the pre-geometry-shader vertex shader and geometry shader calculations repeatedly. It will be parallelized, of course, but may still be slower than what you're doing now. Also, it may defeat some optimizations around fetching indices and vertex data that may be possible with more normal vertex shaders.

Here's an example conversion of your geometry shader:

#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    // maybe need packed types here depending on your vertex buffer layout
    // can't use [[attribute(n)]] for these because Metal isn't doing the vertex lookup for us
    float3 position;
    float3 normal;
    float2 uv;
};

struct VertexOut {
    float3 position;
    float3 normal;
    float2 uv;
    float4 new_position [[position]];
};

vertex VertexOut foo(uint vid [[vertex_id]],
                     device const uint *indexes [[buffer(0)]],
                     device const VertexIn *vertexes [[buffer(1)]])
{
    VertexOut out;

    const uint triangle_id = vid / 3;
    const uint vertex_of_triangle = vid % 3;

    // indexes is for a triangle strip even though this shader is invoked for a triangle list.
    const uint index[3] = { indexes[triangle_id], indexes[triangle_id + 1], indexes[triangle_id + 2] };
    const VertexIn v[3] = { vertexes[index[0]], vertexes[index[1]], vertexes[index[2]] };

    float3 p = abs(cross(v[1].position - v[0].position, v[2].position - v[0].position));

    out.position = v[vertex_of_triangle].position;
    out.normal = v[vertex_of_triangle].normal;
    out.uv = v[vertex_of_triangle].uv;

    if (p.z > p.x && p.z > p.y)
    {
        out.new_position = float4(out.position.x, out.position.y, 0, 1);
    }
    else if (p.x > p.y && p.x > p.z)
    {
        out.new_position = float4(out.position.y, out.position.z, 0, 1);
    }
    else
    {
        out.new_position = float4(out.position.x, out.position.z, 0, 1);
    }

    return out;
}
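To sketch what the app-side draw for this approach might look like: the host code below is illustrative only, not part of the original setup. The names (pipelineState, indexBuffer, vertexBuffer, triangleCount), the buffer indices 0 and 1, and the assumption of a triangle-list index buffer are all placeholders you would adapt to your own renderer.

// Hypothetical host-side encoding for the vertex-shader-only approach above.
let vertexCount = triangleCount * 3   // 3 output vertices per input triangle

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
encoder.setRenderPipelineState(pipelineState)              // pipeline whose vertex function is `foo`
encoder.setVertexBuffer(indexBuffer, offset: 0, index: 0)  // raw indices, read manually in the shader
encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 1) // raw vertex data, read manually in the shader

// A plain, non-indexed draw: the shader reconstructs indices itself from vertex_id.
encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount)
encoder.endEncoding()

The key point is that no intermediate buffer is bound anywhere; the only inputs are the original index and vertex buffers, and the vertex count is computed on the CPU from the input primitive count.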
Metal
50,557,224
10
I'm trying to get multisampling working with MTKView. I have an MTKView with a delegate. I set the view's sampleCount property to 4. I create a pipeline state descriptor with rasterSampleCount set to 4, and use that to make a render pipeline state that I use when rendering.

In the delegate's draw(in:) method, I create a render pass descriptor by getting the view's current render pass descriptor and setting its storeAction to multisampleResolve. I've also tried storeAndMultisampleResolve, to no avail. I have created a resolve texture for the render pass descriptor, and it is the same width and height as the view and has the same pixel format.

Given the above, I get a fully red frame during rendering. I have used the Metal debugger to look at the textures, and both the view's texture and the resolve texture have the correct rendering in them. I'm on an AMD machine, where a fully red texture often indicates an uninitialized texture. Is there anything I need to do to get the rendering to go to the screen?

Here's how I'm setting up the view, pipeline state, and resolve texture:

metalView = newMetalView
metalView.sampleCount = 4
metalView.clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
device = newMetalView.device!
let metalLibrary = device.makeDefaultLibrary()!
let vertexFunction = metalLibrary.makeFunction(name: "vertexShader")
let fragmentFunction = metalLibrary.makeFunction(name: "fragmentShader")
let pipelineStateDescriptor = MTLRenderPipelineDescriptor.init()
pipelineStateDescriptor.label = "Particle Renderer"
pipelineStateDescriptor.vertexFunction = vertexFunction
pipelineStateDescriptor.fragmentFunction = fragmentFunction
pipelineStateDescriptor.colorAttachments[0].pixelFormat = metalView.colorPixelFormat
pipelineStateDescriptor.rasterSampleCount = 4

do {
    try pipelineState = device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)
} catch {
    NSLog("Unable to create pipeline state")
    pipelineState = nil
}

let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: metalView.colorPixelFormat,
                                                                 width: Int(metalView.bounds.width),
                                                                 height: Int(metalView.bounds.height),
                                                                 mipmapped: false)
resolveTexture = device.makeTexture(descriptor: textureDescriptor)!

And here's how I'm drawing:

let commandBuffer = commandQueue.makeCommandBuffer()
commandBuffer?.label = "Particle Command Buffer"
let renderPassDescriptor = metalView.currentRenderPassDescriptor
renderPassDescriptor?.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 0.0)
renderPassDescriptor?.colorAttachments[0].loadAction = MTLLoadAction.clear
renderPassDescriptor?.colorAttachments[0].storeAction = MTLStoreAction.multisampleResolve
renderPassDescriptor?.colorAttachments[0].resolveTexture = resolveTexture
let renderEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor!)
renderEncoder?.label = "Particle Render Encoder"
renderEncoder?.setViewport(MTLViewport(originX: 0.0, originY: 0.0, width: Double(viewportSize.x), height: Double(viewportSize.y), znear: -1.0, zfar: 1.0))
renderEncoder?.setRenderPipelineState(pipelineState!)

Then I make my draw calls, and finish up by calling:

renderEncoder?.endEncoding()
commandBuffer?.present(metalView.currentDrawable!)
commandBuffer?.commit()

Here's what the debugger shows is in my textures:

Oddly, while doing that debugging, I accidentally hid Xcode, and for one frame the view showed the correct texture.
What's the initial configuration of renderPassDescriptor (as returned from metalView.currentRenderPassDescriptor)? I believe you want the color attachment's texture set to metalView.multisampleColorTexture and its resolveTexture set to metalView.currentDrawable.texture. That is, it should do the primary, multi-sampled rendering to the multi-sample texture, and then that gets resolved to the drawable texture to actually draw it in the view.

I don't know if MTKView sets up its currentRenderPassDescriptor like that automatically when there's a sampleCount > 1. Ideally, it would.
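As a rough illustration of that wiring, a sketch is below. It assumes the pass is being built inside draw(in:) with metalView and commandBuffer in scope, and a sampleCount of 4 as in the question; whether these assignments are needed at all depends on how much MTKView has already pre-populated on the descriptor.

// Speculative sketch: render into the multisample texture, resolve into the drawable.
guard let drawable = metalView.currentDrawable,
      let rpd = metalView.currentRenderPassDescriptor else { return }

rpd.colorAttachments[0].texture = metalView.multisampleColorTexture // multi-sampled render target
rpd.colorAttachments[0].resolveTexture = drawable.texture           // single-sample result shown on screen
rpd.colorAttachments[0].loadAction = .clear
rpd.colorAttachments[0].storeAction = .multisampleResolve

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: rpd)!
// ... draw calls ...
encoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()

When sampleCount > 1, MTKView should provide multisampleColorTexture itself; the sketch just makes the intended texture/resolveTexture pairing explicit so the resolved image is what actually reaches the screen.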
Metal
52,846,343
10
I'm experimenting with SwiftUI and Metal. I've got two windows, one with various lists and controls and the other a Metal window. I had the slider data updating the Metal window, but when I moved the slider the FPS dropped from 60 to around 25. I removed all links between the views, and moving the sliders still drops the FPS in the Metal window. It seems that the list views slow down the FPS as well.

I create the Metal window on startup using:

metalWindow = NSWindow(contentRect: NSRect(x: 0, y: 0, width: 480, height: 300),
                       styleMask: [.titled, .closable, .miniaturizable, .resizable, .fullSizeContentView],
                       backing: .buffered, defer: false)
metalWindow.center()
metalWindow.setFrameAutosaveName("Metal Window")
metalWindow.makeKeyAndOrderFront(nil)

mtkView = MTKView()
mtkView.translatesAutoresizingMaskIntoConstraints = false
metalWindow.contentView!.addSubview(mtkView)
metalWindow.contentView!.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "|[mtkView]|", options: [], metrics: nil, views: ["mtkView" : mtkView!]))
metalWindow.contentView!.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|[mtkView]|", options: [], metrics: nil, views: ["mtkView" : mtkView!]))

device = MTLCreateSystemDefaultDevice()!
mtkView.device = device
mtkView.colorPixelFormat = .bgra8Unorm
commandQueue = device.makeCommandQueue()!
mtkView.delegate = self
mtkView.sampleCount = 4
mtkView.clearColor = MTLClearColor(red: 0.0, green: 0.5, blue: 1.0, alpha: 1.0)
mtkView.depthStencilPixelFormat = .depth32Float

The control window is a SwiftUI view:

struct ControlPanelView: View {
    @ObservedObject var controlPanel = ControlPanel()
    @State private var cameraPos = Floatx3()
    @State private var lightingPos = Floatx3()

    var body: some View {
        HStack {
            VStack {
                VStack {
                    Text("Objects")
                    List(self.controlPanel.objectFiles) { object in
                        Text(object.name)
                    }
                }
                VStack {
                    Text("Textures")
                    List(self.controlPanel.textureFiles) { texture in
                        HStack {
                            Image(nsImage: texture.image).resizable()
                                .frame(width: 32, height: 32)
                            Text(texture.name)
                        }
                    }
                }
            }
            VStack {
                HStack {
                    XYZControl(heading: "Camera Controls", xyzPos: $cameraPos)
                    XYZControl(heading: "Lighting Controls", xyzPos: $lightingPos)
                }.padding()
                HStack {
                    Text("Frames Per Second:")
                    //Text(String(renderer.finalFpsCount))
                }
            }
        }.border(Color.red).padding()
    }
}

struct XYZControl: View {
    var heading: String
    @Binding var xyzPos: Floatx3

    var body: some View {
        VStack {
            Text(heading).padding(.bottom, 5.0)
            PositionSlider(sliderValue: $xyzPos.x, label: "X", minimum: -15.0, maximum: 15.0)
            PositionSlider(sliderValue: $xyzPos.y, label: "Y", minimum: -15.0, maximum: 15.0)
            PositionSlider(sliderValue: $xyzPos.z, label: "Z", minimum: -15.0, maximum: 15.0)
        }.border(Color.yellow).padding(.leading)
    }
}

struct PositionSlider: View {
    @Binding var sliderValue: Float
    var label: String
    var minimum: Float
    var maximum: Float

    static let posFormatter: NumberFormatter = {
        let formatter = NumberFormatter()
        formatter.numberStyle = .decimal
        formatter.maximumFractionDigits = 3
        return formatter
    }()

    var body: some View {
        VStack {
            Text(label)
            HStack {
                Text("\(Self.posFormatter.string(from: NSNumber(value: minimum))!)")
                Slider(value: $sliderValue, in: minimum ... maximum)
                Text("\(Self.posFormatter.string(from: NSNumber(value: maximum))!)")
            }.padding(.horizontal).frame(width: 150.0, height: 15.0, alignment: .leading)
            Text("\(Self.posFormatter.string(from: NSNumber(value: sliderValue))!)")
        }.border(Color.white)
    }
}

Can anyone help with why the frame rate drops?
I removed the Metal window and incorporated it into the control window using NSViewRepresentable, so it looks like this now:

This still has the same problem.

The render code (not doing much!). It still slows down even when not rendering the teapot.

func draw(in view: MTKView) {
    if let commandBuffer = commandQueue.makeCommandBuffer() {
        commandBuffer.label = "Frame command buffer"

        if let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: view.currentRenderPassDescriptor!) {
            renderEncoder.label = "render encoder"
            renderEncoder.endEncoding()
        }

        commandBuffer.present(view.currentDrawable!)
        commandBuffer.addCompletedHandler { completedCommandBuffer in
            self.computeFPS()
        }
        commandBuffer.commit()
    }
}
As 0xBFE1A8 said, SwiftUI was blocking the main thread. I moved the draw function into a callback set up with CVDisplayLink, dispatched onto a .global(qos: .userInteractive) queue. This stops the rendering from slowing down when the sliders are moved.
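For reference, a rough sketch of that kind of setup is below. The DisplayLinkDriver type, the closure-based render hook, and the queue choice are illustrative assumptions rather than the exact code used; the key idea is simply that rendering is driven by a CVDisplayLink callback instead of the main-thread loop that competes with SwiftUI.

import CoreVideo
import Dispatch

final class DisplayLinkDriver {
    private var displayLink: CVDisplayLink?
    private let renderQueue = DispatchQueue.global(qos: .userInteractive)

    // `render` is whatever encodes and presents one frame (e.g. a call into the MTKView delegate).
    func start(render: @escaping () -> Void) {
        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)
        guard let link = displayLink else { return }

        let queue = renderQueue
        // The output handler fires on a CoreVideo thread at the display's refresh rate.
        CVDisplayLinkSetOutputHandler(link) { _, _, _, _, _ in
            queue.async { render() }   // render off the main thread so SwiftUI can't stall it
            return kCVReturnSuccess
        }
        CVDisplayLinkStart(link)
    }

    func stop() {
        if let link = displayLink { CVDisplayLinkStop(link) }
    }
}

With this in place you would typically also set mtkView.isPaused = true and mtkView.enableSetNeedsDisplay = false so that MTKView's own main-thread draw loop no longer runs alongside the display link.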
Metal
59,212,294
10
Recently I've been looking at Ansible and want to use it in projects. There's also another tool, Rundeck, that can be used to do all kinds of operations work. I have experience with neither tool, and this is my current understanding of them:

Similar points

Both tools are agent-less and use SSH to execute commands on remote servers
Rundeck's main concept is the Node, much like Ansible's inventory; the key idea is to define/manage/group the target servers
Rundeck can execute ad-hoc commands on selected nodes; Ansible can also do this very conveniently
Rundeck can define a workflow and execute it on selected nodes; this can be done in Ansible by writing a playbook
Rundeck can be integrated with a CI tool like Jenkins to do deploy work; we can also define a Jenkins job that runs ansible-playbook to do the deploy work

Different points

Rundeck has the concept of a Job, which Ansible does not
Rundeck has a job scheduler; Ansible can only achieve this with other tools like Jenkins or cron
Rundeck has a web UI by default for free, but you have to pay for Ansible Tower

It seems both Ansible and Rundeck can be used to do configuration/management/deployment work, maybe in different ways. So my questions are:

Are these two complementary tools, or are they designed for different purposes?
If they're complementary tools, why is Ansible only compared to tools like Chef/Puppet/Salt and not to Rundeck? If they're not, why do they have so many similar functionalities?
We're already using Jenkins for CI to build a continuous-delivery pipeline; which tool (Ansible/Rundeck) is the better choice for deployment? If they can be used together, what's the best practice?

Any suggestions and experience sharing are greatly appreciated.
TL;DR - given your environment of Jenkins for CI/CD I'd recommend using just Ansible.

You've spotted that there is sizeable cross-over between Ansible & Rundeck, so it's probably best to concentrate on where each product focuses, its style and use.

Focus

I believe Rundeck's focus is on enabling sysadmins to build a (web-based) self-service portal that's accessible to both other sysadmins and, potentially, less "technical"/sysadmin people. Rundeck's website says "Turn your operations procedures into self-service jobs. Safely give others the control and visibility they need." Rundeck also feels like it has a more 'centralised' view on the world: you load the jobs into a database and that's where they live.

To me, Ansible is for devops - building out and automating deployments of (self-built) applications in a way that makes them highly repeatable. I'd argue that Ansible is more focussed on software development houses that build their own products: Ansible 'playbooks' are text files, so normally stored in source control and normally alongside the app that the playbooks will deploy.

Job creation focus

With Rundeck you typically create jobs via the web UI. With Ansible you create tasks/playbooks in files via a text editor.

Operation/Task/Job style

Rundeck by default is imperative - you write scripts that are executed (via SSH). Ansible is imperative too (i.e. execute bash statements) but also declarative, so in some cases, say, starting Apache, you can use the service task to make sure that it's running. This is closer to other configuration management tools like Puppet and Chef.

Complex jobs / scripts

Rundeck has the ability to run another job by defining a step in the Job's workflow, but from experience this feels like a tacked-on addition rather than a serious top-level feature. Ansible is designed to create complex operations; running/including/etc. are top-level features.

How it runs

Rundeck is a server app. If you want to run jobs from somewhere else (like CI) you'll either need to call out to the CLI or make an API call. Straight Ansible is command-line.

Proviso

Due to the cross-over and overall flexibility of Rundeck and Ansible you could achieve all of the above in each. You can achieve version control of your Rundeck jobs by exporting them to YAML or XML and checking them into source control. You can get a web UI in Ansible using Tower. Etc., etc.

Your questions:

Complementary tools?

I could envision a SaaS shop using both: one might use Ansible to perform all deployment actions and then use Rundeck to perform one-off, ad-hoc jobs. However, while I could envision it, I wouldn't recommend that as a starting point. Me, I'd start with just Ansible and see how far I get. I'd only layer in Rundeck later on if I discovered that I really, really need to run one-offs.

CI/CD

Ansible: your environment sounds more like a software house where you're deploying your own app. It should probably be repeatable (especially as you're going Continuous Delivery), so you'll want your deploy scripts in source control. You'll want simplicity, and Ansible is "just text files". I hope you will also want your devs to be able to run things on their machines (right?); Ansible is decentralised.

Used together (for CI/CD)

Calling Rundeck from Ansible: no. Sure, it would be possible, but I'm struggling to come up with good reasons - at least, not very specialised, specific-to-a-particular-app-or-framework reasons.

Calling Ansible from Rundeck: yes. I could envision someone first building out some repeatable ad-hoc commands in Ansible. Then I could see there being a little demand for being able to call this without a command line (say: non-technical users). But, again, this is getting specific to your environment.
Rundeck
31,152,102
48
I have a script I want to be available globally. I've started it with the standard hashbang:

#!/usr/bin/env python

and linked it into the bin directory of my virtualenv:

~/environments/project/env/bin/myscript

and added that directory to my PATH. When I run the command:

myscript

I get an import error for one of the libraries. However, if I activate the virtual environment and run the script, it works as expected.

I've ruled out a problem with the symlink (I've also tried just moving the script inside the bin folder). I've also tried running the script with python:

python ~/environments/project/env/bin/myscript

Previously I was using a script that activated the environment and then ran my script, but I was under the impression that a script run from this folder should run with the virtualenv's interpreter and site-packages. Any ideas why this might not be working, or ways I could debug this?
Putting the script into the bin of your virtualenv, and then adding that bin location to your global PATH, will not automatically source your virtualenv. You do need to source it first to make it active. All your system knows is to check that extra path for the executable and run it; there isn't anything in that script indicating a virtualenv.

You could, however, hardcode the shebang line to your virtualenv's python, in which case the site-packages will end up on the path:

#!/Users/foo/environments/project/env/bin/python

Another option is to simply create a tiny bash wrapper that calls your original Python script, which allows you to leave your original script with a generic shebang. So if myscript.py is:

#!/usr/bin/env python
...

then you can make a myscript wrapper:

#!/bin/bash
/Users/foo/environments/project/env/bin/python myscript.py

When you run myscript, it will explicitly call your Python script with the interpreter you set up.
Rundeck
11,963,019
47