[{"id": "000101893", "problem_statement": "Why there is continuous increase of Keq value when product concentration is reduced?\nWith increase in carbon number, Keq. is also increasing, from the stream results it is evident that concentration is decreasing with increase in carbon number:", "solution": "When comparing equilibrium constants, note that the exponents are different for each reaction.\nThe equation used to calculate Keq in Aspen Plus as follows:\n\nKeq = product (fugacity_i^nu_i) / reactant (fugacity_i^nu_i)\n\nwhere fugacity_i = fugacity of component i (in atm)\nnu_i = stoichiometric coefficient of component i\n\nSo, when you compare reactions such as\n\n 4) Temp. approach 4 CO + 9 H2 --> C4 + 4 H2O\n 5) Temp. approach 5 CO + 11 H2 --> C5 + 5 H2O\n\nIn reaction 4), we have fC4 * fH2O^4 / (fCO^4 * fH2^9), while for reaction 5) we have fC5 * fH2O^5 / (fCO^5 * fH2^11)\n\nThis is the reason you see continuous increase in Keq value.\n\nDue the change of exponents, it's not correct to simply compare the numerical value of the equilibrium constant / concentration of reactions\n\nUser can formulate the reactions as \"successive\" reactions, so that they have more similar stoichiometries:\n Temp. approach CO + 2 H2 + C3 --> C4 + H2O\n Temp. approach CO + 2 H2 + C4 --> C5 + H2O\n \nIn this case, you'll see the equilibrium constants have now more similar values, but it is still not straightforward to compare.", "keywords": "Requil, Keq, Product concentration", "reference": "AspenTech: Knowledge Base"}, {"id": "000101901", "problem_statement": "How to calculate dew point of a material stream with respect to relative humidity and how to monitor response of dew point with change to relative humidity?", "solution": "Relative humidity cannot be directly entered in a material stream, a unit operation \u201cStream Saturator\u201d to be used.\nSteam Saturator calculates the amount of water needed to saturate the input stream and reach a specific relative humidity.\n\n \n\n Specify Relative humidity (RH%) values under Parameter tab in Stream saturation block.\n\n\n\n\n\n\nDew point can be calculated by using a Balance operation from manipulator. 
\nSpecify Vapor Fraction = 1 in the dew point stream to get the dew point temperature, and retain the component mole flows, which can be specified in the Balance block under Parameters.\n\nThe resulting temperature in the dew point stream is the dew point value for the specified inputs.\n\nTo monitor the response of the dew point temperature to changes in relative humidity, a case study can be performed by specifying the dew point as the dependent variable and relative humidity as the independent variable.\n\nRun the case study under the \u201cCase study setup\u201d tab to view the results.", "keywords": "Dew point, Relative humidity, Case study, Saturation block, Balance", "reference": "AspenTech: Knowledge Base"}, {"id": "000101892", "problem_statement": "How to add the Mass Fraction of streams as a variable in the EO variable list when it is not present under the EO Variables tab?", "solution": "Generally, in Aspen Plus EO modelling the mass fraction variable is not directly available under the EO Variables tab, so in order to review mass fraction results there, follow the steps below:\n1. Add an Analyzer block under Manipulators.\n2. In Block Options --> EO Options --> Additional Variables, enable the \"mass fraction\" option in the Analyzer block.\n3. Reference the Analyzer block results in the EO Variables list to get the mass fraction values.", "keywords": "Aspen plus EO, Mass fraction, EO variable list", "reference": null}, {"id": "000101785", "problem_statement": "After upgrading to V14 APC and applying the latest Desktop EP3, some customers have reported a strange issue with DMC3 Builder. When they import an application and navigate to the Master model, the page becomes a blank green screen, with no models and no options in the top ribbon. The Home tab is also missing.", "solution": "To resolve the issue of a blank page in the Master model in DMC3 Builder after upgrading to V14 APC, follow the steps below:\n\n1. Ensure that the ACESAppTemplate.xml file is up to date:\nThe issue may be caused by an outdated ACESAppTemplate.xml file (it can be found under the following paths: C:\\Program Files (x86)\\AspenTech\\RTE\\Vxx\\APC\n C:\\Program Files (x86)\\AspenTech\\APC\\Vxx\\Builder).\nUninstall the EP3 patch and then reinstall it to ensure that the ACESAppTemplate.xml file is updated to the latest version.\n2. Remove the APCDesktop.UserPreferences.dat file:\nThe APCDesktop.UserPreferences.dat file stores user-specific preferences and settings for DMC3 Builder.\nLocate the file in the appropriate directory, which may vary depending on the operating system. 
(it can be found under the following path: C:\\Users\\[username]\\AppData\\Roaming\\AspenTech\\APC)\nDelete the APCDesktop.UserPreferences.dat file.\nLaunch DMC3 Builder again, and the file will be recreated with default settings.\n3. Verify that the model view populates correctly:\nOpen the problematic application file in DMC3 Builder after performing the above steps.\nCheck if the model view now populates with the expected models.\nIf the issue persists, consider reaching out to customer support for further assistance.\n\nNote: It is important to back up any critical files before making changes or modifications to ensure data integrity and minimize the risk of unintended consequences.", "keywords": "DMC3 Builder, Desktop, EP3, blank page, Master model, ACESAppTemplate.xml, APCDesktop.UserPreferences.dat", "reference": null}, {"id": "000078766", "problem_statement": "What parameters can be specified for TSK_SQL_SERVER?", "solution": "TSK_SQL_SERVER is the Aspen InfoPlus.21 external task that processes the queries generated from the Aspen SQLplus Query Writer. There are three parameters available for TSK_SQL_SERVER, affecting permissions on the IP.21 database. They are:\n n (bypass security)\nr (read-only)\nc (no system commands)\n To apply parameters to the task, add the appropriate parameter(s) after the port specification in the Command Line field of TSK_SQL_SERVER. For example:\n 10014 c\n or\n 10014 rc\n or\n 10014 n\n Please note, specifying these parameters affects all SQLplus connections, regardless of the user. Configuring these permissions through the Aspen Framework Security Manager is the preferred method, as permissions can be granted by user role. For more information on SQLplus application-based security, please see the \"Assigning Secured Tasks\" topic in the SQLplus help files.\nFor further information about configuring the read-only parameter for TSK_SQL_SERVER, please see solution 000071792: How to create a read-only connection for SQLplus users\n KeyWords\nInfoPlus.21 Manager\npermissions\nwrite\nread-only\noperating system commands\n111933-2", "keywords": null, "reference": null}, {"id": "000097254", "problem_statement": "Which record definition should I use for the creation of SPC tags?", "solution": "Q_XBARDef and Q_XBARSDef:\nThese records do not historize the CUSUM HI and CUSUM LOW values for each subgroup.\nIn a Q_XBARDef record, the Mean and Range values are historized.\nIn a Q_XBARSDef record, the subgroup Mean and Standard Deviation values are historized.\n Q_XBARCDef and Q_XBARCSDef:\nThese records historize the CUSUM HI and CUSUM LOW values in addition to the Mean and Range/Standard Deviation values.\nIn a Q_XBARCDef record, the Mean, Range, CUSUM HI and CUSUM LOW are historized.\nIn a Q_XBARCSDef record, the Mean, Standard Deviation, CUSUM HI and CUSUM LOW are historized.\n Q_XBAR21Def and Q_XBAR21SDef:\nThese records also historize the CUSUM HI and CUSUM LOW values in addition to the Mean and Range/Standard Deviation values.\nRepeat area fields remain the same as in Q_XBARCDef and Q_XBARCSDef.\nThey have the additional capability of calculating an enhanced Standard Deviation using the Mean Successive Square Difference (MSSD) method.\n\n\nAdditional Info:\n\nThere is a tool that can be used in the creation of some (not all) SPC records. 
The Aspen Real-time SPC Analyzer Configuration Wizard executable is located here:\n\n C:\\Program Files (x86)\\AspenTech\\APEx\\Q\n\nand the executable is:\n\n QConfigUtility.exe\n\n It's also mentioned in the Aspen Real-Time SPC Analyzer User's Guide. An excerpt is included below:\n Aspen Real-Time SPC Analyzer Configuration Utility (Wizard)\n To run the Aspen Real-Time SPC Analyzer Configuration Utility (wizard), use the Windows Start menu and select Aspen Manufacturing Suite | Aspen InfoPlus.21 | Real-Time SPC Configuration.\n\nNote: Avoid mistaking a similar selection found in the Windows Start menu: Aspen Manufacturing Suite | Real-Time SPC Configuration. This displays a command-line, server-side configuration tool used for enabling Real-Time SPC operations in the InfoPlus.21 database.\n\nAspen Real-Time SPC Analyzer Configuration Utility displays a \u2018wizard\u2019 sequence of seven dialog boxes so that all of the parameters for defining a new Real-Time SPC record can be specified:\n\n1 Server Selection \u2013 Select the server where the Real-Time SPC record will be stored.\n\n2 SPC Variable Identification \u2013 Name and describe the new Real-Time SPC record.\n\nNote: If you enter an SPC Variable Identification (name) that already exists in the database, a dialog is displayed enabling you to either choose another name or update the existing Real-Time SPC record. If you choose to update the existing record, all of the subsequent wizard dialogs will be populated with data from the existing record.\n\n3 SPC Variable Type \u2013 Choose which type of Real-Time SPC record to create.\n\n4 SPC Input & Display Properties \u2013 Choose what data the Real-Time SPC record processes and how it is displayed.\n\n5 SPC Variable Control Properties \u2013 Select how calculations are performed on the data.\n\n6 SPC History Properties \u2013 Select additional data to store in History.\n\n7 Summary and Commit \u2013 Review the parameters for the new Real-Time SPC record and save it.\n\nNote: For more detail, see the \u201cAspen Real-Time SPC Analyzer Configuration Wizard\u201d topics in the Real-Time SPC for Process Explorer Help, or click the Help button in each dialog box of the wizard sequence to view a help topic that explains the parameters that must be selected or completed. Also, there are numerous tool tips within each dialog (displayed by hovering the mouse cursor over an item of interest).\n\nThis KB article has some helpful information as well:\n\nHow do I manually configure an SPC record using the Aspen InfoPlus.21 Administrator? https://esupport.aspentech.com/S_Article?id=000095596\n\nThe QConfigUtility.exe guides the user in setting up Q records defined by Q_XBarCDef, Q_XBarCSDef, Q_XBar21Def, and Q_XBarS21Def. Records defined by Q_XBarDef can only be configured manually; you are unable to use the SPC Configuration Wizard to modify those records. \n\nThose record definitions correspond to these SPC variable types:\n\nXbar/R = Q_XBarCDef \nXbar/s = Q_XBarCSDef \nXbar/R Enhanced = Q_XBar21Def\nXbar/s Enhanced = Q_XBarS21Def\n\nNote also that you cannot use the SPC Configuration Wizard to create or modify SPC records belonging to a custom definition; it supports only the record definitions above.\n\nDespite not being able to use the wizard for Q_XBarDef records, you can duplicate child records of Q_XBarDef. 
If you have one that is configured the way you want, you can duplicate it and retain its settings in the new record (a time saver compared to making a record again from scratch).", "keywords": "SPC records", "reference": null}, {"id": "000101918", "problem_statement": "How do you model Reversible addition-fragmentation chain transfer (RAFT) polymerization?", "solution": "This example demonstrates how to apply a user kinetics subroutine together with the standard free-radical kinetics model to simulate RAFT polymerization. The file can be opened in Aspen Plus V14 and higher.\n\nTo use the model, unzip all of the files in RAFT polymerization.zip into a clean directory and open the .bkp file.\n\nFor details see RAFT Polymerization.docx.\n RAFT Polymerization Kinetics\nReversible addition-fragmentation chain transfer (RAFT) polymerization involves thiocarbonylthio compounds or other \u201cRAFT\u201d agents to control the molecular weight and polydispersity of a free-radical polymer. Unlike traditional chain transfer agents, the activation and deactivation of the RAFT site is reversible, making RAFT a type of \u201cliving\u201d polymerization.\n\nThe intermediate species are relatively short-lived. We have chosen to ignore the intermediates to keep the model simple. The RAFT fragment is represented as a segment in the model to keep track of the additional mass added to the polymer chain.\nThe table below outlines the key reactions using the standard notation employed in the Aspen Polymers documentation; \u201cP\u201d refers to live polymer, \u201cD\u201d refers to dead polymer. We introduce a new term \u201cQ\u201d to represent the dormant polymer species (polymer molecules with a terminal RAFT fragment).\n\nRAFT Free-Radical Reaction Scheme\nReaction Type | Reaction Equation\nInitiator Decomposition: I --> 2fR*\nChain Initiation: R* + Mi --> P1,i\nPropagation: Pn,i + Mj --> Pn+1,j\nChain Transfer to Agent or Solvent: Pn,i + A --> Dn + R*\nChain Transfer to Monomer: Pn,i + Mj --> Dn + P1,j\nRAFT pre-equilibrium (Chain transfer to RAFT agent): Pn,i + Fk <--> Qn,i,Fk + R*\nRAFT main equilibrium (Chain transfer to dormant polymer): Pn,i + Qm,j,Fk <--> Qn,i,Fk + Pm,j\nTermination (combination): Pn,i + Pm,j --> Dn+m\nTermination (disproportionation): Pn,i + Pm,j --> Dn + Dm\nNotation\nI Initiator\nf Initiator efficiency\nR* Primary radical\nMi Monomer of type \u201ci\u201d\nA Chain transfer agent (or solvent susceptible to chain transfer)\nPn,i Live polymer of length \u201cn\u201d with live terminal segment of type \u201ci\u201d\nDn Dead polymer of length \u201cn\u201d\nFk RAFT agent of type \u201ck\u201d\nQn,i,Fk Dormant polymer of length \u201cn\u201d with penultimate segment \u201ci\u201d and RAFT fragment \u201ck\u201d\n\nThe standard Free-Radical model includes all reactions shown except the RAFT pre-equilibrium and main equilibrium stages. These must be addressed through a user kinetics routine. The user routine needs to calculate the rates of change of components (monomer, RAFT agent, polymer) as well as the rates of change of component attributes (moments, segment flows, live end flows) on a kmol/sec basis.\n Implementation in Aspen Plus Model\nThe \u2018proof-of-concept\u2019 RAFT model is based on the polyacrylic acid example model delivered with Aspen Plus. Trimethylene-trithiocarbonate is used as a representative RAFT agent \u201cRAFT\u201d. The corresponding end segment \u201cE-RAFT\u201d is specified using Van-Krevelen groups. 
An additional component \u201cDUMMY\u201d is included in the model and defined as a chain transfer agent in the free-radical kinetics. This allows for evaluation of the influence of conventional chain transfer agents compared to RAFT agents.\n\nSeveral of the properties of the RAFT agent are estimated from structure. Given the size and molecular weight of typical RAFT agents, these species are expected to be non-volatile.\nThe model requires tracking of the live second moment (LSMOM) and the user attributes CAUSRA and CAUSRB. These are included together with the typical set of Free-Radical attributes.\n\nThe reaction kinetics are calculated using the standard Free-Radical polymerization model \u201cR-1\u201d together with a user kinetics model \u201cRAFT-RXN\u201d. The rate constants for the radical kinetics are shown below (note: these rate constants are for testing/demonstration purposes only).\n\n Model Test Results\nThe table below shows the simulation results for five cases. The first case excludes all transfer agents. The second case uses a RAFT agent. The third case uses an equivalent amount of conventional chain transfer agent (CTA). The last two cases repeat the RAFT and CTA cases with a smaller amount of agent. These results demonstrate that the RAFT agent effectively lowers polydispersity (PDI) compared to conventional chain transfer agents.\nInitial Moles | No Agents | RAFT Agent | CTA | Low RAFT | Low CTA\nMonomer | 1.0 | 1.0 | 1.0 | 1.0 | 1.0\nInitiator | 0.001 | 0.001 | 0.001 | 0.001 | 0.001\nRAFT Agent | 0.0 | 0.004 | 0.0 | 0.001 | 0.0\nChain Transfer Agent | 0.0 | 0.0 | 0.004 | 0.0 | 0.001\nDPN | 498 | 167 | 166 | 332 | 332\nPDI | 2.90 | 1.44 | 4.18 | 2.00 | 2.75", "keywords": null, "reference": null}, {"id": "000101915", "problem_statement": "**Error\n RPLUG EXITED BECAUSE INTEGRATION FAILED. INDEX = (-1)\n PROBABLE CAUSE IS INCORRECT KINETICS. CHECK RATE-CON\n PARAMETERS AND MOLAR VOLUME CALCULATIONS.", "solution": "When modeling a rate-based reactor (RPLUG) with the kinetic reactions involved in it, you might encounter mass and energy balance errors.\n\nOne way to troubleshoot the issue is proposed in this KB article:\nhttps://esupport.aspentech.com/S_Article?id=000085925\n\nAnother approach used as a workaround for the mass imbalance error is to check the kinetics of the added reactions. If the reacting phase is all vapor in the simulation, consider changing the rate basis to reactor volume \"Reac (vol)\" for all the kinetic reactions in the reaction set.\n \nThe rate is in kmol/s-(basis), where the basis is m3 for Rate Basis Reac (vol) and kg catalyst for Rate Basis Cat (wt). The reactor volume or catalyst weight is determined by specifications in the reactor where the reaction is used.", "keywords": "Kinetic imbalance, Mass, Rate, Reactor Volume, RPLUG", "reference": "https://esupport.aspentech.com/S_Article?id=000085925"}, {"id": "000079798", "problem_statement": "Upon installation, users encounter difficulties starting the 'Aspentech Production Control RTE Service' and 'Aspen APC Web Provider Data Service'. 
Notably, the Event Viewer and RepositoryErrors.log file (from C:\\ProgramData\\AspenTech\\APC\\Performance Monitor) may display error messages such as:\n\n\"Failed to start WebDataProvider service.... AspenTech.ACP.Services.Security.AfwsWrapper.CreateDataCollection...\"\n\n \n Furthermore, users may face challenges like:\n Inability to create a dummy role in the AFW Security Manager\nFailure to locate default APC roles like ACOWebAdmin and ACOWebEngineer\nInability to initiate Configure Online Server", "solution": "Grant appropriate write and modify permissions to the C:\\Program Files (x86)\\Aspentech\\Local Security\\Access97 folder. This ensures the AFW service can write to and modify the AFWDB.mdb file. Note the creation of the AFWDB.ldb file, which disappears when the .mdb file is unused; this is normal Microsoft Access behavior.\n\nUse the Internet Information Services (IIS) Manager to navigate to \\Sites\\Default Web Site\\AspenTech\\Afw and identify the logon configuration. Example:\n\n\nIn the above example, Anonymous logon is used with the account IUSR. Therefore, grant the necessary modify and write folder permissions to C:\\Program Files (x86)\\AspenTech\\Local Security\\Access97.\n\nRestart the services 'World Wide Web Publishing' and 'AFW Security Client Service'. Open the AFW Security Manager and verify that you can create a new dummy role. Then start the services 'Aspentech Production Control RTE Service' and 'Aspen APC Web Provider Data Service'.\n\nIf this approach doesn\u2019t work, it may also be possible that both versions of AFW, x32 and x64, are registered on the machine. To fix this, please refer to KB 000093906.", "keywords": "PCWS, Aspen Watch, Web Provider Data Service, Aspen Local Security, AFWDB", "reference": null}, {"id": "000069715", "problem_statement": "It is possible to provide a list of batch reports for the various batch reporting tools that is relevant to the particular batch area you have selected. How is this achieved?", "solution": "Report templates can be separately configured for the different batch areas. Moreover, the default folder where the reporting tool will look for the templates can be specified for each area in the Aspen Production Record Manager Administrator under:\nAreas | [area name] | Reporting\nRight-click the \"Reporting\" node in the tree to access the context menu, and select Properties to open the Reporting Properties dialog. \nSimply click the appropriate \"...\" button to the right of the window to configure the following paths:\nDefault Template Directory (C:\\Program Files\\AspenTech\\Batch.21\\Data\\Reports\\Templates)\nDefault Output Directory (C:\\ProgramData\\AspenTech\\Production Record Manager\\Data\\Reports\\Output)\nDefault Query Directory (C:\\ProgramData\\AspenTech\\Production Record Manager\\Data\\Reports\\Queries)\n\nIf the folder specified is either empty or does not exist, the following error messages may be observed:\nRun-time error '380':\nInvalid property value\nRun-time error '-2147220990 (80040202)':\nB21BSC-50650: The report template folder does not exist.", "keywords": "Run a report\nBatch reporting query output\nAspen Production Record Manager VB Cont\n121689-2", "reference": null}, {"id": "000101829", "problem_statement": "Users are experiencing an issue where APED databases for Aspen Properties are not being properly restored every time they log into the machine. This issue commonly appears when a lab environment is being used with non-persistent user profiles. 
In Aspen Plus, the GUI will show the error: \n\n\"Failed to initialize Aspen Properties Enterprise Database. We were not able to load any of databases. Please Use Database Tester or Database Manager to diagnose the problem. You may need to register or restore databases. Aspen Plus must be closed.\" \n\nRoot Cause \u2013 In a lab environment, user profiles are deleted, causing SQLLocalDB to not interface correctly with the correct user profile. When the user\u2019s profile is deleted, the user\u2019s APED user folder (located in Program Data\\AspenTech\\APED VX.XX) is not deleted.\n\n \n Typically, when a user logs into a machine a second time with the same account, APED recognizes that the user has databases that are already restored. But because the user profile was deleted, the account is treated as a brand-new user, and restoration of the APED databases fails. This means that the databases shown in the user\u2019s APED folder are now corrupted, because the original SQLLocalDB instance was also deleted when the user profile was removed.", "solution": "Use the DeleteDBInstance.bat script.\n\nTo continue using SQLLocalDB, AspenTech provides a script named DeleteDBInstance.bat in the APED folder (C:\\ProgramData\\AspenTech\\APED VXX.X\\DeleteDBInstance.bat).\n\n This script will remove the remaining user folder in the APED folder and clear the previous configuration of the database restoral from the last session.\n \n \n After running the script, a new user can launch Aspen Plus and the APED database restore process will run, allowing the APED databases to be restored and Aspen Plus to be used. \n\nYou can activate this script automatically, either when the user logs in or logs off. In the example below, we will be using a Group Policy Logoff script to initiate this script when the user logs off the machine. \n\n1. Navigate to the Group Policy Editor, then expand the Windows Settings under User Configuration, and then select Scripts (Logon/Logoff).\n \n 2. Select which option best suits your environment. We would recommend the Logoff option to avoid long start-up times when the user logs in. After selecting the desired option, click Add.\n \n 3. Select Browse, navigate to the DeleteDBInstance.bat path (C:\\ProgramData\\AspenTech\\APED VXX.X\\DeleteDBInstance.bat), click OK, and finally click Apply to save your changes.\n\n Note \u2013 If this solution is used, the database restoral process will occur each time a user logs in. To avoid this, Microsoft SQL Server must be implemented, either locally or remotely. Please reference the two knowledge base articles below for instructions on how to avoid this issue by integrating Microsoft SQL Server: \n\nHow to migrate Aspen Properties Enterprise Database from SQL Express to a Central SQL Server\n\nHow to setup a centralized SQL server to host a remote APED database and connect to the hosted SQL server?", "keywords": null, "reference": null}, {"id": "000056088", "problem_statement": "How to specify catalyst loading in an RPlug reactor?", "solution": "On the RPlug | Setup | Catalyst sheet, you can specify two of the three parameters describing the catalyst: Catalyst loading, Bed voidage and Particle density. 
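For a reactor fully loaded with catalyst, these three quantities are linked (Catalyst loading = total volume \u00d7 (1 - bed voidage) \u00d7 Particle density, as the formulas below imply), which is why only two of them are specified. As a quick numeric sketch of the relations developed below, in Python and with illustrative values only:

# Check that the two Cat (wt) rate-basis formulas from this article agree.
# All numbers are placeholders, not values from the attached example files.
reaction_rate = 0.002      # kmol/s per kg catalyst (Cat (wt) basis)
total_volume = 5.0         # m3, from the reactor dimensions
bed_voidage = 0.4
particle_density = 2000.0  # kg/m3

catalyst_loading = total_volume * (1 - bed_voidage) * particle_density  # kg

rate_from_loading = reaction_rate * catalyst_loading
rate_from_voidage = reaction_rate * total_volume * (1 - bed_voidage) * particle_density

assert abs(rate_from_loading - rate_from_voidage) < 1e-9
print('Catalyst loading = %.0f kg' % catalyst_loading)              # 6000 kg
print('Species generation rate = %.1f kmol/s' % rate_from_loading)  # 12.0 kmol/s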
The selected parameters have different influences on the species generation rates, depending on the Rate basis of the reaction that occurs in this reactor.\nIf you have chosen Reac (vol) as the Rate basis on the Reaction | Input | Kinetic sheet, only Bed voidage will affect the species generation rates.\nSpecies generation rate = Reaction rate \u00d7 total volume \u00d7 bed voidage\nThe total volume is determined by the Reactor dimensions, which are specified on the RPlug | Setup | Configuration sheet.\nIn the attached example file Simulation1.bkp, a dummy reaction with a constant reaction rate was defined to verify this behavior. \n\n \nIf you have chosen Cat (wt) as the Rate basis, the following formulas are used to calculate the species generation rates.\nSpecies generation rate = Reaction rate \u00d7 Catalyst loading\n\nor\n\nSpecies generation rate = Reaction rate \u00d7 total volume \u00d7 (1 - bed voidage) \u00d7 Particle density\nThese two formulas are verified in the attached Simulation2.bkp and Simulation3.bkp.\n\nAccording to the last formula, you should not specify bed voidage and Particle density when the reactor is not full of catalyst; otherwise you will get a wrong result. In this case, it is better to specify the catalyst loading in the reactor.\n Key words\nRPlug Catalyst loading Bed voidage Particle density Aspen Plus", "keywords": null, "reference": null}, {"id": "000079318", "problem_statement": "OPC DA clients cannot connect to the Aspen.Infoplus21_DA server because it does not appear as a registered OPC DA server.", "solution": "If an OPC DA client cannot identify that the Aspen.Infoplus21_DA server is available for connection, the likely cause is that it is not registered properly. There are two components that must be registered for the Aspen.Infoplus21_DA server to be available: [IP21DAServer.dll, IP21DA_Server.exe] \nTo correct this issue, follow the steps below:\n\u00b7 Open a command window with Administrator privileges (\"run as administrator\").\n\u00b7 Execute the following commands to unregister and reregister the Aspen.Infoplus21_DA server.\n Note: The actual file paths may vary depending on the install location and whether it is a 32-bit or 64-bit installation; the solution shown below assumes 64-bit InfoPlus.21 in the default folder (if 32-bit, change the folder name to \\Program Files (x86)\\).\n\u00b7 Unregister the Aspen.Infoplus21_DA server components:\n\"C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\\Opc.ConfigTool.exe\" -ua \"C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\\IP21DAServer.dll\"\n\n\"C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\\IP21DA_Server.exe\" /unregserver\n \u00b7 Reregister the Aspen.Infoplus21_DA server components:\n \"C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\\Opc.ConfigTool.exe\" -ra \"C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\\IP21DAServer.dll\"\n\n\"C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\\IP21DA_Server.exe\" /regserver", "keywords": "Error code 0x80040154\nREGDB_E_CLASSNOTREG", "reference": null}, {"id": "000071792", "problem_statement": "As part of the Desktop installation of the AspenTech Manufacturing products, users receive the SQLplus Query Writer tool. It can be used to write and execute SQL queries against the Aspen InfoPlus.21 database. The connection between the Query Writer and the IP.21 database is accomplished through TSK_SQL_SERVER, which is one of the tasks listed in the Aspen InfoPlus.21 Manager utility. The default settings for TSK_SQL_SERVER allow both read and write access. 
How can connections via the Query Writer be forced into read-only mode (that is, so they cannot make changes to the InfoPlus.21 database)?", "solution": "It is possible to create an additional TSK_SQL_SERVER, with this new one allowing read-only access. Users would then connect to the database using this connection.\nSteps to implement:\n\n1. In the InfoPlus.21 Manager, double-click on TSK_SQL_SERVER in the \"Defined Tasks\" box. The task detail information will be displayed on the right side of the Manager window.\n\n2. In the \"Task Name\" field, edit TSK_SQL_SERVER to be a new name, for instance TSK_SQL_SERVER_RO.\n\n3. In the \"Command Line Parameters\" field, edit the existing number (probably 10014) to be a new number, like 10015. This is a port used for communication.\n\n4. Add an additional parameter of r, for Read Only. The Command line parameters would now look like this: 10015 r\n\n5. Click on the Add button to add this to the list of Defined Tasks. It will appear at the bottom of the list in the upper left corner of the Manager. There's no need to adjust the order.\n\n6. Edit the \"Services\" file, in the C:\\windows\\system32\\drivers\\etc directory, to add this new network service (details follow). If using Windows Notepad, it may need to be started using 'Run as Administrator'. The new line which should be entered would look like this:\n\n sqlplus_ro 10015/tcp\nsqlplus_ro would be the service name and 10015/tcp would be the port and protocol.\n\n\n7. Now the new TSK_SQL_SERVER_RO can be started. Return to the InfoPlus.21 Manager, select this new task, and click on the Run Task button. Verify that it appears in the list of Running Tasks.\n\n8. On each machine where there is a need to connect as read-only, from the Query Writer tool, select Query -> Host and fill in the name of the server and the new service number, 10015.\nThis user will now only be able to execute queries which perform database reads, like Select statements. If an Update statement is attempted, they will receive an error message like:\n \"Database write not allowed in read only mode\" (a line number may also be listed)\n\nUsers that require full read/write permissions would continue to connect to the original service, 10014.\nThis is documented in the SQLplus Help file in the section titled \"Defining a network service\".\n Tips:\n\n --Case and spaces are important for the command line parameter. It should read \"10015 r\", NOT \"10015r\" or \"10015 R\"\n\n --If the majority of users will require read-only access, you may want to modify the default TSK_SQL_SERVER to be the read-only version, and create the new one to be the read/write version. Since client installations will try to use the default one, fewer users would have to be modified.\n\n --Other command line parameters may be used with TSK_SQL_SERVER. Please see the solution listed below for additional details:\n What parameters can be specified for TSK_SQL_SERVER?\n ( https://esupport.aspentech.com/S_Article?id=000017531 )", "keywords": "103115\n103115-2", "reference": null}, {"id": "000082229", "problem_statement": "When a new user attempts to log in to Aspen Production Execution Manager (\"APEM\") for the first time, the following error occurs:\n Duplicate user (same domain\\user with different SID)\n The typical scenario when this happens is that an employee has left a company, then is hired again. However, during their absence their Windows Active Directory account was deleted (not just disabled). 
When they are hired again, very likely the same domain\\user ID will be used. However, this new domain\\user, while appearing identical, will have its own unique SID.\nFrom the APEM perspective, the SID is the fundamental identifier, so that the domain\\user info is reliable for audit trail purposes. So if one user named Jack Ryan quits, and his account corp\\RyanJ is deleted, then six months later another Jack Ryan is hired and given the id corp\\RyanJ, this error will occur the first time he logs into APEM. The corrective action that should be taken is to change his Windows ID (for example, corp\\RyanJ2). This way all the audit trail records concerning corp\\RyanJ will be related to the first employee, and all new activity of the new Jack Ryan will be under corp\\RyanJ2.\nHowever, as is sometimes the case, what if the new hire is the same Jack Ryan? In that case it would be desirable to continue his own history in the APEM system. Read the Workaround section below for more information.", "solution": "The only two workarounds are:\n1. Manually edit the EBR_USER table, changing the SID column for the user to NULL. However, this sort of manual change to data in an EBR_ table is prohibited by the system and will result in an error, which is further described here: How do I prevent the \"E4147\" error in MOC or MOC/Config debug files? (ie. sign.cmd EBR_USER)\n2. Assign a new Windows Active Directory account name to the user, as discussed above.\n KeyWords\nDataModel.chkDMUser.EBR_USER_59.Invalid record signature", "keywords": null, "reference": null}, {"id": "000065638", "problem_statement": "How to Configure an LDAPS Connection for aspenONE Process Explorer Search", "solution": "1.- Use the aspenONE Credentials tool to configure the port being used for LDAPS (Secure LDAP) at your site. Port 636 is commonly used, as shown in the example below. The other parameters are the same as used with LDAP.\n2.- Click the Windows Key on your server and enter Manage Computer Certificate into the search input field to identify an application for exporting the certificate for the LDAPS connection. The root certificate (which will include your company domain name) can be found in the Trusted Root Certification Authorities | Certificates folder (ask your I.T. department if the appropriate SSL certificate has not yet been added to your server's certificate store).\n3.- Export the certificate using the *.CER file type.\n4.- Import the certificate into the Java keystore by setting -keystore to the cacerts path of the JRE instance used by Tomcat: \ne.g.\nkeytool -import -trustcacerts\n-keystore \"C:\\Program Files\\AdoptOpenJDK\\jdk-n.n.n.n-hotspot\\lib\\security\\cacerts\"\n-storepass <password> -alias root -file \"C:\\<path>\\rootcert.cer\"\nIf you copy and paste the example, note the following changes must be made:\ncorrect the path that specifies the \\security\\cacerts folder. 
The Tomcat8w.exe application in the Tomcat bin folder will show the version of Java (JRE) being used.\nchange <password> - the default for <password> is: changeit\ncorrect the path and filename that specifies the file you have just exported in part 3.\n5.- Run Tomcat8w and add this line to the Java options:\n-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true\n6.- Restart Tomcat to activate the LDAPS connection.\n7.- A normal Running and Configured Search Engine Status should be reported on the All tab within the aspenONE Process Explorer Administrator, e.g.,\n\n\n ~~~ V12 and Newer ~~~\n\nSteps 1 - 3 are the same - please see above.\n Step 4: This step is the same as the pre-V12 step 4, except the path to the \\security\\cacerts directory should be that of Solr\u2019s JVM, which may be different.\nThis path can be determined by looking at the Solr config for the Java Properties \"Java.home\" setting. To see this setting:\n Navigate to http://localhost:8080/AspenCoreSearch/\nSelect \"Solr Admin\" at top of window.\nSelect \"Java Properties\" on left of Solr Admin screen. Example:\n\n\n\nStep 5: This VM argument:\n\n -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true\n\nshould NOT be added to the Tomcat settings (like in the Pre-V12 section), but instead should now be included in the JavaParams in this file:\n\nC:\\Program Files\\AspenTech\\aspenONE\\SolrService\\aspenONESolrWindowsService.exe.config\n\nExample: \n\nStep 6: Stop the following services in this order:\nSolrWindowsService\nApache Tomcat \n\n\nThen restart them in this order:\nApache Tomcat\nWait until the AspenSchedulerStartUp.log indicates it has been started without error and is logging. (found in C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27\\logs)\nSolrWindowsService", "keywords": "SSL\nProtocol\nSecure A1PE\nSOLR\nLDAPS", "reference": null}, {"id": "000073785", "problem_statement": "When users upgrade to a new Aspen Custom Modeler (ACM) version but try to launch ACM from an old Microsoft Excel VBA program with the following command:\nSet ACMObject = CreateObject(\"ACM Application XX00\")\nthis command may not work with the new version as it did with the old version.", "solution": "The number \u2018XX00\u2019 corresponds to the flavor number of the ACM version. For example, for V8.6 this number should be \u20183200\u2019, for V8.8 \u20183400\u2019, and so forth. Therefore, replacing the number in the Excel VBA command with the right version number will solve the issue. For example, for V14.0: Set ACMObject = CreateObject(\"ACM Application 4000\")\n As a reference, find below a list of Aspen product versions with their flavor numbers:\n Aspen Product Version | Flavor | For VBA applications\nV14.0 | 40 | 4000\nV12.1 | 39 | 3900\nV12.0 | 38 | 3800\nV11.0 | 37 | 3700\nV10.0 | 36 | 3600\nV9.0 | 35 | 3500\nV8.8 | 34 | 3400\nV8.6 | 32 | 3200", "keywords": "Set ACMObject, VBA, Command.", "reference": null}, {"id": "000101883", "problem_statement": "How to access HYSYS BackDoor Variables with Python?", "solution": "Python supports the creation of BackDoor objects, so certain tasks that require the user to create and access BackDoor objects can be accomplished in Python.\n\nThe main requirement is to install the Pywin32 package, following the instructions for the corresponding Python environment.\n\nThe BackDoor variable is an object type that can be created for the majority of the unit operations, the streams, and even some objects from the basis environment (e.g., fluid packages and component lists) 
in HYSYS.\n\nThe following functionalities are available with backdoors:\nRead one or more values.\nSet one or more values.\nDeliver messages to the objects.\nExecute and carry out attachment procedures.\nIn the following Python example, the code stops the solver, reads the backdoor variable, and inputs a new value for the variable P Cond in the column.\n\n\n\nAfter executing the Python code, the value (P Cond) is modified.\n\n\n\nThe Python code can be reviewed as follows:\nimport win32com.client\nfrom win32com.client import CastTo\n\n# Start HYSYS and make it visible\nhyApp = win32com.client.Dispatch('HYSYS.Application')\nhyApp.Visible = True\n\n# Open the sample case (full path to the .hsc file)\nhyCase = hyApp.SimulationCases.Open(r\"C:\\Program Files\\AspenTech\\Aspen HYSYS V14.0\\Samples\\Sweet Gas Refrigeration Plant.hsc\")\nhyCase.Activate()\n\n# Get the column sub-flowsheet and solve it\nhycol = hyCase.Flowsheet.Operations.Item('DePropanizer')\nhycolfs = hycol.ColumnFlowsheet\nhycolfs.Run()\n\n# Cast the column flowsheet to a BackDoor object and stop the solver\nbdcolfs = CastTo(hycolfs, 'BackDoor')\nbdcolfs.SendBackDoorMessage(\":Stop\")\n\n# Read the condenser pressure backdoor variable, then set a new value\npCondVariable = bdcolfs.BackDoorVariable(\":Pressure.500.0.0\").Variable\npCondValue = pCondVariable.GetValue()\n\npCondVariable.SetValue(1400)\n\nAlso, the HYSYS case for this example is available in the default installation path: C:\\Program Files\\AspenTech\\Aspen HYSYS V14.0\\Samples\\Sweet Gas Refrigeration Plant.hsc\n\nNote: the Python code refers to a V14 example file. It can be modified for older versions.\n\nKeyWords\nHYSYS, Automation, COM, Python, Backdoor", "keywords": null, "reference": null}, {"id": "000101360", "problem_statement": "This article provides instructions on how to perform a Tomcat upgrade on AspenONE Process Explorer V12 or V14 from Tomcat 9.0.27 or 9.0.56 to 9.0.78", "solution": "Note: These instructions are version specific to both the A1PE version and the Tomcat versions listed above. \nTo upgrade V12 Tomcat from 9.0.27 to 9.0.56 see KB 000099556.\nUse this KB to upgrade the A1PE V12 or V14 Tomcat version from 9.0.27 to 9.0.78. Or, if you have already upgraded to Tomcat 9.0.56 using KB 000099556, you may use this to upgrade from Tomcat 9.0.56 to 9.0.78. \n(Note: if .war files other than AspenCoreSearch.war and solr.war exist under C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27\\webapps\\ or C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.56\\webapps\\, please contact Support and do not continue with these upgrade steps, since these instructions are only applicable to AspenCoreSearch and solr deployments.) \n \nUpgrading of Tomcat should only take place on a system with a correctly functioning Aspen Search. To confirm functionality, execute verifications 2 \u2013 5 from the Post-installation Verification section before starting this update. \n\nIf A1PE and APEM are both installed on the same box, please contact the Aspentech \u2013 MES Applications team before doing the Tomcat upgrade. These instructions are not applicable for APEM. \nNote that this Tomcat install for 9.0.78 is a full Tomcat installation. It does not update the previous installation \u201cin situ\u201d \u2013 instead it creates an entirely new one and registers it as a Windows Service. The installation instructions below identify steps to prepare for the upgrade, steps for executing the Tomcat upgrade, and required post-install steps. (Making a checkpoint/backup at this time is recommended) \n\nSteps: \n\nPreparation steps \nOpen Windows Services to note the installed location of Tomcat: \nFind the service starting with Apache Tomcat. 
\nNote the \u201cLog On As\u201d account of the current Tomcat configuration. \nNote the path to the Tomcat base folder in Properties, in the General tab, in\u202fPath to executable. It should be the following: C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27. That folder location will be referred to as <Tomcat base> in the first part of this document. \nOpen the Tomcat9w application in <Tomcat base>\\bin\\, and make note of the Java JRE version. You will need this when installing Tomcat.\nIf you had previously made changes to Tomcat settings, we recommend you use the Tomcat9w application to make screen-shots or copies of existing settings so they can be reapplied later if required. \nShut down the Apache Tomcat service and SolrWindowsService in the Windows Services app. \nOpen a cmd window as Administrator and do: \nRun \u201ciisreset /STOP\u201d and wait for it to stop. \nFrom the cmd window as Administrator do: \nNavigate to C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27\\bin\\ \nRun \u201cservice.bat uninstall\u201d \nNavigate to C:\\Program Files\\Common Files\\AspenTech Shared\\ \nBefore installing the new Tomcat version, copy the following files and folders to a safe location: \nCopy the entire directory: <Tomcat base>\\conf \nCopy the entire directory <Tomcat base>\\appdata\\scheduler \nCopy the AspenCoreSearch.war and solr.war files found in <Tomcat base>\\webapps\\. \nClose the Windows Services window. \nBe sure there are no cmd windows, Windows File Explorer, or any other programs accessing the Tomcat directory. Then rename the Tomcat directory to a name that does not include \u201cTomcat\u201d (e.g. change the name to \u201cXomcat9.0.27\u201d). \nNow you are ready to install Tomcat 9.0.78. \nInstalling Tomcat 9.0.78 \nDownload the 32-bit/64-bit Windows Service Installer for 9.0.78: \nGo to https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.78/bin/ \nDownload the installation file: apache-tomcat-9.0.78.exe \nRetrieve the corresponding sha512 value (apache-tomcat-9.0.78.exe.asc or apache-tomcat-9.0.78.exe.sha512) and confirm the integrity of the downloaded exe file. Steps to confirm integrity: \nOpen a Windows Powershell window. \nNavigate to the location where apache-tomcat-9.0.78.exe was downloaded.\nRun the following command: certutil -hashfile .\\apache-tomcat-9.0.78.exe sha512 \nConfirm the hash value returned is the same as shown in the corresponding apache-tomcat-9.0.78.exe.sha512. \n Run the installer (apache-tomcat-9.0.78.exe) as an administrator. \n\n \n Only check the boxes shown as checked below. Do not check other boxes: \n \n Keep the Windows Service Name of: Tomcat9 \n \n Click \u201cNext\u201d. The installer should find the JRE path for you, but you should double-check to ensure it is the expected version matching that used by the earlier Tomcat (as noted in the earlier section). Only include the part of the path up to the \u201cbin\u201d directory \u2013 as in the example below. Correct it if necessary. (note: Java 11 is a requirement.) \n\n Update the Tomcat install path to C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.78. (Important: Do not use the default Apache Software Foundation path). \n\n \n Click the \u201cInstall\u201d button and wait for the installation to complete. \n \n Uncheck the two boxes: \n \n Click the \u201cFinish\u201d button.\n Open the new instance of Windows Services, and check the Apache Tomcat 9.0 Tomcat9 service configuration. Open the service Properties: \nVerify the \u201cStartup type\u201d is \u201cAutomatic\u201d (do not use the Delayed Start option). 
\nCheck the \u201cLog On As\u201d of the new Tomcat. Verify or update as needed to correspond to the earlier Tomcat \u201cLog On As\u201d account noted earlier. \n Now you are ready to re-apply the earlier configurations. Do not start the Tomcat service yet. \n\nReapply earlier configuration to new Tomcat: \n\nIn the remainder of this document, this folder location: \u201cC:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.78\u201d, will be referred to as <new Tomcat base>. \n Copy the saved AspenCoreSearch.war and solr.war into the <new Tomcat base>\\webapps\\ directory. \nDelete the \u201cROOT\u201d folder from the new <new Tomcat base>\\webapps\\ directory. \nCopy the files in the new <new Tomcat base>\\conf\\ directory into a temporary location (e.g. C:\\Temp\\TC9.0.78Config\\). This is because some changes from these new files may need to be merged into existing configuration files.\nReplace the directories and files beneath the <new Tomcat base>\\conf\\ directory with the ones saved earlier from the previous Tomcat (i.e. from 9.0.27 or 9.0.56), except for tomcat-users.xsd, web.xml and Catalina.properties, for which you should keep and use the new 9.0.78 versions of those 3 files: \nAspenSearch.keystore \ncatalina.policy \ncontext.xml \njaspic-providers.xml \njaspic-providers.xsd \nlogging.properties \nserver.xml \ntomcat-users.xml \nCatalina\\localhost\\AspenCoreSearch.xml \nCatalina\\localhost\\solr.xml \nAdd the \\appdata\\scheduler\\ folder that was saved earlier from 9.0.27 or 9.0.56. \nDelete the <new Tomcat base>\\appdata\\scheduler\\logs directory. (it will be re-created again on startup) \nIf you are upgrading from Tomcat 9.0.27 directly to 9.0.78, then make the following edits to these files. Otherwise just confirm these changes are in place: \n<new Tomcat base>\\conf\\catalina.policy : comment out the following line (i.e. put the double slash at the start of the line: // ): \n\u201cpermission java.util.PropertyPermission \"org.apache.juli.AsyncLoggerPollInterval\", \"read\"; \n<new Tomcat base>\\conf\\server.xml: search for the string specifying the AspenSearch.keystore, and if it exists, update the path to the new version: \nkeystoreFile=\"C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.78\\conf\\AspenSearch.keystore\" \n<new Tomcat base>\\conf\\server.xml: unless you have special requirements requiring AJP, we recommend you disable AJP for increased security. The AJP connector is not used by search, and the AJP settings have changed, so comment out the AJP <Connector> element (the exact attributes in your server.xml may differ), for example: \n<Connector port=\"8009\" protocol=\"AJP/1.3\" redirectPort=\"8443\" /> \nSo it appears like this: \n<!-- <Connector port=\"8009\" protocol=\"AJP/1.3\" redirectPort=\"8443\" /> --> \nIf you had made changes to Tomcat9w settings in the previous version of Tomcat, now is a good time to check and, if needed, reapply the same settings based on earlier screen-shots, using the Tomcat9w application in <new Tomcat base>\\bin\\. \nStart Apache Tomcat and SolrWindowsService from the Windows Services. That will redeploy these two war files in the new Tomcat installation: \nsolr.war \nAspenCoreSearch.war \nWait until the AspenSchedulerStartUp.log is created in the /logs/ directory \nFrom the cmd window, run \u201ciisreset /START\u201d \nPost-installation Verification : \n\nThese steps are to confirm the Tomcat update was completed as expected, and search functionality is in place: \nNavigate to the directory C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.78\\logs and display the most recent of the Catalina*.log files. 
It should show the following Tomcat version Name, build date and version numbers: \nINFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/9.0.78 \nINFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 4 2023 13:15:43 UTC \nINFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.78.0 \n Open the AspenONE Process Explorer Administration tool (or navigate to http://localhost/processExplorer/webcontrols/PBitemdetails.asp?admin=true), and confirm no errors are displayed at the top of the \u201cAll\u201d tab, like so: \n In the Tags tab, confirm search data scanning functions as expected with the Tags tab by specifying scanning of a single known tag: \n \nAnd then click the \u201cScan\u201d button. Expect the results panel to display: \n100% of 1 tags completed \nScan completed with 1 tags published \nNavigate to Process Explorer home page (http://localhost/processexplorer/aspenONE.html#), and click on the magnifying glass: \n \nWait and verify searched data is displayed. (This may take a while displaying the spinner since some first-time initialization may occur.) \n \n Navigate again to the Process Explorer application (http://localhost/processexplorer/aspenONE.html#), and click on the Process Explorer icon: \n \nConfirm expected functionality when typing in a known tag into the search box. \n\n \n \n\n\nCVEs fixed in Tomcat when updating from Tomcat 9.0.27 to Tomcat 9.0.78:\nCVE # Impact level * Summary\nCVE-2023-34981 Important Information disclosure\nCVE-2023-28709 Moderate Apache Tomcat denial of service\nCVE-2023-28708 Important Apache Tomcat information disclosure\nCVE-2023-24998 Important Apache Tomcat denial of service\nCVE-2022-45143 Low Apache Tomcat JsonErrorReportValve injection\nCVE-2022-42252 Low Apache Tomcat request smuggling\nCVE-2022-34305 Low Apache Tomcat XSS in examples web application\nCVE-2022-29885 Low Apache Tomcat EncryptInterceptor DoS\nCVE-2021-43980 High Information Disclosure\nCVE-2022-23181 Low Local Privilege Escalation\nCVE-2021-42340 Important Denial of Service\nCVE-2021-33037 Important Request Smuggling\nCVE-2021-30640 Low Authentication weakness\nCVE-2021-30639 Important Denial of Service\nCVE-2021-41079 Important Denial of Service\nCVE-2024-21733 Important Information Disclosure\nCVE-2021-25329 Low Fix for CVE-2020-9484 was incomplete\nCVE-2021-25122 Important Request mix-up with h2c\nCVE-2021-24122 Important Information disclosure\nCVE-2020-17527 Moderate HTTP/2 request header mix-up\nCVE-2020-13943 Moderate HTTP/2 request mix-up\nCVE-2020-13935 Important WebSocket DoS\nCVE-2020-13934 Moderate HTTP/2 DoS\nCVE-2020-11996 Important HTTP/2 DoS\nCVE-2020-9484 Important Remote Code Execution via session persistence\nCVE-2020-1938 Important AJP Request Injection and potential Remote Code Execution\nCVE-2020-1935 Low HTTP Request Smuggling\nCVE-2019-17569 Low HTTP Request Smuggling\nCVE-2019-17563 Low Session fixation\nCVE-2019-12418 Moderate Local Privilege Escalation\n\n* Impact Level is defined by the Apache Tomcat Security Team. 
For level definitions, see: https://tomcat.apache.org/security-impact.html\n\n\n\n\n\nKey Words\n\nAspenONE Process Explorer\n\nTomcat 9.0.27\n\nTomcat 9.0.56\n\nTomcat 9.0.78\n\nUpgrade", "keywords": null, "reference": null}, {"id": "000101878", "problem_statement": "This article provides instructions on how to perform a Tomcat upgrade on AspenONE Process Explorer V12 or V14 from Tomcat 9.0.27 or 9.0.56 or 9.0.78 to 9.0.85.", "solution": "Note: These instructions are version specific to both the A1PE version and the Tomcat versions listed above. \nTo upgrade V12 Tomcat from 9.0.27 to 9.0.56 see KB 000099556.\nTo upgrade Tomcat from 9.0.27 or 9.0.56 to 9.0.78 see KB 000101360.\nUse this KB to upgrade the Tomcat version from 9.0.27 to 9.0.85. Or, if you have already upgraded to Tomcat 9.0.56 or 9.0.78 using KB 000099556 or KB 000101360, you may use this to upgrade from that Tomcat to Tomcat 9.0.85. \n(Note: if .war files other than AspenCoreSearch.war and solr.war exist under C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27\\webapps\\ or C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.56\\webapps\\ or C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.78\\webapps\\, please contact Support and do not continue with these upgrade steps, since these instructions are only applicable to AspenCoreSearch and solr deployments.) \n \nUpgrading of Tomcat should only take place on a system with a correctly functioning Aspen Search. To confirm functionality, execute verifications 2 \u2013 5 from the Post-installation Verification section before starting this update. \n\nIf A1PE and APEM are both installed on the same box, please contact the Aspentech \u2013 MES Applications team before doing the Tomcat upgrade. These instructions are not applicable for APEM. \nNote that this Tomcat installation for 9.0.85 is a full Tomcat installation. It does not update the previous installation \u201cin situ\u201d \u2013 instead it creates an entirely new one and registers it as a Windows Service. The installation instructions below identify steps to prepare for the upgrade, steps for executing the Tomcat upgrade, and required post-install steps. (Making a checkpoint/backup before updating is recommended) \n\nSteps: \n\nPreparation steps \nOpen Windows Services to note the installed location of Tomcat: \nFind the service starting with Apache Tomcat. \nNote the \u201cLog On As\u201d account of the current Tomcat configuration. \nNote the path to the Tomcat base folder in Properties, in the General tab, in\u202fPath to executable. It should be the following: C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27 (or .56 or .78). That folder location will be referred to as <Tomcat base> in the first part of this document. \nOpen the Tomcat9w application in <Tomcat base>\\bin\\, and make note of the Java Virtual Machine path shown in the Java tab. You will need this when installing Tomcat.\nIf you had previously made changes to Tomcat settings, we recommend you use the Tomcat9w application to make screen-shots or copies of existing settings so they can be reapplied later if required. \nShut down the Apache Tomcat service and SolrWindowsService in the Windows Services app. \nOpen a cmd window as Administrator and do: \nRun \u201ciisreset /STOP\u201d and wait for it to stop. 
\nFrom the cmd window as Administrator do: \nNavigate to the <Tomcat base>\\bin\\ folder.\nRun \u201cservice.bat uninstall\u201d \nNavigate to C:\\Program Files\\Common Files\\AspenTech Shared\\ \nBefore installing the new Tomcat version, copy the following files and folders to a safe location: \nCopy the entire directory: <Tomcat base>\\conf \nCopy the entire directory <Tomcat base>\\appdata\nCopy the AspenCoreSearch.war and solr.war files found in <Tomcat base>\\webapps\\. \nClose the Windows Services window. \nBe sure there are no cmd windows, Windows File Explorer, or any other programs accessing the Tomcat directory. Then rename the Tomcat directory to a name that does not include \u201cTomcat\u201d (e.g. change the name to have an \u201cX\u201d instead of \u201cT\u201d in the name \u2013 such as \u201cXomcat9.0.27\u201d). \nNow you are ready to install Tomcat 9.0.85. \nInstalling Tomcat 9.0.85 \nDownload the 32-bit/64-bit Windows Service Installer for 9.0.85: \nGo to https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.85/bin/\nDownload the installation file: apache-tomcat-9.0.85.exe \nRetrieve the corresponding sha512 value (apache-tomcat-9.0.85.exe.asc or apache-tomcat-9.0.85.exe.sha512) and confirm the integrity of the downloaded exe file. Steps to confirm integrity: \nOpen a Windows Powershell window. \nNavigate to the location where apache-tomcat-9.0.85.exe was downloaded.\nRun the following command: certutil -hashfile .\\apache-tomcat-9.0.85.exe sha512 \nConfirm the hash value returned is the same as shown in the corresponding apache-tomcat-9.0.85.exe.sha512. \n Run the installer (apache-tomcat-9.0.85.exe) as an administrator. \n\n \n Only check the boxes shown as checked below. Do not check other boxes: \n \n Keep the Windows Service Name of: Tomcat9 \n \n Click \u201cNext\u201d. The installer should find the JRE path for you, but you should double-check to ensure it is the expected version matching that used by the earlier Tomcat (see the \u201cJava Virtual Machine path\u201d as noted in the earlier Preparation Steps section). Only include the part of the path up to the \u201cbin\u201d directory \u2013 as in the example below. Correct it if necessary. (note: Java 11 is a requirement.) \n\n Update the Tomcat install path to C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.85. (Important: Do not use the default Apache Software Foundation path). \n\n \n Click the \u201cInstall\u201d button and wait for the installation to complete. \n \n Uncheck the two boxes: \n \n Click the \u201cFinish\u201d button.\n Open the new instance of Windows Services, and check the Apache Tomcat 9.0 Tomcat9 service configuration. Open the service Properties: \nVerify the \u201cStartup type\u201d is \u201cAutomatic\u201d (do not use the Delayed Start option). \nCheck the \u201cLog On As\u201d of the new Tomcat. Verify or update as needed to correspond to the earlier Tomcat \u201cLog On As\u201d account noted earlier. \n Now you are ready to re-apply the earlier configurations. Do not start the Tomcat service yet. \n\nReapply earlier configuration to new Tomcat: \n\nIn the remainder of this document, this folder location: \u201cC:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.85\u201d, will be referred to as <new Tomcat base>. \n Copy the saved AspenCoreSearch.war and solr.war into the folder <new Tomcat base>\\webapps\\. \nDelete the \u201cROOT\u201d folder from the new <new Tomcat base>\\webapps\\ directory. \nCopy the files in the new <new Tomcat base>\\conf\\ directory into a temporary location (e.g. C:\\Temp\\TC9.0.85Config\\). 
Run the installer (apache-tomcat-9.0.85.exe) as an administrator. \n\n \n Only check the boxes shown as checked below. Do not check other boxes: \n \n Keep the Windows Service Name of: Tomcat9 \n \n Click "Next". The installer should find the JRE path for you, but you should double-check to ensure it is the expected version matching that used by the earlier Tomcat (see the "Java Virtual Machine path" as noted in the earlier Preparation Steps section). Only include the part of the path up to the "bin" directory - as in the example below. Correct it if necessary. (Note: Java 11 is a requirement.) \n\n Update the Tomcat install path to C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.85. (Important: Do not use the default Apache Software Foundation path.) \n\n \n Click the "Install" button and wait for the installation to complete. \n \n Uncheck the two boxes: \n \n Click the "Finish" button.\n Open Windows Services again, and check the Apache Tomcat 9.0 Tomcat9 service configuration. Open the service Properties: \nVerify the "Startup type" is "Automatic" (do not use the Delayed Start option). \nCheck the "Log On As" of the new Tomcat. Verify or update as needed to correspond to the earlier Tomcat "Log On As" account noted earlier. \n Now you are ready to re-apply the earlier configurations. Do not start the Tomcat service yet. \n\nReapply earlier configuration to new Tomcat: \n\nIn the remainder of this document, this folder location: "C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.85" will be referred to as the new Tomcat base folder. \n Copy the saved AspenCoreSearch.war and solr.war into the new Tomcat base folder's \\webapps\\ directory. \nDelete the "ROOT" folder from the new \\webapps\\ directory. \nCopy the files in the new \\conf\\ directory into a temporary location (e.g. C:\\Temp\\TC9.0.85Config\\). This is because some changes from these new files may need to be merged into existing configuration files.\nReplace the files and subdirectories beneath the new \\conf\\ folder with the ones saved earlier from the previous Tomcat (i.e. from 9.0.27, 9.0.56 or 9.0.78), except for tomcat-users.xsd, web.xml and catalina.properties, for which you should keep and use the new 9.0.85 versions of those 3 files. So replace these files:\nAspenSearch.keystore \ncatalina.policy \ncontext.xml \njaspic-providers.xml \njaspic-providers.xsd \nlogging.properties \nserver.xml \ntomcat-users.xml \nCatalina\\localhost\\AspenCoreSearch.xml \nCatalina\\localhost\\solr.xml \nAdd the \\appdata\\ folder that was saved earlier from 9.0.27, 9.0.56 or 9.0.78 into the new Tomcat. \nDelete the \\appdata\\scheduler\\logs directory. (It will be re-created on startup.) \nDelete the \\appdata\\scheduler\\derby folder. (It will be re-created on startup as needed.) \nIf you are upgrading from Tomcat 9.0.27 directly to 9.0.85, then make the following edits to these files. Otherwise just confirm these changes are in place: \n\\conf\\catalina.policy: comment out the following line (i.e. put a double slash at the start of the line: // ): \npermission java.util.PropertyPermission "org.apache.juli.AsyncLoggerPollInterval", "read"; \n\\conf\\server.xml: search for the string specifying the AspenSearch.keystore, and if it exists, update the path to the new version: \nkeystoreFile="C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.85\\conf\\AspenSearch.keystore" \n\\conf\\server.xml: unless you have special requirements requiring AJP, we recommend you disable AJP for increased security. The AJP connector is not used by search, and the AJP settings have changed between versions, so comment out the <Connector protocol="AJP/1.3" .../> element by wrapping it in <!-- and --> so that it appears like this: \n<!-- <Connector protocol="AJP/1.3" ... /> --> \nIf you had made changes to Tomcat settings via Tomcat9w in the previous version of Tomcat, now is a good time to check and, if needed, reapply the same settings based on the earlier screenshots, using the Tomcat9w application in the new Tomcat base folder's \\bin\\ directory. \nStart the Apache Tomcat service and SolrWindowsService from Windows Services. That will redeploy these two war files in the new Tomcat installation: \nsolr.war \nAspenCoreSearch.war \nWait until the AspenSchedulerStartUp.log is created in the new Tomcat base folder's \\logs\\ directory \nFrom the cmd window, run "iisreset /START" \nPost-installation Verification: \n\nThese steps are to confirm the Tomcat update was completed as expected, and search functionality is in place: \nNavigate to the directory C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.85\\logs and display the most recent of the Catalina*.log files.
It should show the following Tomcat version name, build date and version numbers: \nINFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/9.0.85 \nINFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jan 5 2024 08:28:07 UTC \nINFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.85.0 \n Open the AspenONE Process Explorer Administration tool (or navigate to http://localhost/processExplorer/webcontrols/PBitemdetails.asp?admin=true), and confirm no errors are displayed at the top of the "All" tab, like so: \n In the Tags tab, confirm search data scanning functions as expected by specifying scanning of a single known tag: \n \nAnd then click the "Scan" button. Expect the results panel to display: \n100% of 1 tags completed \nScan completed with 1 tags published \nNavigate to the Process Explorer home page (http://localhost/processexplorer/aspenONE.html#), and click on the magnifying glass: \n \nWait and verify searched data is displayed. (This may take a while displaying the spinner since some first-time initialization may occur.) \n \n Navigate again to the Process Explorer application (http://localhost/processexplorer/aspenONE.html#), and click on the Process Explorer icon: \n \nConfirm expected functionality when typing a known tag into the search box. \n\n \n \n\nCVEs fixed in Tomcat when updating from Tomcat 9.0.27 to Tomcat 9.0.85:\nCVE # Impact level * Summary\n CVE-2023-46589 Important Request smuggling\n CVE-2023-45648 Important Request smuggling\n CVE-2023-44487 Important Denial of Service\n CVE-2023-42795 Important Information Disclosure\n CVE-2023-42794 Low Denial of Service\n CVE-2023-41080 Moderate Open redirect\n CVE-2023-34981 Important Information disclosure\n CVE-2023-28709 Moderate Apache Tomcat denial of service\n CVE-2023-28708 Important Apache Tomcat information disclosure\n CVE-2023-24998 Important Apache Tomcat denial of service\n CVE-2022-45143 Low Apache Tomcat JsonErrorReportValve injection\n CVE-2022-42252 Low Apache Tomcat request smuggling\n CVE-2022-34305 Low Apache Tomcat XSS in examples web application\n CVE-2022-29885 Low Apache Tomcat EncryptInterceptor DoS\n CVE-2021-43980 High Information Disclosure\n CVE-2022-23181 Low Local Privilege Escalation\n CVE-2021-42340 Important Denial of Service\n CVE-2021-33037 Important Request Smuggling\n CVE-2021-30640 Low Authentication weakness\n CVE-2021-30639 Important Denial of Service\n CVE-2021-41079 Important Denial of Service\n CVE-2024-21733 Important Information Disclosure\n CVE-2021-25329 Low Fix for CVE-2020-9484 was incomplete\n CVE-2021-25122 Important Request mix-up with h2c\n CVE-2021-24122 Important Information disclosure\n CVE-2020-17527 Moderate HTTP/2 request header mix-up\n CVE-2020-13943 Moderate HTTP/2 request mix-up\n CVE-2020-13935 Important WebSocket DoS\n CVE-2020-13934 Moderate HTTP/2 DoS\n CVE-2020-11996 Important HTTP/2 DoS\n CVE-2020-9484 Important Remote Code Execution via session persistence\n CVE-2020-1938 Important AJP Request Injection and potential Remote Code Execution\n CVE-2020-1935 Low HTTP Request Smuggling\n CVE-2019-17569 Low HTTP Request Smuggling\n CVE-2019-17563 Low Session fixation\n CVE-2019-12418 Moderate Local Privilege Escalation\n\n* Impact Level is defined by the Apache Tomcat Security Team.
For level definitions, see: https://tomcat.apache.org/security-impact.html\n\n\nKey Words\n\nAspenONE Process Explorer\nTomcat 9.0.27 \nTomcat 9.0.56 \nTomcat 9.0.78 \nTomcat 9.0.85\nUpgrade\nCVE-2023-46589\nCVE-2023-45648\nCVE-2023-44487\nCVE-2023-42795\nCVE-2023-42794\nCVE-2023-41080\nCVE-2023-34981\nCVE-2023-28709\nCVE-2023-28708\nCVE-2023-24998\nCVE-2022-45143\nCVE-2022-42252\nCVE-2022-34305\nCVE-2022-29885\nCVE-2021-43980\nCVE-2022-23181\nCVE-2021-42340\nCVE-2021-33037\nCVE-2021-30640\nCVE-2021-30639\nCVE-2021-41079\nCVE-2024-21733\nCVE-2021-25329\nCVE-2021-25122\nCVE-2021-24122\nCVE-2020-17527\nCVE-2020-13943\nCVE-2020-13935\nCVE-2020-13934\nCVE-2020-11996\nCVE-2020-9484\nCVE-2020-1938\nCVE-2020-1935\nCVE-2019-17569\nCVE-2019-17563\nCVE-2019-12418", "keywords": null, "reference": null}, {"id": "000101876", "problem_statement": "How do you calculate the make-up flow rate for a recycle loop using the new MAKEUP block available in V14?", "solution": "There is a new MAKEUP block in V14. The MAKEUP block provides a way to balance the circulating material flow in recycle loops. Makeup adjusts the flow rate of its inlet makeup streams, and purges a portion of the feed flow, as necessary, in order to maintain specifications. Previously, users needed to calculate the make-up stream flow using a Calculator block. An example of the Calculator block method is illustrated in knowledgebase document 56791.\n\nTo specify the MAKEUP block, under To control, enter a specification for the Total mole flow or Total mass flow of the outlet stream. You may also specify the Component mole fraction or Component mass fraction for one or more components. Under Manipulate, select each Makeup and/or Purge stream whose flow rate Makeup should manipulate. You may optionally specify an upper bound, in mole or mass basis, for the flow rate of each manipulated stream. The number of manipulated streams must match the number of To control specifications.\n\nThe attached example uses the same simulation as in the Calculator block make-up example and instead uses the MAKEUP block. The file will run in V14 and higher.", "keywords": null, "reference": null}, {"id": "000101742", "problem_statement": "Aspen DMC3 Builder currently does not have a built-in function to return the length of a string.", "solution": "A custom function formula can be defined using the custom calculation tool. This formula will take the string as an input for which the length needs to be determined, and return the length of the string as an integer value.\n\nAttached is the String_Length.xml file which can be directly imported into the Calculations node of DMC3 Builder to use this formula.\nTo import the calculation, navigate to the Calculations node of the Controllers section, click on the Import Calcs button in the top tools ribbon, and select the attached String_Length.xml file.\n\nThe following pop-up dialog is displayed: \n Click Merge. A new formula with the name StrLength will be added under the Formulas view: \n \n\nAn example input calculation with the name TestStrFormula can be switched to test mode to confirm the formula:", "keywords": null, "reference": null}, {"id": "000101741", "problem_statement": "Aspen DMC3 Builder currently does not allow a direct datetime comparison within an IF clause.", "solution": "To overcome this, a custom function formula can be defined using the custom calculation tool.
This formula (aka function) will take two input parameters, DateTime1 and DateTime2, which are the two datetimes to be compared, and will return 3 possible outcomes based on the comparison:\n\n- 0: if both datetimes are equal\n- 1: if DateTime1 is newer than DateTime2\n- 2: if DateTime2 is newer than DateTime1\n\nThe formula is as follows:\n'DateTime Comparison Formula \n'How to use it in a calc: DTC_Res=DTComparison(DateTime1,DateTime2) \n'Possible results: \n'- 0: if both datetimes are equal \n'- 1: if DateTime1 is newer than DateTime2 \n'- 2: if DateTime2 is newer than DateTime1 \n'Note: DateTime1 and DateTime2 are both of DateTime format. \n\nReturnValue = DT1 \nReturnValue = DT2 \n\nif(Year(DT1) > Year(DT2)) then \n RetVal = 1 \nelseif ( Year(DT2) > Year(DT1)) then \n RetVal = 2 \nelseif (Month(DT1) > Month(DT2)) then \n RetVal = 1 \nelseif (Month(DT2) > Month(DT1)) then \n RetVal = 2 \nelseif (Day(DT1) > Day(DT2)) then \n RetVal = 1 \nelseif (Day(DT2) > Day(DT1)) then \n RetVal = 2 \nelseif (Hour(DT1) > Hour(DT2)) then \n RetVal = 1 \nelseif (Hour(DT2) > Hour(DT1)) then \n RetVal = 2 \nelseif (Minute(DT1) > Minute(DT2)) then \n RetVal = 1 \nelseif (Minute(DT2) > Minute(DT1)) then \n RetVal = 2 \nelseif (Second(DT1) > Second(DT2)) then \n RetVal = 1 \nelseif (Second(DT2) > Second(DT1)) then \n RetVal = 2 \nelse \n RetVal = 0 \nend if \n\nReturnValue = RetVal \n Attached is the DTComparison.xml file which can be directly imported into the Calculations node of DMC3 Builder to use this formula.\nTo import the calculation, navigate to the Calculations node of the Controllers section, click on the Import Calcs button in the top tools ribbon, and select the attached DTComparison.xml file.\nThe following pop-up dialog is displayed: \n Click Merge, and a new formula named DTComparison will be added under the Formulas view.\n\nThe screenshot below shows how this formula can be used in a calculation:", "keywords": null, "reference": null}, {"id": "000064840", "problem_statement": "How can I change the Test Mode property for Aspen Calc calculations using an Aspen SQLplus query?", "solution": "In the Aspen SQLplus Query Writer use the View | References... menu option to add references to the CalcScheduler and Aspen Calc Components:\n Click on the OK button and copy the following code into the query writer:\n local calccmd, CalcName, Calculation, FolderName, GroupName;\n calccmd = CreateObject('CalcScheduler.CalcCommands');\n FolderName = 'Folder1'; --Change folder name as needed\n For each CalcName in calccmd.GetCalculationList Do\n If CalcName Like '%'||FolderName||'%' Then\n Calculation = calccmd.GetCalculationObject(CalcName);\n Calculation.SandBoxMode=0; --Use 0 for non-test mode and 1 for test mode\n End;\nEnd;\nLeaving the FolderName blank (FolderName='';) changes all Aspen Calc calculations. Set Calculation.SandBoxMode to 0 to disable test mode, and set Calculation.SandBoxMode to 1 to enable test mode.\n\nIn test mode calculations will not write back to the database.", "keywords": "Aspen Calc\nAspen SQLplus\nSandBoxMode\nSand Box\nTest Mode", "reference": null}, {"id": "000101862", "problem_statement": "The following article provides a guide to identifying Cim-IO issues. Once the issues are identified, we recommend looking for other articles that provide more information about your specific error messages or problems.
You may also share your results with the Customer Support Team for further investigation.", "solution": "Open the InfoPlus.21 Manager and check whether TSK_A, TSK_M and TSK_U (if it exists) are running. Review their error/output files (if you double-click the task, you should see a folder icon on the right side; it allows you to open these messages).\nAre there any error/warning messages in the Event Viewer related to the issue?\nOpen Task Manager, go to the Details tab and make sure AsyncDlgp.exe is up and running.\nIs the Cim-IO Connection Manager configured correctly?\nIs the Cim-IO Interface Manager configured correctly?\nAre there any error messages in the IO_LAST_STATUS and IO_LAST_STATUS_DESC fields of any get transfer? For example, a Wait For Async message.\nIf you are using OPC UA, review the certificates\nCim-IO OPC UA interface processes (read/write/unsol) rejects the untrusted certificate of the target OPC UA Server https://esupport.aspentech.com/S_Article?id=000097431\nCheck that the ports are configured correctly\nFirewall port requirements for Aspen MES Applications https://esupport.aspentech.com/S_Article?id=000049729\nHow to determine if a port is being blocked by a firewall. https://esupport.aspentech.com/S_Article?id=000087535\nWhat TCP ports must be open through a firewall between IP.21 and CIM-IO server? https://esupport.aspentech.com/S_Article?id=000067947\nIf you are using OPC DA, confirm that DCOM is configured correctly.\nDCOM considerations when OPC Server is remote to Aspen Cim-IO for OPC interface. https://esupport.aspentech.com/S_Article?id=000086727\n\nKey Words\n\nCimio\nGuide\nIssue\nError", "keywords": null, "reference": null}, {"id": "000101861", "problem_statement": "The following article provides a simple troubleshooting guide for Cim-IO issues. Once the issues are identified, we recommend looking for other articles that provide more information on your specific error messages/problems. You may also share your results with the Customer Support Team for further investigation.", "solution": "Open the InfoPlus.21 Manager and check if TSK_A, TSK_M, and TSK_U (if it exists) are all running. Review their Error/Output files (if you double-click the task you should see a folder icon on the right side; that allows you to open these messages). \nAre there any error/warning messages in the Event Viewer related to the issue?\nOpen Task Manager, go to the Details tab and make sure the AsyncDlgp.exe is up and running.\nIs the Cim-IO Connection Manager configured correctly?\nIs the Cim-IO Interface Manager configured correctly?\nAre there any error messages in the IO_LAST_STATUS, IO_LAST_STATUS_DESC fields of any get transfers? For example, a Wait For Async message.\nIf you are using OPC UA, please review if all certificates are trusted\nCim-IO OPC UA interface processes (read/write/unsol) rejects the untrusted certificate of the target OPC UA Server https://esupport.aspentech.com/S_Article?id=000097431\nAre the Cim-IO ports configured properly? \nFirewall port requirements for Aspen MES Applications https://esupport.aspentech.com/S_Article?id=000049729\nHow to determine if a port is being blocked by a firewall.
https://esupport.aspentech.com/S_Article?id=000087535\nWhat TCP ports must be open through a firewall between IP.21 and CIM-IO server? https://esupport.aspentech.com/S_Article?id=000067947\nIf you are using OPC DA, please confirm if DCOM is configured correctly.\nDCOM considerations when OPC Server is remote to Aspen Cim-IO for OPC interface. https://esupport.aspentech.com/S_Article?id=000086727\nKey Words\n\nCimio\nTroubleshooting\nGuide\nError", "keywords": null, "reference": null}, {"id": "000077087", "problem_statement": "How do you model a liquid whose viscosity is independent of composition, but dependent on temperature?\nThe model desired is:\nmu = exp(A + B/T)\nwhere the units for viscosity are Pa s. The parameters A and B should be available from the GUI.", "solution": "This viscosity model is not included in the models available for Aspen Plus; however, it is possible to write a user subroutine for viscosity.\nSee the Fortran code for additional information about how the code is written.\n\nSteps to Use the Subroutine:\nModify an existing Property Method to use route MULMXUSR for MULMX.\nAdd a user parameter MUUSR, T-dependent with two elements.\nUse the attached Excel file to fit data (an equivalent regression is sketched below).\n Enter values for A and B for the first component in the component list.\nMUUSR/1 = A\nMUUSR/2 = B\n\nCompile the attached Fortran code for MUL2U using the command ASPCOMP from a Customize Aspen Plus window.\nNote: since it is not possible to enter the A and B parameters without associating them with a component, the attached subroutine assumes they have been defined for the first component in the list. Defining different values for other components will not have any effect. Not defining a value for the first component will yield undefined behavior. If you need the code to work independently of the order of the components in the component list, you can hard code the values of A and B in the Fortran source.
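\nSince ln(mu) = A + B/T is linear in 1/T, A and B can also be fitted by simple linear regression. A minimal sketch (Python; the data points below are made-up placeholders, and this only mirrors what the attached Excel file does):

# Illustrative fit of mu = exp(A + B/T), linearized as ln(mu) = A + B*(1/T).
import numpy as np

T = np.array([300.0, 320.0, 340.0, 360.0])       # temperatures, K (example data)
mu = np.array([1.0e-3, 7.4e-4, 5.7e-4, 4.5e-4])  # viscosities, Pa s (example data)

# polyfit with degree 1 returns [slope, intercept] = [B, A]
B, A = np.polyfit(1.0 / T, np.log(mu), 1)
print("MUUSR/1 = A =", round(A, 4))
print("MUUSR/2 = B =", round(B, 1))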
", "keywords": "Fortran user subroutine Viscosity Transport Properties\nMUL2U", "reference": null}, {"id": "000075116", "problem_statement": "AsyncDLGP fails to reconnect after OPC Server is rebooted.", "solution": "When the AsyncDLGP process detects a loss of communication with the OPC Server, it closes its gate and stops accepting further requests for data. It will then try to reconnect to the OPC Server. Once communication is restored, it will re-open its gate to accept requests for data. While the gate is closed, the interface is displayed as red in Cim-IO Interface Manager.\n \nSolution\n\nThe recommended solution is to enable the "Check health of server processes" option for the interface. There is a rule in Cim-IO Manager that if the "gate" of the AsyncDLGP process is closed, the process is kept active until it reconnects to the OPC Server and opens its "gate". We want to disable this rule so that Cim-IO Manager kills the AsyncDLGP process that lost communication with the OPC Server and starts a new AsyncDLGP process instead.\n\n \n\nThe proposed solution may help avoid the manual intervention required if communication between Cim-IO and OPC goes down. The current AsyncDLGP process will be terminated and a new AsyncDLGP process will be created. You need to create a registry value as per below.\n- CheckHealthRestartDlgp (DWORD - v12.2 and above), set the value to 1\n- HealthCheckSupported (String - v11 and below), set the value to NO as per the below screenshot.\n(HKLM\\SOFTWARE\\Wow6432Node\\AspenTech\\CIM-IO to OPC Interface\\)
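\nFor reference, the v12.2+ DWORD value can also be created with a short script. A minimal sketch (Python, run from an elevated session; winreg is in the standard library), shown purely as an illustration of the key and value described above:

# Illustrative sketch: create CheckHealthRestartDlgp (DWORD = 1) for v12.2 and above.
# Run as administrator.
import winreg

KEY_PATH = r"SOFTWARE\Wow6432Node\AspenTech\CIM-IO to OPC Interface"

# Open (or create) the key and set the DWORD value.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "CheckHealthRestartDlgp", 0, winreg.REG_DWORD, 1)

# For v11 and below, the equivalent would be a String value instead:
#     winreg.SetValueEx(key, "HealthCheckSupported", 0, winreg.REG_SZ, "NO")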
 \nAfter entering the new global data value in the registry, you need to restart every instance of Cim-IO for OPC.\n\nv12.2 and above (Aspen InfoPlus21 V12.2.2.2 ECR_00922736)\n\n\nv11, and V10 with latest available ECR.\n\nIn case of need, a video tutorial on "how to create the string value" can be found below:", "keywords": "HealthCheckSupported\nAsyncDLGP reconnect\nOPC Server Reboot", "reference": null}, {"id": "000101832", "problem_statement": "How to edit the value of a field on Aspen InfoPlus.21 using AspenONE Process Explorer.", "solution": "Open Aspen Process Graphic Studio.\nSelect File, New, and create a New Project.\nSelect the Integer Data icon and draw a rectangle on the drawing area.\nRight-click the rectangle and select Properties.\nOn the General tab, check the Allow Data Entry button in the Data Entry section.\nGo to the Data tab and select the Data Source.\nType the tag's name in the Name section and add a space after it. Next, type the name of the field you'd like to work with. Note: Make sure it has the exact same name as the field in Aspen InfoPlus.21.\nClick the OK button.\nGo to File, Save As, and save the project with the title of your preference.\nGo to File, Publish, to AspenONE Process Explorer, and click OK.\nOpen AspenONE Process Explorer and look for the published file.\nClick the rectangle and modify the number to your preference. \nOnce you refresh Aspen InfoPlus.21 you should be able to see the new value.\n\n\n\nKey Words\n\nAspenONE Process Explorer\nModify\nEdit\nField", "keywords": null, "reference": null}, {"id": "000101660", "problem_statement": "Within an Activated Economic Analysis in Aspen Plus, by default, Aspen Process Economic Analyzer (APEA) uses 8760 hr/yr (24 hr x 365) as the basis for calculating the total profits of an Operating Year.", "solution": "Although you have modified the Operating Year time in the embedded Template (.IZT file), Aspen Plus will still use the default value, 8760 hr/yr. So you must override this input by going to:\n\n1) Setup, in the Control Panel\n\n2) Global, Operational Year.", "keywords": "Operational Year, Template, Cost Options, Cost per Hour", "reference": null}, {"id": "000101659", "problem_statement": "When you set the price through the Stream Price form, the values are not correctly set, and they are not really considered.", "solution": "Set the price directly in the stream specs; the values are then correctly added to the Economic Analysis and it works as expected.\n\n\n\nKey Words\n\nStream Price, Operational Year, Costing, Material Stream", "keywords": null, "reference": null}, {"id": "000099758", "problem_statement": "Why do my Unit Sets update/copy when I open a new HYSYS simulation file?", "solution": "When a custom unit set is created, that unit set can be named by the user and then used in multiple simulation files.\nAs can be seen below, unit sets will occasionally duplicate/copy themselves as a user opens multiple simulation case files.\n\n\nThe reason this may occur is that if any of the unit types are changed within a user unit set (for example NewUser8a) and then saved in one simulation file, that unit set is updated to reflect the change.\nIf another simulation file is also set to use NewUser8a, when that simulation is opened, the duplicate NewUser8b will be made. This is because the unit type change was not made in the latter simulation file; to prevent units being reported incorrectly in each simulation that uses a unit set, a changed unit set duplicates in any subsequent files to keep the original unit type settings.", "keywords": "Unit Set, Copy", "reference": null}, {"id": "000094009", "problem_statement": "How to convert a template file (.tpl) to a HYSYS file (.hsc)?", "solution": "A template flowsheet is a normal HYSYS flowsheet with some additional information contained in its main properties. It uses the file extension .tpl when it is saved rather than the regular .hsc. A template allows the flowsheet to be added as a sub-flowsheet in other HYSYS cases. You can create a new template by going to File | New | New Template. You can also convert an existing HYSYS case to a template by going to Customize | Convert to Template in the simulation environment. \n \nOnce a HYSYS case is saved as a template file, there is no direct way to convert a .tpl file back to a .hsc file. The File | Save As window will only show the file extension .tpl, not .hsc. For this reason, we recommend saving a copy of your file as .hsc before saving it as .tpl. If you need to convert a template file back to a HYSYS file, then you can try to manually change the file extension from .tpl to .hsc, but this usually does not work. \n\nAs a workaround, you can follow the steps below to convert a template file to a HYSYS file.\n\n1. Open HYSYS and create a new HYSYS case.\n2. Add a dummy component list and fluid package to enter the simulation environment.\n3. In the simulation environment, add a sub-flowsheet and select the "Read an Existing Template" option.\n\n\n\n4. Select the template file that you want to convert to a HYSYS file. \n5. In the properties environment, delete the dummy component list and fluid package created in step 2.\n6. At this point you can save the file as .hsc and it will contain all the information from the template in a sub-flowsheet. To move the process out of the sub-flowsheet, follow the remaining steps.\n7. Enter the sub-flowsheet. Select all objects. Right-click and select Copy (or press Ctrl+C).\n8. Return to the main flowsheet. Right-click and select Paste (or press Ctrl+V).\n9. Another option is to right-click on the sub-flowsheet icon and select the option "Move Contents to Owner Flowsheet". \n10.
Delete the sub-flowsheet and save the file as a HYSYS case with a .hsc extension.", "keywords": "*.tpl, *.hfl, *.hsc, file conversion, file type, file extension", "reference": null}, {"id": "000101850", "problem_statement": "How to generate a summary report for all scenarios created in Aspen Flare System Analyzer?", "solution": "A scenario defines a set of source conditions (flows, compositions, pressures and temperatures) for the entire network. The design of a typical flare header system will comprise many scenarios, for each of which the header system must have adequate hydraulic capacity.\n\nScenario management allows you to simultaneously design and rate the header system for all of the possible relief scenarios.\n\nAfter running a simulation case in Flarenet, the user can view the results of the selected source for all scenarios by clicking on the Results folder in the navigation pane | Scenario Summary | and clicking the Excel option to generate the summary report in MS Excel.", "keywords": "Relief, Scenario, Source, Condition, Result", "reference": "https://esupport.aspentech.com/S_Article?id=000050577"}, {"id": "000100880", "problem_statement": "If the ADSA configuration is not properly done or if there is a communication issue between the servers, you will not be able to visualize Aspen Watch data from the Aspen APC Web Interface or APC Web page. A clear guide is needed to configure the server connection for the first time, including troubleshooting for common connection or configuration issues.", "solution": "The way that data is transmitted between the Aspen Watch (AW) server and the PCWS (Production Control Web Server) is through an Aspen product called ADSA, which uses the TCP/IP protocol to transfer the data and display it as History Plots, Control Objectives, KPIs, Reports, etc.\n\nTo ensure that the connection is successful the following should be configured between both servers:\nFirst ensure that a good connection exists between the machines, as follows:\nOn the AW server, open a command prompt window\nType ping followed by the PCWS server name and press Enter\nThere should be a response confirming that the PCWS server has replied within a short time\nRepeat this on the PCWS server and ping the AW server; ensure the response confirms that a reply was received\nNext ensure that both servers have the same clock time and same time zone. Note the following:\nA time difference of more than 2-5 minutes will cause the connection to fail.\nA Network Time Protocol is recommended.\nIf there is a firewall between these two servers, make sure that the ports highlighted later in this article are open and allow connections bidirectionally.
Also in this case it is necessary to configure InfoPlus.21 to send the data through port 10016; please refer to this Knowledge Base article: https://esupport.aspentech.com/S_Article?id=000052211\nTo begin the configuration, proceed as follows:\nLog on to the Aspen Watch server (as an example for this article, called AWServer01) and open the ADSA Client Config Tool.\nOn the ADSA tab leave everything at the default values.\nLeave the Directory Server pointing to itself, and the protocol as Web.\nOn the Configuration tab select Public Data Sources.\nAdd a data source with the name of the Aspen Watch server (in our example, AWServer01)\nAdd the following five services with default settings:\n\nAspen DA for IP.21\nAspen Process Data (IP.21)\nAspen Process Data Service\nAspen SQLplus service component\naspenONE Process Explorer Comments\nFor services 1, 3, and 4 make sure that the Host Name of the service is set to the Aspen Watch server name. Notice that two ports are required: 52007 (service 3) & 10014 (service 4).\nThese are the ports that must be open if there is a firewall between the Aspen Watch server and the PCWS, and they can be checked quickly as sketched below.
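\nA minimal sketch of such a check (Python, run from the PCWS machine; AWServer01 is the example server name used above, so swap in your own):

# Illustrative TCP port check from the PCWS toward the Aspen Watch server.
import socket

AW_SERVER = "AWServer01"        # example name from this article
PORTS = [52007, 10014]          # Aspen Process Data Service / SQLplus service ports

for port in PORTS:
    try:
        with socket.create_connection((AW_SERVER, port), timeout=5):
            print(f"Port {port}: open")
    except OSError as err:
        print(f"Port {port}: blocked or closed ({err})")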
\n\nClick OK to save everything and close the ADSA Client Config Tool.\n\nLog on to the PCWS and open the ADSA Client Config Tool.\nOn the ADSA tab, again leave everything as default\nAt Configuration -> Public Data Sources, add the Aspen Watch server as a data source\nAdd and configure the same 5 services with the Host Name, this time pointing to the Aspen Watch server\nIMPORTANT: \nOn both machines (AW and PCWS) the Public Data Sources configuration must match exactly (same server name, upper case, lower case, etc.)\nAlso, both must point to the Aspen Watch server.\nClick OK to save everything and close the ADSA Client Config Tool.\nFinally, perform an IIS reset (still on the PCWS) as follows:\nFrom the Windows Start button open a Command Prompt window as administrator.\nType iisreset.\nPress Enter.\nWait for the message "Internet services successfully restarted".\nAfter all steps are completed correctly, check the following:\nOpen the Aspen APC Web Interface in an internet browser\nNavigate to the History tab\nCheck the ADSA Information and confirm that there is a Good connection to the Aspen Watch server.\nYou should also be able to start using the Aspen Watch features within the APC Web Interface / APC Web site. For example,\nEnsure that data collection is running for the controller(s) in question - this can be done in the PCWS History tab by expanding the AW Maker, then the Collection Status entry on the left pane. If a controller is collecting data, the Status will be Success and the Last Run and Last Scan timestamps will be close to the current system time (within 1 controller execution time).\nAt the Online tab, expand any DMC3 or DMCplus controller and click on the name of any controller variable (MV, FFW or CV)\nIf collection is running and the ADSA connection is configured correctly, a drop-down context menu will be displayed.\nOtherwise the detail display for the variable will be shown\nIn the context menu above, select History Plot to show a trend of the selected variable\nSimilarly the other Performance Monitor options may be selected: Control Objective, Inspector, etc.", "keywords": "PCWS, Aspen Watch, ADSA, atcontrol, aspenapc, history plot, control objectives", "reference": null}, {"id": "000054330", "problem_statement": "How do I set up a feedforward controller in Aspen HYSYS through the HYSYS Spreadsheet?", "solution": "A design using a HYSYS Spreadsheet as part of the feedforward controller can be used in Aspen HYSYS. Before implementing the feedforward controller, take note of the feedback controller's output and the disturbance measurement at various levels of the disturbance. The user can then use this relationship to set up the curve in a HYSYS spreadsheet. The spreadsheet provides a mathematical relationship between the measured variable and the controlled variable.\n In the attached simulation the primary disturbance is the temperature of the Feed stream. This disturbance is simulated through a transfer function, and by adjusting the duty in the chiller we can compensate for changes in this stream. Feedforward controllers act before the disturbance affects the process; they must know how the disturbance will affect the process, and also the magnitude of the disturbance. Therefore, in the HYSYS spreadsheet the user can create a simple mathematical process model which will allow the controller to determine the amount of correction that must be applied to counteract the effect of the temperature disturbance.
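\nAs an illustration of the kind of static model such a spreadsheet encodes, here is a minimal sketch (written in Python rather than a HYSYS spreadsheet; the gain and reference values are made-up placeholders, not values from the attached simulation):

# Illustrative static feedforward law: duty correction proportional to the
# measured feed-temperature disturbance. All numbers are hypothetical placeholders.
T_REF = 25.0   # design feed temperature, C
K_FF = 12.5    # extra chiller duty (kW) per degree C of feed-temperature rise, fitted offline

def feedforward_duty(t_feed_measured: float, duty_nominal: float) -> float:
    """Return the chiller duty that compensates a feed-temperature disturbance."""
    disturbance = t_feed_measured - T_REF
    return duty_nominal + K_FF * disturbance

# Example: feed runs 3 C hot, nominal duty 250 kW -> request more cooling.
print(feedforward_duty(28.0, 250.0))   # 287.5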
", "keywords": "Feedforward, HYSYS Dynamics", "reference": null}, {"id": "000101845", "problem_statement": "Why does the system predict different results for the outlet temperature of the Aspen Hydraulics Complex Pipe compared to the outlet temperature of the Pipe Segment unit operation and Aspen Flare System Analyzer?\n\nWe note a difference between the outlet temperature of the Aspen Hydraulics Complex Pipe and the outlet temperature of the Pipe Segment unit operation. The Flarenet case agrees with the Pipe Segment (unusual, because the AH sub-flowsheet is based on the Flarenet solver). Ideally the user would expect these to be the same if there is no energy loss in the model.", "solution": "The main difference you are observing is due to the Kinetic Energy (KE) balance for the enthalpy calculation.\nThe Aspen HYSYS Hydraulics sub-flowsheet always considers the KE balance for the enthalpy calculation, while the HYSYS pipe segment does not consider KE.\n\nFor Flarenet it is optional (Calculation / Options / General Tab / Energy balance / Activate the check box for Include KE).\n\n Once you activate the KE term for the energy balance, the Aspen Flarenet results will match Aspen Hydraulics (Complex as well as normal pipe).\n\nPlease note that such a difference is noticeable when the Mach Number is more than 0.3.
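\n\nFor context, including the kinetic-energy term means the steady-state energy balance per unit mass is written as\n\nh1 + v1^2/2 = h2 + v2^2/2\n\nrather than h1 = h2, where h is the specific enthalpy and v the velocity. At low velocity the v^2/2 term is negligible, which is why the outlet temperatures only diverge noticeably above roughly Mach 0.3.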
", "keywords": "Kinetic Energy, Mach Number, Results, Enthalpy", "reference": null}, {"id": "000065035", "problem_statement": "Suppose we have this model:\n Model test\n glob as global realvariable;\n x as realvariable;\n y as realvariable;\n x = y;\n glob = x;\nEnd\n\nHow do you access the value of the \"glob\" global variable in a model script?", "solution": "Note that this simple model script does not work. It works if the script is created in the Flowsheet, but not if the script is created in the model.\n MsgBox glob.value\n\nThis is because the global variable actually belongs to the simulation, not to the instance of the model.\n The trick is to use the Application object to go back up to the flowsheet.\n Script:\n ' access the global variable \"glob\" from here...\n\nMsgBox Application.Simulation.Flowsheet.Global.glob.value\n This screen capture shows where a model script can be created and how to invoke it. This is very useful when you want to invoke this script from any block created using that model.", "keywords": "global, model script", "reference": null}, {"id": "000101844", "problem_statement": "What are the steps to re-configure Aspen Unified databases if deleted from Aspen Unified Configuration Manager?", "solution": "The information associated with the master database resides in the AspenUnified.config file even if you delete the previously configured master database.\nTo re-configure the master database, we must change/delete the values in the config file. To do that, go to C:\\ProgramData\\AspenTech\\AspenUnified; in this path you can find the AspenUnified.config file. You can change/delete the values for the Master Database there.\n\n\n\nIn the same location, you can delete the cache \"MeshSettingsCache.json\" file.\nThen restart the \"Aspen Unified Agent Supervisor Service\" from Services.\nAfter that, you can configure the Aspen Unified databases using the Aspen Unified Configuration Manager.", "keywords": "Aspen Unified Configuration Manager, Database, Master, Input, Catalog, Results, AspenUnified.Config", "reference": null}, {"id": "000101842", "problem_statement": "How to set up a new user account for Aspen Unified without 'sysadmin' privilege and a Network Service account?", "solution": "Establishing a new user account is crucial for organizations in daily operations to mitigate security risks and circumvent unnecessary 'sysadmin' privileges. In Aspen Unified you can set up a new user account by following the steps below.\nAssign the 'Aspen Unified Agent Supervisor Service' to a new user account that you have created.\nGo to IIS, Application Pools, and search for the AspenUnified pool, which currently logs in as Network Service\nChange it to the desired new user account\nStop and start the application pool\nAdd the new user account as the DB owner of the Master, Input, Catalog and Results databases (and others, such as AUS, if added) in SQL Server Management Studio under Security | Logins\nMake sure that the new user has Read and Write access to the following folder: C:\\ProgramData\\AspenTech\\AspenUnified.\nRestart the Aspen Unified Agent Supervisor Service", "keywords": "Aspen Unified, New User Account, Network Service, sysadmin, db owner, Master, Input, Catalog, Results", "reference": null}, {"id": "000101837", "problem_statement": "Why am I getting \"Publish Results Succeeded with Error\" while publishing the Aspen Unified Scheduling model?", "solution": "The root cause of this error is that the database is full and no disk space is available.\nTo eliminate this error, users have to delete the publish log information from the AUS Results database; refer to the \"aus.PublishLog\" table. This table works as a master table: when the user clears the information from this table, the associated results tables which store the published information get cleaned. Use the following SQL query to delete the information from the \"aus.PublishLog\" table.\nDELETE FROM [aus].[PublishLog]\nYou can identify this table in the AUS Results Database.
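\nIf you prefer to script the cleanup, the same statement can be run from any SQL client. A minimal sketch (Python with pyodbc; the server, database and driver names are placeholders for your own environment):

# Illustrative cleanup of the aus.PublishLog master table. Connection details
# below are placeholders - point them at your own AUS Results database.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=MyAUSServer;DATABASE=AUSResults;Trusted_Connection=yes;"
)
with conn:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM [aus].[PublishLog]")
    print("Rows to delete:", cur.fetchone()[0])
    cur.execute("DELETE FROM [aus].[PublishLog]")   # the query from this article
# pyodbc commits automatically on clean exit from the 'with' block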
", "keywords": "AUS Publish Fail, AUS Publish, AUS Results Database", "reference": null}, {"id": "000101255", "problem_statement": "Known Issue: 'Simulate to Date' automations like 'SimulateToDateNow' and 'GetTankPropAtTime' do not work when you have only one period created at the beginning of the schedule, between the first and second day.\nHere are the steps to reproduce.\nWhen a single period is created at the beginning of the schedule between the first and second day, and you 'Simulate to Date' for the next date,\nit will go to the first period instead of the next day.", "solution": "This discrepancy can be eliminated in two different ways.\nDisable the 'Simulate Lite' option from the menu. This option is always enabled while using Aspen Petroleum Scheduler.\n By modifying 'User Settings': you can increase the number of days in the \"Days before Simulate Lite\" option.", "keywords": "Simulate to Date, SimulateToDateNow, GetTankPropAtTime, Simulate Lite, APS Settings", "reference": null}, {"id": "000101225", "problem_statement": "After installing Aspen Unified PIMS V14 CP1, the error message shown in the screenshot below is encountered when browsing to it.", "solution": "This error code means that access to the requested resource is forbidden. The executables mentioned in the error were not granted the appropriate level of Execute permission. For example, this problem occurs if you try to request a CGI page from a directory that does not have the scripts handler permission enabled. Below are the steps to enable script permissions.\nGo to Internet Information Services (IIS) Manager from the Start menu or from the Run menu.\nThen look for the \"AspenUnified\" site under the Default Web Site menu.\n Double-click on \"Handler Mappings\", then select the disabled feature (e.g. ISAPI.dll), then click on \"Edit Feature Permissions\", select the \"Execute\" option and click \"OK\".\n The disabled executable is then enabled and the error is eliminated.", "keywords": "HTTP Error 403.1, Forbidden Error, Not allow executables to run", "reference": null}, {"id": "000101221", "problem_statement": "How to migrate from PIMS Distributed Recursion (DR) to Aspen Unified PIMS (AUP)?", "solution": "This article combines the two important steps you will need to perform to migrate your existing PIMS-DR model to an Aspen Unified PIMS model.\nMigrate the PIMS-DR model to PIMS-AO:\nTo migrate a planning model to Aspen Unified PIMS, it should be PIMS-AO compatible and working fine in PIMS-AO. In order to migrate your PIMS-DR model to PIMS-AO you can go through the attached jump start document, or you can access the following KB article link.\n\nTopic: Jump Start: Migrating Aspen PIMS Distributed Recursion (DR) Models to Advanced Optimization (AO)\n\nKB Link: https://esupport.aspentech.com/S_Article?id=000075370\n\nAfter your migration from PIMS-DR to PIMS-AO is finished, you can follow the steps below.\n Migrate the PIMS-AO model to AUP:\nTo migrate a working PIMS-AO model to Aspen Unified PIMS you can go through the attached document. The attached document contains detailed information such as prerequisites, a step-by-step guide, and common issues during the AUP migration process.\n\nTopic: PIMS-AO to AUP Migration Guide", "keywords": "PIMS-DR to PIMS-AO, PIMS-AO to AUP, Migrate an existing PIMS-AO model", "reference": null}, {"id": "000100865", "problem_statement": "Why am I getting a \"Multiple-step OLE DB operation generated errors\" message while publishing an APS or MBO model?", "solution": "When a user publishes results from the APS/MBO application, the results data is populated in the '_' tables such as _EVENTS or _EVENTS_MBO. If there is a difference in architecture between the baseline database tables and the model database tables, such as missing columns (especially in the results tables), the user will get this error.\n\nTo get rid of this error message, use the \"DBUpdate\" utility, which is present in the \"C:\\Program Files (x86)\\AspenTech\\Aspen Petroleum Scheduler\" directory. Follow the steps below.\n Open the DBUpdate utility and select the baseline database and client model.\nClick on the 'Validate' button.\nIt will report missing information after database validation.\nClick on the \"Update\" button. It will then generate queries in the APS/MBO working folder.\nRun the generated queries from the working folder against the client database; this will automatically create the missing information for the tables highlighted during validation of the client database.", "keywords": "Multiple-step OLE DB Error, Publish Results, Publish All Results, Tables missing information, DBUpdate", "reference": null}, {"id": "000100797", "problem_statement": "What is the sequencing logic for long tag name generation in Aspen Unified PIMS if we uncheck the \"Preserve PIMS short tag names\" option while migrating an existing PIMS model?", "solution": "When migrating an existing PIMS model, AUP allows the user to preserve the short tag names from the existing PIMS model; if this option is unchecked, existing PIMS tags are converted to long tag names.
During conversion, the following occurs:\nTable: BUY in PIMS (Short Tags)\n\n\n\nAfter importing to AUP (Long Tags)\n\n\n\nPrefixes like VBAL, RBAL, CCAP etc. are converted as follows:\nShort tag names: VBALCCD / RBALCCD / CCAPCCU\nLong tag names: VBAL:Cat Diesel (up to 20 characters) / RBAL:Cat Diesel / CCAP:Cat Cracker BPD
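\nTo make the prefix-plus-name pattern concrete, here is a tiny sketch (Python) of the mapping just described; the name map is hypothetical, since AUP derives the descriptive names from the model itself:

# Illustrative short-tag -> long-tag expansion following the pattern above.
NAME_MAP = {"CCD": "Cat Diesel", "CCU": "Cat Cracker BPD"}   # hypothetical lookup
PREFIXES = ("VBAL", "RBAL", "CCAP")

def to_long_tag(short_tag: str) -> str:
    """Expand e.g. 'VBALCCD' to 'VBAL:Cat Diesel' (names capped at 20 chars)."""
    for prefix in PREFIXES:
        if short_tag.startswith(prefix):
            suffix = short_tag[len(prefix):]
            name = NAME_MAP.get(suffix, suffix)
            return f"{prefix}:{name[:20]}"
    return short_tag

print(to_long_tag("VBALCCD"))   # VBAL:Cat Diesel
print(to_long_tag("CCAPCCU"))   # CCAP:Cat Cracker BPD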
This long tag name conversion logic is executed by going through the PIMS tables sequentially, following the order of PIMS tables below:\nNote that P - Periodic PIMS, M - Multisite PIMS and X - Multisite and Multiperiod PIMS\nOrder PIMS Tables\n1 MODELS - (M and X only)\n2 PERIODS\n3 DEPOTS - (M and X only)\n4 SCALE\n5 CAPS\n6 PROCLIM\n7 MODES - (M and X only)\n8 MARKETS - (M and X only)\n9 MARKTGRP - (M and X only)\n10 GROUPS\n11 BLENDS\n12 ALTTAGS\n13 LOCTAGS - (M and X only)\n14 BUY\n15 SELL\n16 UTILBUY\n17 UTILSEL\n18 ADDITIVE\n19 SUPPLY - (X only)\n20 DEMAND - (M and X only)\n21 GSUPPLY - (M and X only)\n22 WSPECS\n23 PGUESS\n24 PCALC\n25 PCALCB\n26 CRDDISTL\n27 CRDCUTS\n28 CRDBLEND\n29 SWING - (Not supported per functional spec)\n30 NEWCUT\n31 ASSAYLIB\n32 SUBMODS\n33 ABMLSUBF\n34 GASES\n35 UPOOL\n36 VPOOL\n37 BLNSPEC\n38 BLNCAP\n39 BLNMIX\n40 BLNTARG\n41 BLNPROP\n42 GBLNMIX - (M and X only)\n43 GBLNSPEC - (M and X only)\n44 PLANTGRP - (M and X only)\n45 DEMALLOC - (M and X only)\n46 TRANSFER - (M and X only)\n47 RATIO\n48 INTERACT\n49 INDEX\n50 DISABLE\n51 CURVE\n52 NONLIN\n53 PBONUS\n54 ROWS\n55 RFG\n56 BOUNDS\n57 DINV - (X only)\n58 XBOUNDS\n59 MIP\n60 GASPLANT\n61 PINV\n62 EXPERT - (Not supported per functional specs)\n63 UNITS\n64 PBLNPER - (Not supported per functional specs)\n65 PBLNMIX - (Not supported per functional specs)\n66 PBLNSPEC - (Not supported per functional specs)\n67 ACCUQUAL - (Not supported per functional specs)\n68 GBLNPER (X only) (Not supported per functional specs)\n69 ABML\n70 ABMLMAP\n71 ABMLOPT\n72 RECEIVE - (Not supported per functional specs. Supported for periodic models only (P), not in table dictionary)", "keywords": "Long tag names, Short tag names, Migrate an existing PIMS model, Preserve PIMS short tag names", "reference": null}, {"id": "000097146", "problem_statement": "What is the difference between enabling Replication for IP_VALUE or IP_INPUT_VALUE?", "solution": "The most common option is to replicate IP_VALUE, and the reason is that all the values from the first InfoPlus.21 Server (Publisher) have already passed through the Data Compression Algorithm (IP_DC_SIGNIFICANCE and IP_DC_MAX_TIME_INT). Therefore, you do not need to configure those fields for the second InfoPlus.21 Server (Subscriber). \n\nIf it is decided to enable Replication for IP_INPUT_VALUE, you have to be sure that every tag in both InfoPlus.21 Servers (Publisher and Subscriber) has the same configuration for IP_DC_SIGNIFICANCE and IP_DC_MAX_TIME_INT. If this is not configured, the information will NOT be the same between the servers.\n\nAs you are replicating IP_INPUT_VALUE, in the second InfoPlus.21 Server (Subscriber) the values are going to pass through the Data Compression Algorithm configured on that server, and consequently, if there is a difference in IP_DC_SIGNIFICANCE and IP_DC_MAX_TIME_INT compared to the first InfoPlus.21 Server (Publisher), different values are going to be moved to IP_VALUE.\n\nThe mapping fields map_currentvalue, map_currenttimestamp, and map_currentquality must reflect whether IP_VALUE or IP_INPUT_VALUE is being replicated; otherwise the wrong values will be used in desktop applications.", "keywords": "Replication\n\nIP_VALUE\n\nIP_INPUT_VALUE", "reference": null}, {"id": "000101836", "problem_statement": "How to check whether a system is square or not in Utilities Planner?", "solution": "In Aspen Utilities Planner, you should keep the flowsheet square as you build it, which allows the EO solver to run a steady-state simulation. To check whether a system is square, analyse the degrees of freedom: a square system has zero degrees of freedom, i.e. the number of equations equals the number of variables.\n\nTo view degrees of freedom information for a block or a stream:\n\nIn the Flowsheet window, right-click a block or stream and then click Properties.\n \n The Block Properties form appears, showing detailed information about the block or stream.\n\n \nClick OK to close the form.", "keywords": "Square condition, DOF, Properties, Equations, EO model", "reference": null}, {"id": "000101835", "problem_statement": "How to check the profile or tags for utility services added in an Aspen HYSYS simulation?", "solution": "In Aspen HYSYS, the Process Utilities Manager is used to define and apply utility tags to material and energy streams, which designate the streams for calculation of energy and utility-related consumption and costs, and emissions such as CO2 and other \"greenhouse\" gases.\n\nThe Utilities Manager is available under the Home tab | Simulation | Utility Manager\n \n To assign a utility tag to an energy stream:\nDouble-click the energy stream.\nOn the Worksheet tab | Stream page, select a utility type from the Utility Type drop-down list.\nClose the window.", "keywords": "Utility tag, Inlet T, Cost, Utility Manager, energy stream", "reference": "https://esupport.aspentech.com/S_Article?id=000097867"}, {"id": "000101134", "problem_statement": "How do I use the SLM Configuration Wizard in V11-V14 to connect to a license server?", "solution": "This knowledge base article describes how to use the SLM Configuration Wizard in V11-V14 to connect to a license server.\nNOTE: If you are using an older version (v9-v10) of the SLM Configuration Wizard, refer to this KB: AspenTech: Knowledge Base.\n1. Run the SLM Configuration Wizard.\nOpen your Windows Start menu and launch the aspenONE SLM License Manager with Administrator privileges.\n\n\n\n**NOTE: For some operating systems you may have to execute the aspenONE SLM License Manager using the 'Run as Administrator' option (you can also right-click on the app to choose this option). The application writes entries into the Windows Registry, and Administrator privileges are required for this.**\n\nClick on the Configuration Wizard icon, located in the top left of the screen, to launch the SLM Configuration Wizard\n\n\n\n2.
Type the license server name in the "Server Name or IP" box and click the "Add Server" button.\n\n\n\nThe server name or IP should appear below without an error saying it was not found (if you do get an error message, there is a connection issue)\nNOTE: You can add multiple license servers if required.\nAdvanced Settings Configuration Options: (Most leave these options as default)\nExpiration Reminder (days): allows you to specify the number of days before a product license expires that a reminder will be sent to you.\nSelect Enable Broadcasting and specify the Interval (min) if you want the SLM to broadcast (search) for licenses on the network.\nLog time zone information: logs time zone information in the server log.\nIgnore local keys: Ignores the local license keys.\nSearch Configured Servers for all available buckets at Runtime: Indicates whether you do or do not want to search all configured servers for all available buckets.\nLog IP addresses: Logs the IP address in the server log. The log file is saved on the SLM server in the same directory where the license server is installed.\nResolve Server Name: This will try to auto-fill the entire server name, or fill in the server name if you input an IP address. **If the IP address or server is autocorrected to one other than the one you entered, try turning off this function and trying again**\n\nThe order of servers can be set by using the "Up" and "Down" buttons\n \nClick the "Show Buckets" button to verify the desired or required "default" bucket(s) are checked.\n\n \n\n3. Click the "Apply Changes" button to initialize the license and complete the process.\n\n\n\n4. Finally, click the "Close" button and open AspenTech products to test the license connection.", "keywords": "SLM\naspenONE SLM License Manager\nSLM Configuration Wizard\nNetwork License\nStandalone License\nLicense Server\nSLM Tool", "reference": null}, {"id": "000098788", "problem_statement": "Servers commonly used in an APC installation have the following classes of software installed:\nSLM (Software License Manager) License Server: runs the Aspen network license software\nAPC Online Server: runs the online applications including DMC3/DMCplus controllers and IQ applications\nAPC Web Server: runs the Production Control Web Server (PCWS), also includes Aspen Local Security \nAPC Performance Monitor (Aspen Watch) Server: runs Aspen Watch (AW) data collection and holds the InfoPlus.21 database\nCim-IO Server (or Cim-IO core): communication interface for reading/writing data from/to the DCS and among Aspen products \n\nThis article provides guidelines on the optimal configuration of an APC architecture.\n\nNote that the APC Desktop class (which includes offline tools like DMC3 Builder) can be installed on any server. The desktop tools are automatically installed with the APC Online and Watch servers.", "solution": "The ideal recommendation is to have individual dedicated servers for the license server and each of the three APC servers.\n\nThe main reasons for this are:\nPerformance - each class of software takes up network, CPU and memory resources, which can directly affect the performance of controller execution and data collection if there is a limitation.
For example, the Online server will take up more resources as you add more or larger controllers, and the Watch server is resource intensive as it holds the IP.21 database.\nMaintenance - if there is an issue on one of the products, or a patch is required to be applied for one product, the other products will be directly affected during this transition as well. You may need to stop the controllers (undesirable for operators), stop data collection, shut down the web, etc., or reboot the machine during maintenance processes, and thus the entire system will be upset. This will also require more precautions when making changes on the system for one product, so as not to affect others. If all servers are separated, a problem on one can be handled without affecting the others, and downtime can be avoided whenever possible. \n\nHowever, if this is not feasible, the following guidelines can be used to determine which servers to combine. \n\nPlease refer to the APC Installation Guide for detailed guidelines on whether multiple servers are required for each one, and memory/CPU allocation, depending on the number/size of applications and number of users. See the sections "System Requirements" under Preparing to Install and "Appendix C: Deployment Scenarios".\n\nAs a reference, the APC V14.2 Installation Guide can be found here: AspenTech: Knowledge Base \nHardware platform specifications can be found here: Platform Support | AspenTech\n\n\nSLM License Server\nMost sites use a network license server, which requires online products and each user session to be able to access the running license server that has a valid license file. Thus, the license server needs to run continuously without interruptions or network failures. \nIf it is not feasible to have a dedicated license server, then it may be installed on the APC Online server. \n\nAPC Online Server\nFor versions prior to V14.0, a memory limitation of 2.6 GB exists for RTEService.exe (the service required for running RTE-based controllers, i.e. those deployed using DMC3 Builder, and sending data to Aspen Watch). Exceeding this limit can cause system instability, so more Online servers should be added to distribute the load of the applications if needed. \nFor versions V14.2 and above, this limitation no longer exists, and the system can take advantage of a bigger amount of memory due to its new 64-bit architecture.\n\nAPC Web Server (PCWS)\nIf it is not feasible to have a dedicated web server, then it may be installed on the APC Online server. However, when both are installed on the same computer, the server should have proportionally more memory and virtual processors. \nFor versions prior to V14.0, a memory limitation of 2.6 GB exists for WebDataProviderSvc.exe. Exceeding this limit can cause instability, so more Web servers should be added in this case. A single Web server should be limited to 30 users. \nFor versions V14.2 and above, this limitation no longer exists, and the system can take advantage of a bigger amount of memory due to its new 64-bit architecture.\nThe APC Web installation automatically installs Aspen Local Security with it, so it is recommended to use this as the Security server as well, which hosts the permissions for users and roles. \n\nAPC Performance Monitor (Aspen Watch) Server \nAPC Performance Monitor should always be installed on a dedicated server machine due to the resource intensive InfoPlus.21 database and corresponding running tasks.
\n\nCim-IO Server\nBest practice is to install the Aspen Cim-IO server on the third-party OPC server, as this makes it easy to pass DCOM permissions and configuration settings. \nIf this is not viable due to restrictions by the OPC vendor, Cim-IO may be installed on a dedicated server. \nIf this is also not possible, then Cim-IO may be installed on the APC Online server. \nFor more information, see KB 000049608 - Where should I install the Cim-IO Server?\nNote that when installing the APC Online and Aspen Watch server software, the Cim-IO client components are installed automatically. \n\nThese are the recommendations for the APC architecture and supporting products, based on common and tested deployment scenarios, to help avoid performance issues. The final decision on configuration should be up to the user based on business needs and resource allocation, as long as the guidelines provided in the APC Installation Guide are followed.", "keywords": "APC, servers, architecture, online, deployment, recommendation, cimio, watch, web, pcws", "reference": null}, {"id": "000101830", "problem_statement": "How to get rid of the following error message \"Error HRESULT E_FAIL has been returned from a call to a COM component\" while running the Aspen OnLine model?", "solution": "In order to get rid of the error message \"Error HRESULT E_FAIL has been returned from a call to a COM component\":\n\nStop the services and restart the server, which will resolve the error.", "keywords": "Error, HRESULT E_FAIL, Aspen OnLine", "reference": null}, {"id": "000101811", "problem_statement": "After a recent update to Google Chrome and Microsoft Edge, these browsers are no longer accepting self-signed certificates generated by the default process in IIS. This is because the X.509 key usage extension is now required for RSA certificates chaining to local roots. A self-signed certificate with this property can be generated using PowerShell.", "solution": "Open PowerShell as an administrator\nRun the following command\nNew-SelfSignedCertificate -FriendlyName \"<FriendlyName>\" -DnsName <ServerFQDN> -CertStoreLocation \"cert:\\LocalMachine\\My\" -KeyUsage DigitalSignature\nReplace <FriendlyName> with a name of your choice for the certificate\nReplace <ServerFQDN> with the FQDN of the Alert Manager server, or the Alert Manager base URL if you are using an alias\nSee this link for more information on additional options while creating your certificate: https://learn.microsoft.com/en-us/powershell/module/pki/new-selfsignedcertificate?view=windowsserver2022-ps\nIn the Start menu, search for and open certlm.msc\nExpand Personal and click on Certificates\nIdentify your newly created certificate by its Friendly Name, right-click on it, and copy it\nExpand Trusted Root Certification Authorities and click on Certificates\nPaste your certificate into this folder\nOpen IIS, expand Sites, right-click on Default Web Site, and Edit Bindings\nEdit any https bindings used by MAM and switch the SSL Certificate to the one you created in step 2\nNavigate back to the Alert Manager web page.
The page should now load.", "keywords": "MAM Certificate Problem\nHTTPS Error\nERR_SSL_KEY_USAGE_INCOMPATIBLE", "reference": null}, {"id": "000101817", "problem_statement": "Is it possible to call Python from an Aspen Plus user routine?", "solution": "It is possible to call Python from the Fortran subroutine.\n\nPython will need to be installed and added to the PATH environment variable (an option under Advanced Options in the Python for Windows installer).\nThe Fortran subroutine should still be compiled in the Customize Aspen Plus window using aspcomp.\nSince Python is a scripting language that does not need to be compiled before use, it will in a sense “compile on the fly”, so a Fortran system call to Python is able to trigger the script from a Fortran file. It is not possible to call a Python function directly, since the Fortran compiler will not recognize Python.\n\nIn a Fortran wrapper, it is possible to use CALL SYSTEM.\n\nE.g.\nCALL SYSTEM(\"python.exe mypython.py\")\n\nThe Fortran-to-Python data transfer is file based rather than via function arguments. This means that the Python script will read in the data via a Fortran-written “input” file and write out an “output” file that Fortran can then read. \n\nFor information:\nhttps://community.intel.com/t5/Intel-Fortran-Compiler/call-windows-system-command-through-fortran-without-showing-the/td-p/1127615\n\nAttached is a very simple example where a Python script is used to read in a file written by Fortran and write out results to a file. \n\nIn the Fortran file tpakin.f, after the Fortran code is used to calculate the rates, Python is used to get that information and write some results, as an illustration that it is possible to use Python in a Fortran subroutine.\n\nIn the Fortran code:\nCALL SYSTEM('python cstr.py -f=inputs.csv')\n In the Python code:\nimport csv\n\nif __name__ == \"__main__\":\n    import argparse\n\n    parser = argparse.ArgumentParser(description=\"read in input.csv\")\n    parser.add_argument(\"-f\", \"--file\", dest=\"filename\",\n                        help=\"input file\", metavar=\"FILE\")\n\n    args = parser.parse_args()\n    input_file_name = args.filename\n    output_file_name = 'result.csv'\n\n    # Read the input CSV file\n    with open(input_file_name, mode='r') as input_file:\n        # Create a CSV reader object\n        reader = csv.reader(input_file)\n\n        # Convert the first row values to floats and scale each value by 0.8\n        first_row = next(reader)\n        modified_row = [float(value) * 0.8 for value in first_row]\n\n    # Write the modified values to the output CSV file\n    with open(output_file_name, mode='w', newline='') as output_file:\n        # Create a CSV writer object\n        writer = csv.writer(output_file)\n\n        # Write the modified row to the output CSV file\n        writer.writerow(modified_row)\n\n    print(f\"Result has been written to '{output_file_name}'.\")", "keywords": null, "reference": null}, {"id": "000101427", "problem_statement": "How can you maximize the Report Editor every time the project is evaluated?", "solution": "By default, ACCE opens the CCP Report looking like this:\n \n\n\nTo change this default and open the CCP Report in maximized view, enable the “Open Maximized” option through Options / Preferences in the General and Document tabs, as the following images show:\n\n\n\n\nOnce these options are enabled, each time you evaluate the project and open the CCP Report, it will be maximized:", "keywords": "CCP Report, Open Maximized, Report Editor. 
Full Screen, Bigger", "reference": null}, {"id": "000100851", "problem_statement": "Why can't I run Excel reports, and why do I get a message related to \"failed to pass security check\"?", "solution": "Step 1: Select Default Apps:\n \nStep 2: Click on \"Choose apps by file type\"\n Now, make sure Excel is the default application for the \".xlsm\" file type as shown in the picture:\n\n\n\nIf Excel is not the default application, then select the application and choose Excel:\n\n\n\nAfter performing the above workflow, ACCE should be able to launch any kind of Excel Report successfully.", "keywords": "Excel Reports, Default App, Pass Security", "reference": null}, {"id": "000101009", "problem_statement": "Icarus User 140 database recreation", "solution": "The Icarus_User140 database is not successfully connected, so the reporter does not work, and in some cases, the following error message appears when evaluating the project:\n\n \n\nThis issue can be fixed by repairing the SQL LocalDB instance.\n\nFor that, once ACCE is open, users should click on\nTools\nPreferences\nReporting, then click on “Repair SQL Local DB Instance”\nClick on Apply\n\n\nOnce this is done, the repair is automatic.", "keywords": "Database recreation, Icarus_User140, Repair SQL LocalDB, LocalDB Instance", "reference": null}, {"id": "000100882", "problem_statement": "How to import Areas and components from other scenarios?", "solution": "Once a project is created, it is not possible to change the template being used in that project.\n\nSo, if we want to use a different template in a project, we need to create a new project using the new template and then import the old project into the new one.\n\nThe steps are shown below:\nCreate a new project using the new template.\nOpen the new project, then click on Project View, so that you can see all areas and components.\n\n\n In the Palette section, click on the Projects tab.\n\n In the Projects tab, find the project and scenario of the previous project.\nClick on it to load it; it will show all the areas and components of the project.\nDrag and drop the Stage onto the Project View in the new project.\n\n\n\nIt is important to mention that the objects from the previous project will be imported into the new project and will use the settings of the new template.", "keywords": "Import Template, Projects Tab, Template", "reference": null}, {"id": "000100857", "problem_statement": "What Cost Basis is used in Economic Suite V14?", "solution": "The Cost Basis for the V14 Economic Suite (Aspen Capital Cost Estimator, Aspen Process Economic Analyzer, Activated Economics and Aspen In-Plant Cost Estimator) is the First Quarter 2022 (Q1 2022).", "keywords": "Cost Basis, Cost, Recent Update, 2022 Cost Basis", "reference": null}, {"id": "000101402", "problem_statement": "How can I access the Aspen Economic Evaluation V14 Printed documentation?", "solution": "When you open ACCE V14, you can access the printed documentation by going to the Ribbon/Help/Documentation (as in the following image)\n \nWhen you click on “Documentation”, the following window opens:\n \nNote: Adobe Reader must be installed to open the documents.\n\nIf you are not able to open them with Adobe Reader, the alternative is to copy and paste the following path into the file explorer:\n\nfile:///C:/Program%20Files/AspenTech/Economic%20Evaluation%20V14.0/Program/Docs\n\n\nOnce finished, you have access to the Printed Documentation:", "keywords": "Documentation, User
Guide, Printed Documentation, V14 Documentation, User Guide, Piping & Instrumentation Drawings", "reference": null}, {"id": "000101823", "problem_statement": "APC Gateway introduces a streamlined communication pathway between Aspen DMC3 controllers and other Aspen APC products, such as Aspen IQ and Aspen GDOT applications, eliminating the need for intermediate points like DCS or Aspen InfoPlus.21. More information about this feature and the configuration guide can be found on KB 000101374 .\n\nWhile configuring the APC Gateway in the Configure Online Server, users may encounter the error message:\n\n\"IO Source xxxxx already exists. Please enter a different application name\"", "solution": "Although the obvious solution is to choose a different name for the APC Gateway source, there are instances where this error persists even when no source shares the same name. This issue typically arises due to incomplete deletion of records after removing an APC Gateway IO source. Consequently, Configure Online Server blocks the reuse of previously established source names.\n\nTo address this:\n Review Configuration Files:\nOpen Notepad with administrative privileges and navigate to the following directory:\nC:\\ProgramData\\AspenTech\\RTE\\V14\\Config\n Modify AspenTech.ACP.IO.config:\nLocate and open the AspenTech.ACP.IO.config file. Remove the entry corresponding to the corrupted source. For example, if the corrupted source is named \"SSCOMP,\" delete the highlighted rows in the config file:\n\n\n Modify AspenTech.ACP.IOLogging.Config:\nSimilarly, open the AspenTech.ACP.IOLogging.Config file and delete the entry associated with the corrupted source.\n\n\n Save Changes:\nSave the edited configuration files.\n Recreate APC Gateway:\nAfter saving the changes, attempt to recreate the APC Gateway using the previously problematic name. The system should now allow the use of the desired name without encountering the error.\n\nBy following these steps, users can effectively resolve the \"IO Source xxxxx already exists\" error encountered during the Aspen APC Gateway configuration.", "keywords": "APCGateway, IO Source error, Application name conflict", "reference": null}, {"id": "000101658", "problem_statement": "How to Transfer information from an Aspen Plus simulation that uses non-conventional components to Aspen Basic Engineering?\nWe need to take into account that non-conventional components have very few properties, if any, in comparison with pure components.", "solution": "Using simulations that have non-conventional components (Aspen Plus) will have an impact on the attributes sent to ABE. This image shows the difference between the same default set of properties for a simulation with non-conventional components (left) and a simulation without non-conventional components:\n \nFor the bulk flow, molar properties will not be calculated as the non-conventional components have very few properties if any. If a simulation with non-conventional components is transferred to ABE, you will see that not all attributes in the Bulk Flow will be populated as they are not available in Aspen Plus.\n \nOne issue that will be seen is that the Components node is not created when using the Component Manager. This component node is used in a lot of labels in Drawing Editor and Templates for Datasheets. A workaround is to change those templates to point to the PureComponents node in the streams. However, it will not show all expected data as it was not generated by the simulator. 
The recommendation is to use pseudocomponents, which will have the properties needed for the simulator to perform all calculations.", "keywords": "Transfer, ABE, Non-conventional components, Bulk Properties", "reference": null}, {"id": "000101657", "problem_statement": "How to unlink a diagram from a Document Set in Aspen Basic Engineering if a Document Set was created by mistake?\nThis should only be used for Document Sets created by mistake. If used on a Document Set, the diagrams it contains will still share the same topology but will no longer be in the same Folder.", "solution": "To create a Document Set, the user can select a PFD and use it as a basis for creating several other diagrams such as Material Selection Diagrams (MSD), Pressure Safety Diagrams (PSD), etc. If a user mistakenly creates a Document Set for a diagram, these are the steps to unlink the diagram from the Document Set:\nCreate a Folder in the Explorer application and add the “Document Set” object that should be unlinked.\nOnce the Document Set object is added, double click on it, or right click and select Attributes. Select the diagram that was used to create the Document Set. Right click on the node and Remove the Item.\nIn Drawing Editor the PFD will be out of the Document Set, and the Document Set folder can be deleted as it is now empty.\nThis should only be used for Document Sets created by mistake. If used on a Document Set, the diagrams it contains will still share the same topology but will no longer be in the same Folder.", "keywords": "Document Set, Aspen Basic Engineering, Drawing Editor, Unlink, Diagram", "reference": null}, {"id": "000101792", "problem_statement": "What are the new tables or changed tables in V14 APS as compared to V12 APS databases?", "solution": "Modified Tables and their corresponding modifications:\nTable _EVENTS_BATCHES\nModification - Added Column [ROUTE_ID]\nTable _EVENTS_BATCHES_MBO\nModification - Added Column [ROUTE_ID]\nTable GANTT_EXPANSION_RECON\nModification - Column 3-X_SEQ data type changed from [INT32] to [AUTONUMBER]\nTable TANK_INFEAS\nModification - Added Columns: [EXCESS_TARGET_B]; [EXCESS_TARGET_E]; [DEFIC_TARGET_B]; [DEFIC_TARGET_E]\nTable TRAN_MODES\nModification - Added Columns: [DELAY_TIME]; [DT_UOM]\nTable VOYAGEDOCKS\nModification - Added Column: [LOCKED]\n\nNew Tables:\nTable _SCHEDULE_OBJECTIVES\nTable _TDINV_COMP\nTable _TIMEDELAY_INV\nTable _ZSCHEDULE_OBJECTIVES\nTable ATORIONSCHEDOBJECTIVEDETAILS\nTable ATORIONSCHEDOBJECTIVESDEF\nTable TDINV_COMP\nTable TIMEDELAY_INV\nTable VOYAGE_TEMPLATES\nTable VOYAGE_TEMPLATES_VERSION_INFO", "keywords": null, "reference": null}, {"id": "000101814", "problem_statement": "My simulation (using user subroutines) is stopping after 100 Fortran errors (division by zero). How can I find where the errors are occurring?", "solution": "Aspen Plus sets the floating-point exception flags to detect the error but continues the calculations until the flags signalling the errors are checked. This means the error report alone does not show where the error is coming from.\n\nThere are three methods to catch floating-point exceptions in user subroutine code.\n\nThe first method is to carefully review the code line by line and check for operations which may cause exceptions, such as division by zero, log of zero or of a negative number, or square root of a negative value. Other functions which may raise errors are exponentials, hyperbolic functions, and trigonometric functions. 
For example, if you try:\n DOUBLE PRECISION X, Y\n ...\n Y = 1d0/X\n\nThis will raise a division by zero error if X is zero. If you know for sure that X will never be zero, then you don't need to do anything. If, on the other hand, you know X might be zero, then you need to review your design document and figure out what to do:\n- stop the calculations and tell the user the data are not valid\n- just make X a small non-zero value and continue (warning the user if appropriate)\n\n\nThe second method is to call the floating-point error checking subroutine at various places in your subroutine. This can be done by adding calls to DMS_CCKFPE.\n CALL DMS_CCKFPE(1)\n Y = 1d0 / X\n CALL DMS_CCKFPE(2)\n ...\n CALL DMS_CCKFPE(3)\n ...\n CALL DMS_CCKFPE(4)\n ...\n\nThe integer argument can be any value you want, and this value will be reported along with the error status in the history file. This can help identify the section of the code where the error is triggered. Once you've fixed the error, you can remove those calls. If your user subroutines will be used by other people, you could also consider leaving the calls to DMS_CCKFPE in place and using an integer variable in the user input to turn the checking on or off.\n\nThe third method is to run the user code with the Visual Studio debugger. Review the Aspen Plus System Management reference documentation for details.\n\nIn a nutshell:\n- compile the user subroutine with the command:\nASPCOMP *.f DBG\n- create a user.dll with the command:\nASPLINK DEBUG user\n- edit the dynamic linking option file (dlopt) to refer to user.dll\n- start Aspen Plus, open the simulation file\n- start Visual Studio, create a new project with no code\n- go to the Debug menu, Attach to Process, select apmain.exe, type \"native\"\n- in Aspen Plus, Flowsheeting Options, Add Input, add the following:\nDEBUG DYNLINK=2 FPCONTROL=1\n- run the simulation\n\nThe debugger should break on the line where the evaluation error is raised. This is the most elegant method because you can also inspect the value of all variables, but it requires some familiarity with Visual Studio.", "keywords": "debug, fortran, division by zero", "reference": null}, {"id": "000101815", "problem_statement": "Is there a way to flush any pending message about floating point errors in a user subroutine?", "solution": "Use the DMS_CCKFPE floating-point error utility subroutine.\n\nUse this subroutine to help debug where floating-point errors are being produced in user Fortran code. 
When the utility is called, it reports the integer passed as an argument, to help distinguish different calls of the function, and flushes any pending messages about floating-point errors, such as division by zero and functions with invalid numerical arguments.\n\nCalling Sequence for DMS_CCKFPE\nCALL DMS_CCKFPE(ICODE)\n\nVariable: ICODE\nI/O: Input\nType: INTEGER\nDimension: -\nDescription: Integer value reported in the history file to signal that the utility has been called\n\nExample of Calling DMS_CCKFPE in a User Routine\nCALL DMS_CCKFPE(1)\nC ...\nC calculations with division by zero\nC ...\n CALL DMS_CCKFPE(2)\nC ....\nC calculations with no floating point exception\nC ...\n CALL DMS_CCKFPE(3)\nThe output in the history file for this example may look like:\nCALLING ICKFPE AT 1\n CALLED ICKFPE AT 1\n CALLING ICKFPE AT 2\n *** SEVERE ERROR WHILE EXECUTING UNIT OPERATIONS BLOCK: \"XXX\" IN \"YYY\"\n (MODEL: \"RPLUG\") (FPEPRT.1)\n FORTRAN DIVIDE BY ZERO ENCOUNTERED.\n CALLED ICKFPE AT 2\n CALLING ICKFPE AT 3\n CALLED ICKFPE AT 3\nThis identifies which section of the code is causing the Fortran exception.\n\nMore skilled users may be able to use the Visual Studio debugger to catch such errors, but since that involves compiling and linking the code with debug options, attaching the debugger to Aspen Plus, and adding debug options to the Aspen Plus model using Add Input, DMS_CCKFPE provides a lower-overhead way to perform simple debugging.", "keywords": "Division by Zero, VSTS 1325710", "reference": null}, {"id": "000098026", "problem_statement": "Is there a way to disable the \u201cVisible\u201d feature in Aspen Simulation Workbook (ASW) installed on a server, so that the user cannot see Aspen Plus/HYSYS while running the simulation from ASW?", "solution": "There is a way to disable the \u201cVisible\u201d feature in Aspen Simulation Workbook (ASW) installed on a server, so that the user cannot see Aspen Plus/HYSYS while running the simulation from ASW.\n\n(1) Go to Protect to raise the \u201cRestrict Access\u201d dialog\n\n\n(2) Unhide the option to Show/Hide the simulation, and (3) set a password", "keywords": "ASW, Visible, Protect, Restrict Access, Show/Hide", "reference": null}, {"id": "000101809", "problem_statement": "When setting up remote OPC connections on GDOT, the causes of failure usually fall under one of these categories: server time settings, server name resolving, tag syntax errors, firewall issues, user authentication issues. 
We will go through some general steps on how to tackle each of these problems.", "solution": "Here are some troubleshooting steps that can be taken for each of the categories mentioned in the problem statement:\nServer Time Settings\nIt is common practice, but worth mentioning, that when working with several servers that exchange information there should be a Network Time Protocol service to keep the server clocks synchronized, as time differences as small as 2-5 minutes can cause communication issues. The Windows clock time and time zone should be the same on both machines, the GDOT server and the OPC server.\n Tag Syntax Errors\nEach OPC server has a different syntax for how its tags are read, so when starting on a new project, consult the OPC manual to see how the tag addresses should be declared on remote OPC clients. For example, when reading values from Aspen InfoPlus.21 as an OPC server, this is what the Aspen InfoPlus.21 Product Family Configuration Guide V14 says about using InfoPlus.21 as an OPC-DA server:\n\n\n\nFor example, you would need to write the tag in the GDOT Variable Configuration as FC101SP.IP_INPUT_VALUE; any extra spaces or punctuation marks will throw a read error.\n\nAlso, if you are working with several OPC nodes (or OPC + APC Gateway), check Appendix A: IO Tag Specifications of the V14.0.2 Aspen GDOT User Guide:\n\n\n Server Name Resolving\nMany errors can occur when using a DNS: the name/IP address table on the server itself could be incorrect, the connection from the GDOT server to the DNS could be slow or failing, the DNS cache could get corrupted, etc.\n\nTo avoid these issues and “bypass” the DNS, you can edit the hosts file found under C:\Windows\System32\drivers\etc and declare both the local server name and the remote server name with their IP addresses. Declare both the Fully Qualified Domain Name and the short host name to make sure that the running programs will be able to find the IP address when using either server name format.\n\nExample (from a GDOT server):\n\n\n\nTo see if the GDOT server is able to resolve the OPC server IP address by name (and vice versa), a quick test is to “ping” the remote server using the Command Prompt. Even if this feature is not enabled through a firewall and the connection fails, you will still see between “[ ]” whether the machine was able to resolve the host name that you typed to an IP address.\n Firewall Issues\nWhen setting up remote OPC connections, the communications are done using Microsoft’s DCOM, which is known for being difficult to configure and troubleshoot. A running GDOT application uses a specific type of OPC(-DA) connection called a subscription. An initial subscription request is sent from the GDOT server to the OPC server through port 135, which is defined for COM+ Network Access (DCOM inbound connections); the OPC server then replies with a callback connection through a different port. This callback port is assigned dynamically, so it is hard to predict which one will be used. When working with a firewall, it is therefore not as simple as creating firewall rules such as Allow Connection through Port: 135, xx, yy, etc., since, as mentioned, the callback ports change.\n\nAs a general statement, it is expected for the GDOT servers to be on the same plant network as the OPC server machine, without any firewalls in between them (a quick scripted reachability test is sketched below). 
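As a scripted complement to the manual checks above, the following minimal Python sketch (illustrative only; the host name is a placeholder) verifies from the GDOT server that the OPC server's name resolves and that the DCOM endpoint-mapper port 135 is reachable; note that the dynamically assigned callback ports still cannot be predicted this way:\n\nimport socket\n\nOPC_SERVER = \"OPCSERVER01\"  # placeholder - use your OPC server's host name\n\n# Name resolution check (equivalent to what ping shows between \"[ ]\")\ntry:\n    print(f\"{OPC_SERVER} resolves to {socket.gethostbyname(OPC_SERVER)}\")\nexcept socket.gaierror as exc:\n    print(f\"Name resolution failed: {exc}\")\n\n# DCOM endpoint-mapper reachability check (the initial subscription request port)\ntry:\n    with socket.create_connection((OPC_SERVER, 135), timeout=5):\n        print(\"TCP port 135 is reachable\")\nexcept OSError as exc:\n    print(f\"Port 135 unreachable: {exc}\")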
If there is a hardware firewall between the two machines, then an OPC tunneler is typically required.\n\n User Authentication Issues\nThe final point that will be discussed is user authentication on OPC clients. When setting up an OPC-DA connection, the common configuration is for the user account that runs GDOT to also exist as a user on the OPC server machine; this is a DCOM requirement to authenticate that the inbound connection is from an authorized user. Nonetheless, some OPC servers require an additional account to exist on the GDOT server to authenticate. Here is an example of an issue with the ABB Maestro OPC, where a service account called “PPBService” needed to be configured in the GDOT users for the connection to work:\n\nGDOT OPC connection requirement for ABB Maestro https://esupport.aspentech.com/S_Article?id=000100505\n\nAnother example is that some Honeywell OPC servers have an account called “mngr” which needs to exist on both machines for the authentication to succeed. To clarify, these are just examples and the required accounts don’t always have the specific “PPBService” and “mngr” names; please review the exact user configuration of your OPC machine or its user manual for more guidance.\n\nThese user authentication errors can commonly be found in the Windows Event Viewer logs, under Windows Logs -> Application/Security/System.\n\nAdditionally, if you want more detailed DCOM logging, open the server Registry, go to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole and, under this key, create two new DWORD registry values named ActivationFailureLoggingLevel and CallFailureLoggingLevel:\n\n\n\n\n\nAfter these entries are created, right click and Modify their Value data to 1. With this logging level activated, you should now be able to get more detailed errors in the Event Viewer System log about what is failing during the DCOM connection:\n\n\nThis logging level is very detailed, so after your troubleshooting is done, make sure to return the two Registry values back to 0, as this many messages can easily overflow and overwrite your log.\n\n\nIf after reviewing all of these points you still have questions or problems with your GDOT application, please contact the AspenTech support team at esupport@aspentech.com.", "keywords": "GDOT, OPC, firewall, authentication, node, host, troubleshoot, error", "reference": null}, {"id": "000097488", "problem_statement": "Is it possible to customize the units of measurement with the export functionality in Aspen Flare System Analyzer (AFSA)?", "solution": "Aspen Flare System Analyzer has the capability of exporting data and results into MS Excel (.xls, .xlsx), MS Access (.mdb) and XML files (File | Export Case In | .mdb / .xls / .xml format). This is useful for importing this same information into a new AFSA file to recreate the simulation or to merge different AFSA models.\nBy default, this data is exported using AFSA’s Metric units of measurement, since this is the unit set that AFSA always reads upon importing. This default is hard-coded into the program and cannot be modified.\nIf the user would rather generate an Excel report with the inputs or results generated in AFSA, then the Excel button on each of the Summary tables in AFSA could prove useful; this copies all the data into Excel using the currently selected units of measurement. This should be repeated on all desired forms (a sketch for post-processing the copied data is shown below). 
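If the copied or exported workbook then needs to be post-processed programmatically, a minimal Python sketch like the following can read it back (purely illustrative; the file name and sheet index are assumptions, the pandas and openpyxl packages must be installed, and this does not re-import anything into AFSA):\n\nimport pandas as pd\n\n# Assumption: a Summary table was saved to this workbook; names are illustrative.\ndf = pd.read_excel(\"afsa_summary_results.xlsx\", sheet_name=0)\nprint(df.head())  # inspect the columns and the units of measurement that were copied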
This option does not allow for a reimportation of data into AFSA.", "keywords": "Units, AFSA, Data, Inputs, Results, Tables", "reference": null}, {"id": "000051249", "problem_statement": "What is the Set Version utility and when should I use it?", "solution": "The Set Version tool is a utility that allows you to select which version of a particular product will be set as the “Default”.\n\nThis is particularly important for 2 reasons:\n\n1. When you have more than one version of the same program installed on your computer. The Set Version tool will determine which version will be used every time you open an existing file. Keep in mind that for most products, if the file is from a more recent version of the software than the one you selected as default, you won't be able to open it (backward compatibility).\n\n2. When you are working with products that are connected or linked to others (what we call Integration), such as using Exchanger Design and Rating (EDR) from within Aspen Plus or Aspen HYSYS, using Aspen Properties inside EDR, or connecting Aspen Simulation Workbook to another program. The best practice is to always use the same version for both programs (EDR and Aspen Properties, Aspen Plus and EDR, etc.); otherwise you may have problems when attempting to connect the applications.\nFor more information on how to run the Set Version utility for EDR, please refer to the KB article below:\nHow to run the EDR Set Version utility?", "keywords": "Set version, link to other programs.", "reference": null}, {"id": "000065635", "problem_statement": "How to run the EDR Set Version utility?", "solution": "The EDR Set Version utility will allow you to select which EDR version will be set as the default, which is important when you also want to use EDR within Aspen Plus or Aspen HYSYS.\n\nBefore running the EDR Set Version tool you need to verify two things:\n\n- Administrator Rights. If you do not have Administrator Rights, you will need assistance from your IT group; otherwise the changes you make using the Set Version utility won’t work.\n- Close Excel and any other AspenTech applications (EDR, Aspen Plus, Aspen HYSYS, etc.)\n\nNow, you can follow the steps below:\n\n1. Go to your Start menu and find the folder with all the Aspen EDR applications. This may look different depending on the Windows version you are using. For example, in Windows 10 it will look like this:\n\n\n \n2. Expand this folder. You will see the shortcut to open the EDR application and, below that, the EDR Set Version utility. Please note that if you have several versions installed, you will see different Set Version options as well:\n\n\n \n3. Right-click on the Set Version button of the version you want to set as the default and select Run As Administrator (remember, this step is very important):\n\n \n\n4. Click Yes when asked if you want to allow the program to make modifications on your computer. \n\n5. The Set Version window will show up. This window will contain the current default version plus the other available versions as well:\n\n\n \n6. Select the version you will set as the default (make sure to check the box to register that same version with Aspen Plus and Aspen HYSYS if you plan to use EDR within any of these programs):\n\n\n\n7. Finally, click on Set at the bottom of the window. Once you do, the program will display a message when the registry process is complete:\n\n\n\n \n\n8. 
You can now click on Exit and proceed to use EDR normally.\n\nFor more information about the Set Version utility, please refer to the KB article below:\n\nWhat is the Set Version utility and when should I use it?", "keywords": "Set version, EDR", "reference": null}, {"id": "000088392", "problem_statement": "How can I change the atmospheric (barometric) pressure?", "solution": "If you are interested in creating custom pressure units which display gauge pressures relative to an atmospheric pressure different from the standard 1 atm (~14.696 psia or 101.325 kPa), you can follow these steps:\nGo to File | Options | Units of Measure\nClone an existing unit set\nHighlight pressure units and select \"Add\"\nFor an atmospheric pressure of, say, 12 psia, define: psig_user = 0.1450 * kPa - 12\nNote that this custom unit will be used for the display of pressures only. Calculations of properties at standard conditions in Aspen HYSYS will not change. (For example, heats of formation, which have a basis of 25 C and 1 atm - our database will not adjust these. Also, densities calculated at standard conditions of 15 C and 1 atm will not change.)\n\n\nAlternatively, starting with Aspen HYSYS V10, you can directly change the atmospheric pressure for the simulation in the Preferences. \nGo to File | Options | Units of Measure, scroll down to \"Ambient Pressure Setting\", and edit the pressure there. \n \n\nImportant note: Once this is changed, every time Aspen HYSYS is opened, the pressure will be the one specified, not the default. To revert to the default, input the default value: 101.325 kPa.", "keywords": "Atmospheric Pressure Setting, Barometric, Change.", "reference": null}, {"id": "000101808", "problem_statement": "What is the difference between different salts with the same formula in Aspen Plus? For example, in the INORGANIC database there are two options to choose from for the component silver nitrate AgNO3, SOL-A and SOL-B.", "solution": "It is possible for some solids to have different crystal structures which have different properties. These different crystal structures are often available in the INORGANIC (Barin) databank. The INORGANIC databank generally has accurate parameters for inorganic components in all phases.\n\nSILVER-NITRATE:SOL-A (AGNO3:A) and SILVER-NITRATE:SOL-B (AGNO3:B) are examples of two different solid crystal structures of silver nitrate.\n\nThe INORGANIC databank uses the CPpXPn polynomials to calculate all of the thermodynamic properties, where p is the phase and n is the set number. The four properties Cp, H, S, and G are interrelated as a result of the thermodynamic relationships (the standard identities are given below for reference). These analytical relationships mean that the expressions for Cp, H, S, and G all use the elements of the CPpXPn parameters. They do not need heat or enthalpy of formation parameters (e.g. DHFORM or DHSFRM). The first two elements are the temperature range. There can be more than one set of parameters for a phase for different temperature ranges. See the help for details of the Barin Equations for Gibbs Energy, Enthalpy, Entropy, and Heat Capacity.\n\nUsing Retrieve to obtain the parameters for all of the forms of silver nitrate (AGNO3, SOL-A, and SOL-B), one can see that the liquid CPLXP1 parameters are the same for all of these components. The CPSXP1 parameters for SOL-A are from 25 to 159 C and the SOL-B parameters are from 159 to 209 C. 
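For reference, the standard thermodynamic identities behind this interrelation are:\n\nG = H - T*S\nCp = (dH/dT) at constant pressure\nS = -(dG/dT) at constant pressure\n\nThis is why one consistent set of CPpXPn elements is enough to generate all four properties.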
The one that does not have a crystal structure designated has a CPSXP1 and a CPSXP2 to cover both ranges.", "keywords": null, "reference": null}, {"id": "000100678", "problem_statement": "This Knowledge Base article illustrates how to identify whether the installed IP.21 is 32-bit or 64-bit.", "solution": "Many customers ask how to identify whether the installed IP.21 is 32-bit or 64-bit.\n\nTo identify the bitness of the installed IP.21, there are different methods:\n1. Through Task Manager:\na. Open Windows Task Manager.\nb. Select the Details tab in Windows Task Manager.\nc. Right-click on a column name heading such as Name, PID, etc.\nd. Select \"Select columns\" from the context menu.\ne. Tick the checkbox beside Platform in the \"Select columns\" dialog box.\n\n\nf. Look for dbclock.exe and check whether Platform is showing 32-bit or 64-bit.\n2. Through aspenONE Diagnostics:\nThe application and version will display on the Select Products screen. If the application is 64-bit, it will show (64-bit)", "keywords": "64-bit, 32-bit, IP.21, bitness, InfoPlus.21", "reference": null}, {"id": "000101505", "problem_statement": "When trying to run a simulation in Aspen Fidelis, you may get the following error and be unable to run the simulation.\n\u201cEndOfTime type not found.\u201d\nEndOfTime could also be replaced with any of the following: SimStart, TimeZero, SimEvent, or SimEnd.", "solution": "This error occurs when the EndOfTime sub is missing in VSTA. You can also get a similar error message if a different sub is missing. Follow the steps below to add the sub back and resolve the error.\nOpen the file with the error in Aspen Fidelis\nClick on the Write Key Routines button\nScroll through VSTA. Fidelis expects 5 subs: SimStart, TimeZero, SimEvent, EndOfTime, and SimEnd. Confirm that the sub referenced in the error message is missing.\nAdd the sub to the file. You can do this by writing:\nPublic Shared Sub EndOfTime(data As SimulationData)\nEnd Sub\nReplace EndOfTime with the name of the missing sub.\nSave your changes in VSTA\nClose VSTA and wait for Fidelis to save your changes\nTry to run the simulation again. You should no longer get the error.", "keywords": "Fidelis VSTA\nSimulation Error\nSimStart\nTimeZero\nSimEvent\nEndOfTime\nSimEnd", "reference": null}, {"id": "000101300", "problem_statement": "When attempting to train an agent in Mtell, you immediately get the following error:\n\n\u201cError Training Agent:\nrowIndex\u201d\n\nThis error can indicate a problem with the offline condition.", "solution": "Open Aspen Mtell Agent Builder\nClick on the TDS with the agent that will not train\nLocate the offline condition. A correctly set offline condition will have the word Sensor:: before the sensor role. If your offline condition is missing the word Sensor::, that will cause this error. It is possible that other formatting discrepancies could cause this error as well. See the pictures below for examples of a correctly and an incorrectly set offline condition.\n\nThis offline condition has been set correctly because it has the word Sensor:: before the sensor role (\u201cMotor Amps\u201d)\n This offline condition has been set incorrectly because Sensor:: is missing before \u201cMotor Amps\u201d\nTo correct the offline condition, click on the equipment set\nSwitch to the Offline Conditions tab\nClick Define\u2026\nUse the Insert Role button to select any sensor roles used in your offline condition. 
They should populate with the word sensor in front of the role.\nClick OK\nClick Apply and then Yes to override the existing offline conditions\nYou should now be able to train your agents without getting the rowIndex error", "keywords": "Error training agent\nrowIndex\nOffline condition", "reference": null}, {"id": "000101311", "problem_statement": "When opening Aspen Mtell System Manager, users may receive an error\nConnection to MIS Web Service [http://servername/AspenTech/AspenMtell/InteropServer/MIMOSA/OSAEAIManagement.asmx] failed.\n[Error: The remote server returned an error: ]\nCheck IIS Manager to verify that ASP.Net v4 is enabled under ISAPI and CGI Restrictions\n\nThis error message can have several different causes. This article is a directory of the solutions for the different root causes of this error.", "solution": "First, confirm that the correct server name has been specified in System Manager. \nOpen System Manager and navigate to the Configuration tab\nSelect Settings and then General\nConfirm that the Server Name field is correctly filled out with the name of the Mtell server. Correct the Server Name and save your changes if not.\n\nIf the Server Name is correct and you still receive an error, the error number will help narrow down the cause. Try the solution listed under the error number you receive.\n\n500\nTo narrow down the root cause of a (500) Internal Server Error, open a web browser and navigate to http://servername/AspenTech/AspenMtell/InteropServer/MIMOSA. Replace servername with the name of your Mtell server. Find the error message that matches what you see, and follow the instructions in the linked KB article for that message.\n Could not load type \u2018System.ServiceModel.Activation.HttpModule\u2019 from assembly \u2018System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\u2019.\nFollow the steps in: Mtell View giving \"Could not load type 'System.ServiceModel.Activation.HttpModule' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.\" \n\n Login failed for user \u2018domain\\servername$\u2019.\nFollow the steps in: Using Windows Authentication in SQL Causes \u201cConnection to MIS Web Service failed\u201d Message in Aspen Mtell System Manager\n\n HTTP Error 500.24 \u2013 Internal Server Error\nFollow the steps in: Aspen Mtell System Manager gives Connection to MIS Web Service failed error \n\n Security Exception or Debugging is not supported under current trust level settings\nFollow the steps in: Server Error in Mtell IIS Pages and Adapters\n\n401\nFollow the steps in: \"Error: The remote server returned an error: (401) Unauthorized\" when launching Aspen Mtell System Manager\n\n403\nFollow the steps in: \"Error: The remote server returned an error: (403) Forbidden\" when launching Aspen Mtell System Manager", "keywords": "MIS Web Service\nServer Error", "reference": null}, {"id": "000101797", "problem_statement": "When using Aspen Mtell Alert Manager, you may encounter a scenario where the Dashboard loads fine, but you get a 500 error when trying to navigate to an issue page.", "solution": "Navigate to the folder C:\\Program Files\\AspenTech\\Aspen Mtell Alert Manager\\Logs\nOpen the most recent MAMDataProviderLog file, and check if there are any errors related to redis. 
You may find messages such as:\nStackExchange.Redis.RedisConnectionException: It was not possible to connect to the redis server(s)\nStackExchange.Redis.RedisConnectionException: No connection is active/available to service this operation\nOpen the Windows Services console and confirm that Redis is running. If it is off, try to start it. If it is on, restart it.\nOpen Alert Manager and check if the error persists.\nIf the error is not resolved, open a command prompt and run the following commands:\nredis-cli -h 127.0.0.1 -p 6379\nping\nThe ping command will likely give you the following error: \u201c(error) NOAUTH Authentication required.\u201d If so, Redis has been configured to require a password when it should not be.\nTo remove the password requirement, go to C:\Program Files\Redis\ and open redis.windows-service.conf in a text editor, such as Notepad.\nSearch for a line that begins with \u201crequirepass\u201d\nComment this line out by adding a # at the beginning\nSave the file. It may be necessary to save to another location, like your Desktop, and copy the file back into C:\Program Files\Redis\. Make sure you save with the .conf extension, and not the .txt extension.\nRestart Redis again in Windows Services\nNavigate to an issue page in Alert Manager. The issue should be resolved.", "keywords": "Alert Manager error\nCannot view alerts\nCannot view issues\nCannot view trends\nRedis error", "reference": null}, {"id": "000101699", "problem_statement": "When launching Aspen Mtell System Manager, the following error appears.\nConnection to MIS Web Service failed. [Error: The remote server returned an error: (403) Forbidden]\n\nBy navigating to http://localhost/AspenTech/AspenMtell/InteropServer/MIMOSA/ on the Mtell server, you can see a more detailed error.\nHTTP Error 403.1 - Forbidden\nYou have attempted to run a CGI, ISAPI, or other executable program from a directory that does not allow executables to run.", "solution": "This article outlines the instructions to address IIS error message 403.1 \u2013 Forbidden. This error can result from certain Handler Mappings being disabled in IIS. Follow the steps below to re-enable them.\nOpen IIS Manager\nIn the Connections page on the left, expand the server name, Sites, Default Website, AspenTech, Aspen Mtell, InteropServer\nStarting at Default Website and working your way down to MIMOSA, click on each level of the hierarchy and open Handler Mappings\nIf the Handler Mappings are mostly enabled, as in the picture below, proceed to the next level of the hierarchy\nIf the Handler Mappings are mostly disabled, as in the picture below, click on Revert to Parent\nClick Yes to continue\nYou should no longer get the error message upon opening System Manager", "keywords": "MIS Web Service Error\nForbidden\nIIS Error\n403\n403.1", "reference": null}, {"id": "000082386", "problem_statement": "How to use the SLM Commute Tool for V14.", "solution": "The Aspen SLM Commute tool (SLMCommute.exe) allows the user/client to borrow licenses from a network server. These borrowed, or commuted, licenses allow a client computer to run the licensed product while disconnected from the network without the use of an SLM dongle. \n\nNote: You must run Aspen SLM Commute when connected to the License Server network to obtain and verify the licenses required.\n\nThe commuted time is specified in days, with a maximum of 30 days. The licenses can be returned prior to their expiration date. 
In order to successfully commute a license, the commutable feature must be activated in the license file.\n\nTo use the SLM Commute tool:\nFrom the Start menu, select aspenONE SLM License Manager\nClick on Commute to launch the commute tool\nThe license server that was configured in the SLM Configuration Wizard should be listed under the SLM Server(s) column.\nYou can view licenses by product or by server.\nClick the Licenses by Products tab. You can select one or more licenses under the product.\nClick the Licenses By Server tab. You can choose one or more licenses and the number of licenses you want to check out.\nIn the Days to check out license(s) from server field, enter the number of days you require the license(s). The number of days can be any integer from 1 through 30.\nStarting in V9, you can now also select the number of tokens to commute based on the licenses chosen.\nMore tokens can be commuted if you plan on using more than one instance at a time. Ex: HYSYS requires 14 tokens for one instance. To open another instance at the same time you will need a total of 28 tokens.\nThen click Commute\nIf the commute is successful, SLMCommute displays the list of commuted licenses. The license or licenses are temporarily released from the server to your hard drive. You can now exit the tool and run your licensed product away from the network.\n \nThe license will expire automatically at midnight on the last day of your license period. You can return the license before the expiry date. \n\nTo return the commuted licenses:\nSelect the licenses you will be returning.\nClick Return or Return All\n\nRecommended Practices\nMake a note of the server names or IP addresses.\nAlways check licenses back in when you reconnect to the network.\nTake the licenses only for the period that you require.\nDo not take any more licenses than you need, to maximize the efficiency of your network licenses when commuting.\nTip: If you find SLM Commute is slow, open a licensed product before running SLM Commute.", "keywords": "SLM Commute, license, checkout, expire, network, server, tools, V9, V10", "reference": null}, {"id": "000101798", "problem_statement": "On Tomcat, error pages display identifying information about the server type and version. Attackers can use the server type and version to identify and target potential vulnerabilities on a server. See the example.", "solution": "To remove the reporting of error details, server type, and version, the following update can be made to the Tomcat configuration.\n\nUpdate the C:\Program Files\Common Files\AspenTech Shared\TomcatX.X.X\conf\server.xml file to add an ErrorReportValve entry at the end of the \u201cHost\u201d section (i.e. just before the line containing </Host>); the standard Tomcat hardening entry for this is:\n\n<Valve className=\"org.apache.catalina.valves.ErrorReportValve\" showReport=\"false\" showServerInfo=\"false\" />\n\nAfter updating the server.xml file, Apache Tomcat should be restarted from the Windows Services for the change to take effect.\nThe updated error page will just display the HTTP Status line:", "keywords": "Tomcat \nSecure A1PE\nError page", "reference": null}, {"id": "000101700", "problem_statement": "In a PCWS operator station, users can modify parameters like the Operator Low and High Limits, as well as the Service Request, which appear in blue font. However, Engineering values like the Engineering and Validity limits or the Engineer Request will remain off-limits for Operators, thus showing an unclickable black font. 
This is because the Operator and the Engineering roles have specific Read and Write permissions for each DMC3 Built-in Entry.\n \nAdditionally, users can create User-defined entries with customizable access rights, as outlined in KB 000101281: How to Set a User-Defined Entry as Read-Only for an Engineer Role in PCWS. However, these permissions can only be configured within the Calculations view of DMC3 Builder. This means we can only modify the editing privileges for User-defined Entries before deploying the controller, but not while it is running. This intentional design choice prevents less privileged users from making unintended alterations to tuning parameters or safety limits. Nonetheless, this approach becomes impractical when there is a need to temporarily grant or deny permissions to Operators without stopping and redeploying the controller.\n\nIn this tutorial, we will walk through the process of building an Engineer-editable switch in PCWS to manage editing rights for the Operator role using DMC3 Calculations and User-Defined Entries.", "solution": "We will illustrate the process using an example of building a switch to control the editing permissions for the MV Operator High and Low limits of a DMC3 controller.\n\nPart I. Building the Switch through User-Defined Entries and Calculations\nOpen the DMC3 Builder project for the relevant controller.\nNavigate to the Calculations node and select the 'User Entries' view from the top ribbon.\nClick on the Independent node from the Variables menu and add the following entries with their respective properties:\n Name Data Type DefaultIOFlags SecurityAccess\nOLD_OPHIREQ Double IsTuningValue StandardEntry\nOLD_OPLOREQ Double IsTuningValue StandardEntry\nOPHIREQ Double IsInput, IsOperatingValue OperatorEntry\nOPLOREQ Double IsInput, IsOperatingValue OperatorEntry\nOPLOCK OnOff IsInput, LogChange, IsOperatingValue EngineerEntry\n Now select the Inputs view from the top ribbon and add the following calculation (or import it from the attached switch_calc.xml file):\n' This calc allows Operator Low and High limit changes based on requested values\n' when the switch is in the active state (switch = 1). 
\n' It also ensures that the requested values have changed and that they are \n' within a valid limit range.\n\nif switch = 1 then \n if MVLoOpReq <> old_MVLoOpReq then\n if MVLoOpReq <= MVHiOpReq then\n if MVLoOpReq >= MVLoEng then \n MVLoOp = MVLoOpReq\n old_MVLoOpReq = MVLoOpReq\n else Message(\"Requested Low Limit cannot be under Low Engineering Limit\")\n end if\n else Message(\"Requested Low Limit cannot be above the requested High Limit\")\n end if\n end if\n if MVHiOpReq <> old_MVHiOpReq then\n if MVHiOpReq >= MVLoOpReq then\n if MVHiOpReq <= MVHiEng then \n MVHiOp = MVHiOpReq\n old_MVHiOpReq = MVHiOpReq\n else Message(\"Requested High Limit cannot be above Engineering limits\")\n end if\n else Message(\"Requested High Limit cannot be under the requested Low Limit\")\n end if\n end if\nend if\nThis calculation allows changes to Operator High and Low limits only if we meet ALL the following criteria:\nThe 'switch' value is On\nThe limit requested by the Operator is\nDifferent from the current value\nWithin the Engineering limit range\nLower limit requests are under High limit values and vice versa\nNow map the following variables by dragging from the 'User Entries' menu:\n Click the 'Apply' button from the top ribbon and save the project.\nDeploy and Start the controller (no need to turn it on).\nNow, if you go to a PCWS Operator station, you will notice that the Operator High and Low limits will appear in black font, meaning that the Operator user cannot edit those values:\nAccess the Details page for any MV to view the newly added User entries, including the switch \"OPLOCK\", displayed in black font, editable only by the Engineer user:\n\nIMPORTANT NOTE: Manually set the initial values for Operator High and Low limit requests (OPHIREQ and OPLOREQ user-defined entries) to match the actual MV limits before turning the Operator switch (OPLOCK) On.\n\nPart II. Testing the Switch\nChanging Operator limit requests with the switch Off will not overwrite actual Operator limits:\nTurning the switch On from the PCWS Engineer machine triggers changes to the actual Operator limits:\nConfirm the updated values on the Operator station for the corresponding MV:\nWith the switch On, send new Operator Limit values through the OPHIREQ and OPLOREQ entries:\n\nPart III (Optional). Setting up the Switch and Limit Request Values as PCWS Columns\n\nYou can follow the steps in KB 000099970: How to add User Defined Entries as PCWS columns to display the switch and the limit request values as columns in the PCWS interface.\n\nFor simplicity, you can go to C:\\ProgramData\\AspenTech\\APC\\Web Server\\Products\\APC, make a backup of the apc.user.display.config file (if there is one) and replace it with the attached template for this example. If necessary, customize the config file using Notepad. And finally, restart the Aspen APC Web Provider Data Service.\n\nAfter performing these steps, this is how the Operator station should look:\n \nAnd this is how it should look from the Engineer machine:\n \nIn summary, this solution allows dynamic control over access rights without needing to halt and redeploy the controller. Users can mimic these steps to effectively manage editing permissions for different parameters as required. 
This ensures an adaptable solution for tailored control over editing privileges in diverse operational scenarios.", "keywords": "Lock MV Limit, Editing Permissions, Parameter Access Control, Operator High and Low Limits, Read and Write Permissions, Dynamic Access Rights", "reference": null}, {"id": "000101788", "problem_statement": "How do I modify the user account for common ABE services on the Enterprise server?", "solution": "Common services like AspenTechSystemAgentServicexxx, AZxxxBroker, RabbitMQ, and ABEAppPool must be operational for ABE to work on an Enterprise server.\nNote: xxx stands for the ABE version. For instance, it is 380 for V12, 400 for V14, and 402 for V14.2.\n The steps to modify these accounts are described below:\nOpen Services\n\nRight click on the Service that you would like to modify and select Properties (RabbitMQ in this case)\n\nIn the Log On tab, enter or modify the account information, then click Apply\n\n\n\nRestart the Service after modifying the user account\n\n\n\nThe same can be done with AspenTechSystemAgentServicexxx and AZxxxBroker \n\n\n\n\n\nA different procedure is used to modify the account for ABEAppPool; kindly review the following steps:\n\nOpen Internet Information Services (IIS)\n\n\nUnder Application Pools, select ABEAppPool and click on Advanced Settings...\n\n\n Select the Identity row and click on the ... button\n\n\nSelect Set and enter the domain name\username and password, then click OK\n\n\nClick on Recycle... to restart the service", "keywords": "Aspen Basic Engineer, ABE services", "reference": null}, {"id": "000101777", "problem_statement": "During troubleshooting for issues with Aspen software, like aspenONE Process Explorer or AFW Security Manager, you find this DCOM error message in Windows Event Viewer:\n\nThe server {F4B466E7-0001-11D4-828F-00C04F12B1D3} did not register with DCOM within the required timeout.", "solution": "The CLSID F4B466E7-0001-11D4-828F-00C04F12B1D3 is for the AfwAuthorizationSvc, and you may find that restarting the AFW Security Client Service temporarily resolves the issue(s) you were experiencing.\n\nThis means you have registered both the 32-Bit and 64-Bit versions of the AFW Security Client Service simultaneously, which is NOT allowed. You can confirm this by checking the registry for the AFW Security Client Service CLSID: 64FDADF2-1670-4AC4-B8C6-FE96E64FE038.\n\nHKLM\SOFTWARE\WOW6432Node\Classes\CLSID\{64FDADF2-1670-4AC4-B8C6-FE96E64FE038}\nHKLM\SOFTWARE\Classes\CLSID\{64FDADF2-1670-4AC4-B8C6-FE96E64FE038}\n\nIf both of the above registry locations exist, you’ll need to follow the steps from KB article #74941 to unregister the 32-bit and 64-bit AFW Security Client Service, and re-register just one of the services: AFW Security Client Service did not install or \"Object variable or With block variable not set\" error https://esupport.aspentech.com/S_Article?id=000074941", "keywords": "AFW\nDCOM Timeout\nAfwAuthorizationSvc\nAfwSecCliSvc", "reference": null}, {"id": "000056994", "problem_statement": "Is it possible to access the plex in a Calculator block?", "solution": "Yes, it is possible.\nThe common should be accessed via an include statement in the Declaration sheet of the Fortran block. For example:\nF{5 spaces}#include \"dms_plex.cmn\"\nThe documentation about the system commons is in our User Models reference manual, Appendix A, Common Blocks and Accessing Component Data. 
Examples of the Fortran code are included.\n\nBelow are two examples of items to access in the Aspen Plus commons: (The file used to illustrate these two examples is attached.)\n1. How do you access pure component molecular weights?\nThe Molecular Weight is stored in the labeled common DMS_PLEX. Each type of data occupies a contiguous area within the Plex. Accessing component data in the Plex is discussed on page A-11 of the User Models", "keywords": "Fortran\ninline\nin-line\ncommon\nplex", "reference": "Manual. The example code for accessing Molecular Weight is on page A-14.\nIn general, if you want to access a physical property parameter in a Fortran block, it is easier to access them in a define statement without going to the plex.\ne.g.\nDEFINE MWH2 UNARY-PARAM VARIABLE=MW ID1=H2 ID2=1\n 2. How do you access pure component names?\nThe pure component formula is stored in the FRMULA data area. This is described with an example of the code on page A-11 of the above mentioned manual. The component ID is stored in the IDSCC data area. This is described, again with an example of the code, on page A-12.\nThere are also Utility subroutines that can be used to access the sequence number of a component. These utility subroutines are described in Chapter 4 or the User Models Reference Manual.\n Example Code\nBelow is the input language for the Fortran block:\n FORTRAN REP-STRM\nf #include \"dms_plex.cmn\"\nf #include \"ppexec_user.cmn\"\nF dimension b(1)\nF equivalence (b(1), ib(1))\nF integer dms_ifcmnc, dms_kformc\nc declare variables\nF integer i, lmw, lmwi, lfrmul, li, lidscc\nF real*8 xmw\nc\n DEFINE MWC1 UNARY-PARAM VARIABLE=MW ID1=C1 ID2=1\nc\nc print out molecular weight of methane\nC\nc get the sequence number for methane\nF i=dms_kformc('CH4')\nF write(nterm,*) 'Component number =',i\nF lmw=dms_ifcmnc('mw')\nF lmwi=lmw+i\nF xmw=b(lmwi)\nF write(nterm,*)'Molecular weight of methane from plex',xmw\nF write(nterm, *)'Molecular weight of methane from define',mwc1\nc print out the formula\nF lfrmul=dms_ifcmnc('frmula')\nF li=lfrmul+3*(i-1)\nF write(nterm, '(6x, a, 3a4)') 'Formula =',(ib(li+j), j=1,3)\nc print out id\nF lidscc=dms_ifcmnc('idscc')\nF li=lidscc+2*(i-1)\nF write(nterm, '(6x, a, 3a4)') 'Component =',(ib(li+j), j=1,2)\n EXECUTE LAST"}, {"id": "000101770", "problem_statement": "APC Gateway is a new feature to facilitate the communication between Aspen DMC3 controllers and other Aspen APC products like Aspen IQ, as well as Aspen GDOT applications, without the need of using an intermediate point for it (e.g. DCS, Aspen InfoPlus.21, etc.). 
More information about this feature and the configuration guide can be found in KB 000101374.\n\nWhen testing an APC Gateway connection, either on the Deployment node of DMC3 Builder or on the native APC Gateway Tester application, users may encounter a DNS resolution error like the following:\n Status(StatusCode=\"Unavailable\", Detail=\"DNS resolution failed for service: servername:port\", DebugException=\"Grpc.Core.Internal.CoreErrorDetailException: {\"created\":\"@1704904979.928000000\",\"description\":\"Resolver transient failure\",\"file\":\"..\\..\\..\\src\\core\\ext\\filters\\client_channel\\client_channel.cc\",\"file_line\":1361,\"referenced_errors\":[{\"created\":\"@1704904979.928000000\",\"description\":\"DNS resolution failed for service: SERVERNAME:PORT\",\"file\":\"..\\..\\..\\src\\core\\ext\\filters\\client_channel\\resolver\\dns\\c_ares\\dns_resolver_ares.cc\",\"file_line\":362,\"grpc_status\":14,\"referenced_errors\":[{\"created\":\"@1704904979.928000000\",\"description\":\"C-ares status is not ARES_SUCCESS qtype=A name=SERVERNAME is_balancer=0: Domain name not found\",\"file\":\"..\\..\\..\\src\\core\\ext\\filters\\client_channel\\resolver\\dns\\c_ares\\grpc_ares_wrapper.cc\",\"file_line\":724,\"referenced_errors\":[{\"created\":\"@1704904979.927000000\",\"description\":\"C-ares status is not ARES_SUCCESS qtype=AAAA name=APCV14.CORP.ASPENTECH.COM is_balancer=0: Domain name not found\",\"file\":\"..\\..\\..\\src\\core\\ext\\filters\\client_channel\\resolver\\dns\\c_ares\\grpc_ares_wrapper.cc\",\"file_line\":724}]}]}]}\")", "solution": "This error indicates a DNS resolution issue with the domain. The message \"Domain name not found\" suggests the DNS server couldn't find an IP address associated with the domain. Follow these steps to resolve the issue:\n\n1. Open Notepad with administrative rights.\n2. Edit the \"hosts\" file located under C:\\Windows\\System32\\drivers\\etc.\n3. Add a line including the IP address and the fully qualified domain name of your APC Gateway server:\n\n\n\n4. Ensure the domain name in the \"hosts\" file matches the APC Gateway Host Name configured in \"Configure Online Server\":\n\n\n\nAfter making these changes, test the APC Gateway connection. It should now succeed without encountering the DNS resolution failure.\n\n\n\nIf you encounter any issues or have further questions, please reach out for assistance by opening a support case or sending an email to esupport@aspentech.com", "keywords": "APCGateway, DNS resolution, FQDN, Resolver transient failure, Grpc.Core.Internal.CoreErrorDetailException, C-ares status, Default Port 49154", "reference": null}, {"id": "000094974", "problem_statement": "In certain scenarios, the aspenONE Process Explorer (A1PE) Administrator may need to prevent some users or a group of users from accessing the A1PE website. To do this you can use the Authorization Rules in the Process Explorer app in the Internet Information Services (IIS) Manager.", "solution": "In the Internet Information Services Manager, go to the Process Explorer application and click on Authorization Rules:\n\nIf you do not see Authorization Rules, open Server Manager -> Add Roles and Features -> check URL Authorization under Web Server / Security.\nThen, start adding the users or user groups and Deny or Allow rules as needed. By default, all Active Directory users will have access to the Process Explorer app. 
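For reference, the Allow/Deny rules created through the IIS Manager UI are stored as URL Authorization entries in the application's configuration. Below is a minimal sketch of an equivalent web.config section; the group name DOMAIN\RestrictedUsers is a hypothetical example, not a real account, and the inherited default Allow-all rule is assumed to remain in place:

<configuration>
  <system.webServer>
    <security>
      <authorization>
        <!-- Example only: deny one hypothetical AD group access to the site -->
        <add accessType="Deny" roles="DOMAIN\RestrictedUsers" />
      </authorization>
    </security>
  </system.webServer>
</configuration>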
\nWhen clicking on \"Add Allow Rule...\" or \"Add Deny Rule...\" you can select the specific users or groups of users to be added to the Authorization Rule. Once you are done, click OK. \n After adding the appropriate rules, users will be prompted for credentials when trying to access the A1PE website. \n\nNOTE: This is just a security layer for access to the A1PE website; it will not prevent users from performing some actions such as writing values into a tag in Aspen InfoPlus.21", "keywords": "a1PE Access\naspenONE Process Explorer\na1PE Security", "reference": null}, {"id": "000101775", "problem_statement": "When I select the option on the gas_bed Kinetics form to estimate the mass transfer coefficients, my simulation becomes underspecified if I'm using the User Submodel option for the isotherm.\n\n\n\n\n\nHow can I resolve this?", "solution": "This is because the estimation of mass transfer coefficients requires the calculation of the Henry constant, which is the derivative of the loading as a function of composition. When using a library isotherm, the Aspen Adsorption library already implements this derivative. For a user submodel, the user has to provide it.\n\nThe attached example illustrates what is needed.\n\nIn the flowsheet constraints, we have implemented the Extended Langmuir 1 as an example. Note that the array of variables \"den\" is declared outside the \"within\" code, as those expressions will be required for the derivatives calculation.\n\n\n\nFirstly, we have the user submodel for the isotherm.\n den(B3.Layer(1).FDESet) as realvariable;\n\n within B3.Layer(1)\n within Isotherm(1)\n if Isotherm_Form == \"User Submodel\" then\n within User_pPress_Isotherm(1)\n for i in FDESet do\n den(i) = 1 + sigma(foreach(j in componentlist) IP(2, j) * P(i) * y(i, j)); \n for comp in componentlist do\n w(i, comp) = IP(1, comp) * P(i) * y(i, comp) / den(i);\n endfor\n endfor\n endwithin\n endif\n endwithin\n\n ...\n endwithin\n\nSecondly, we have the required derivative calculation.\n den(B3.Layer(1).FDESet) as realvariable;\n\n within B3.Layer(1)\n ...\n\n within Kinetics(1)\n if MTC_Type == \"Estimated\" and Isotherm_Form == \"User Submodel\" then\n within MTC_Estimation(1).Henry_Coefficients(1).User_Henry_Coefficients(1)\n for i in FDESet do\n for comp in componentlist do\n // Henry constant is the derivative of the isotherm (loading vs. composition)\n // Henry(i, comp) = d(w(i, comp)) / dP(i, comp)\n // quotient rule: (u/v)' = (u'v - uv')/v^2\n Henry(i, comp) = (IP(1, comp) * den(i) - IP(1, comp) * P(i) * y(i, comp) * IP(2, comp)) / den(i)^2;\n endfor\n endfor\n endwithin\n endif\n endwithin\n endwithin\n\nA second block using the library isotherm is used to compare the results. We can see that the results are identical.\n\nNote that in the User_Henry_Coefficient submodel, the array X(FDESet, componentlist) provides the partial pressure of the components (when using partial pressure basis) or the concentration of the components (when using concentration basis). See the attached example for more details.", "keywords": "gas_bed, user isotherm, estimation", "reference": null}, {"id": "000101004", "problem_statement": "Aspen HYSYS-METTE Link Extension", "solution": "The HYSYS-METTE Link is a HYSYS extension that is used to create links between Aspen HYSYS and Aspen METTE. Aspen METTE is a powerful platform for performing thermo-hydraulic calculations used in the upstream oil and gas industry. 
The extension allows you to link a product stream from METTE into HYSYS, with pressure passing from HYSYS to METTE. You can also select a reference stream to update composition based on the phase flow. Injection streams going from HYSYS to METTE can be linked as well; flowrate and temperature are sent to METTE in this case. Pressure may be sent to METTE or received from METTE, depending on whether it is set on the linked HYSYS stream.\n\nPlease see attachments.\n\nMost recent update: Jan 2024", "keywords": "METTE, HYSYS, Link, Extension, version", "reference": null}, {"id": "000100523", "problem_statement": "It's critical to make sure that refinery planning models are accurate and in sync with changes in operating conditions or catalysts. However, updating planning models requires extensive effort and time due to complex and segmented workflows and a lack of collaboration between planners and process engineers. Many refineries are dependent on external consultants, thereby hindering regular updates and leading to lost profits.\n\nUsing the templates provided in this example, process engineers or planners can leverage the automated and streamlined Planning Model Update (PMU) workflow powered by Aspen HYSYS to update refinery planning models.", "solution": "Leverage the new Excel-based workflow to update FCC, Reformer, and Hydrocracker PIMS sub-models.\n\nUse the blank templates attached to obtain plant data, calibrate/tune the rigorous reactor model to match plant data, then validate model predictions. These templates can be used to create LP base and shift vectors and update the LP sub-model. They also allow users to track the HYSYS and LP model predictions closely against plant data, giving a comprehensive view of Plant Data vs Rigorous Model Prediction vs LP Prediction.\n\nFor more details on the workflow of using a pre-configured Excel PMU template and pre-configured HYSYS model, use this link. (https://esupport.aspentech.com/S_Article?id=000056332)\n\nIf you are interested in the Aspen Hybrid Models for Planning, please use this link. (https://esupport.aspentech.com/S_Article?id=000098202)", "keywords": "Aspen Petroleum Refining, HYSYS, PIMS, Planning Model, FCC, Reformer, Hydrocracker, Rigorous Reactors, Refining Reactors", "reference": null}, {"id": "000101377", "problem_statement": "Which option should be followed to model the effect of fire on high water percentage systems?", "solution": "Below are some suggested approaches to deal with the presence of water in the Blowdown Analysis tool:\n1. You can ignore the presence of water and consider only hydrocarbons; however, this will lead to an overconservative result and an oversized blowdown system. \n2. If you model water as a component under component mapping, water is accounted for, but it is treated as miscible with hydrocarbons and may cause issues with the blowdown flash calculations.\n3. If you model water as a global free water phase with the simple fire method, the API heat flux is not applied to the water phase, which would lead to incorrect results. 
However, you can consider global free water and switch from simple fire to Stefan-Boltzmann fire, which accounts for radiative heat flux and can be applied to the entire vessel irrespective of liquid or aqueous phase.\nWith adjusted values for Stefan-Boltzmann you should be able to match the simple fire heat flux.\n\nNote: The Blowdown algorithm is not intended for high water % systems, and there is a chance you might encounter convergence issues.\n\nPlease refer to the KB articles below for adiabatic and fire scenarios:\nWater presence in the BLOWDOWN Analysis tool in Aspen HYSYS V9 and V10: https://esupport.aspentech.com/S_Article?id=000090284\nHow to handle water presence in Blowdown analysis tool for fire scenario: https://esupport.aspentech.com/S_Article?id=000101416", "keywords": "Blowdown, High water percentage, Stefan-Boltzman fire", "reference": null}, {"id": "000101416", "problem_statement": "Which method should be followed to handle a significant amount of water in Blowdown fire case scenarios?", "solution": "Please refer to the attachment for available options and considerations to handle the presence of water in Blowdown fire case scenarios", "keywords": null, "reference": null}, {"id": "000101662", "problem_statement": "Aspen Flare System Analyzer values for some cases are different while accessing shared files from other regions. How do I fix this issue?", "solution": "Please follow the steps below:\n1. Close all the Aspen Flare System Analyzer files\n2. In the search box next to Start on the taskbar, type control panel. Select Control Panel from the list of results.\n3. In Control Panel, go to: Clock and Region > Region > Additional settings, and swap the Decimal Symbol with the Digit Grouping symbol\n\n\n\n4. Apply the settings after swapping the decimal and thousands separators.\n5. Open Aspen Flare System Analyzer to verify the changes", "keywords": "Comma, Decimals, AFSA, System separators", "reference": null}, {"id": "000101768", "problem_statement": "How do I troubleshoot the enthalpy-related consistency errors when using the CPA (Cubic-Plus-Association) fluid package in Aspen HYSYS?", "solution": "At certain conditions you may experience an enthalpy-related consistency error with the CPA fluid package. This consistency error occurs when the PH (Pressure-Enthalpy) flash changes more than predicted in the initialization steps. If the enthalpy vs. temperature curve of the mixture is not steep enough around the solution, the resulting temperature may not coincide with the temperature of the TP (Temperature-Pressure) flash within the consistency tolerance specified. If this consistency error occurs, it is recommended to tighten the flash tolerance from the default value of 1E-4. You can try smaller tolerances such as 1E-6, 1E-7 or even 1E-8. This flash tolerance input can be found on the Stab Test page of the fluid package. This should eliminate the enthalpy-related consistency errors when using the CPA package.", "keywords": "CPA (Cubic-Plus-Association), Consistency Errors, Flash Tolerance", "reference": null}, {"id": "000101366", "problem_statement": "There are a lot of parameters and variables in the AllGlobals table, but there's no description in the documentation. What's the purpose of these settings?", "solution": "Most of the parameters and variables are of no relevance to the user. When using Aspen Plus Dynamics, we provide a table showing only the relevant settings. In Aspen Custom Modeler, the AllGlobals table shows everything that is declared with \"global\" attributes. 
That includes the parameters/variables/stringsets/integersets declared by the MaterialStream stream type.\n\nThe attached document provides a description of each parameter.", "keywords": null, "reference": null}, {"id": "000074601", "problem_statement": "When running IQconfig, the error message \"Error running Aspen Calc module\" is displayed when clicking on the \"Edit Calculation\" button.\nThis error occurs on machines with Aspen Calc installed.", "solution": "If possible, please apply the latest cumulative patches.\n * For Vista machines * This may be related to CQ00377502, which was addressed in Aspen Advanced Control Products aspenONE V7.1 Cumulative Patch 1.\n Suggested Troubleshooting Steps\nCheck that AFW is working, e.g. start AFW Security Manager.\n Check if Aspen Calc is installed and functioning:\n- Close all IQconfig.exe sessions.\n- Open the Services Control Panel (services.msc) and verify that the AspenTech Calculator Engine service is running.\nIf it is not running, start it. If it doesn't stay running, please verify that the service's Log On As account is still valid.\n- Start the Aspen Calc application.\nIf the Aspen Calc application does not start, please re-install.\nOnce it starts, please create a new calculation. The purpose is to identify if the Calculation Wizard starts. If you encounter an error when doing this, please check if your logon account is (a) included in Distributed COM Users; or (b) granted DCOM permissions. See KB 123261 for more information.\n- Start IQconfig and try to edit the calculation. If it fails, please proceed to the next step.\n Check the related Aspen Calc dlls used by IQconfig:\n- Close all IQconfig.exe sessions.\n- Ensure that you are logged on with administrative privileges.\n- Open a CMD console.\n- Enter these commands to register the following files:\nregsvr32.exe \"C:\\Program Files (x86)\\Common Files\\AspenTech Shared\\AtAppCalcWiz.dll\"\n\nregsvr32.exe \"C:\\Program Files (x86)\\Common Files\\AspenTech Shared\\atac_AspenCalcWrapper.dll\"\n- Start IQconfig and try to edit the calculation. If it fails, please proceed to the next step.\n Please send to esupport@aspentech.com the following data:\n- Screen capture of your error message.\n- Aspen Calc error log - C:\\Program Files\\AspenTech\\Aspen Calc\\Log\\AspenCalcMessageLog.txt.\n- Use Process Monitor and submit the information to esupport@aspentech.com.\nDownload and install the latest Process Monitor from Sysinternals (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx).\nClose IQconfig.exe.\nStart Process Monitor. Press CTRL+E and CTRL+X to stop and clear all events.\nOpen IQconfig.exe and navigate to this screen. Do not click on the button.\nSwitch back to Process Monitor and press CTRL+E to start capturing events.\nSwitch back to IQconfig.exe and click on the button. The error message will pop up.\nSwitch back to Process Monitor and press CTRL+E to stop event capturing.\nSave all events to a file in PML format.\n- Save the Windows Application, System and Security event logs in EVT format.", "keywords": "AspenCalc", "reference": null}, {"id": "000101766", "problem_statement": "Messages in the Control Panel and History file do not have units.\nHow do I know what units are used? Are they the units specified for the simulation?\nCan I change these units?\n\ne.g.\nSEVERE ERROR\nFLASH CALCULATIONS BYPASSED DUE TO UNREASONABLE SPECIFICATIONS. SPECIFIED\nTEMPERATURE (4.0000) IS LESS THAN THE LOWER LIMIT (10.0000).\n\nHowever sometimes there are units noted, e.g. 
\n * WARNING\n CHOKING CONDITION REACHED IN PIPELINE.\n CURRENT LENGTH= 5.613708D+02 FT TOTAL LENGTH= 1.049869D+04 FT\n PRES= 8.710520D+01 PSIA TEMP= 1.475600D+02 F\n MACH #= 1.027099D+00", "solution": "The units of measure of the values in the warning and error messages in the Control Panel (.cpm) or History (.his) files are SI units unless otherwise noted.\nIn the first message above, the temperature units are Kelvin (K).\nThe units of measure in the messages cannot be changed.\n\nThe Global Unit Set for the simulation is used for the Stream and Block results on the forms.", "keywords": null, "reference": "VSTS 1138074"}, {"id": "000101757", "problem_statement": "How to Activate the Aspen Utilities Planner Excel Add-In", "solution": "For versions before V12, please review KB article 000061797.\nFor versions V12 and later, follow these steps to activate the Utilities Planner Excel Add-in:\nOpen the Windows Start menu, select the folder Aspen Engineering Tools, then select Aspen Excel Add-In Manager and mark the checkbox for Aspen Utilities V12/V12.1/V14.\n\n\nTo check that the add-in has been added correctly, follow these instructions:\nOpen a new blank Excel file and go to File > Options\nSelect Add-Ins > Manage: COM Add-Ins > Go\n\n\nAfter this, select the Aspen Utilities checkbox; this completes the activation of the Aspen Utilities Planner Excel Add-in.", "keywords": null, "reference": null}, {"id": "000101100", "problem_statement": "What are some things to check when Aspen Plus is crashing?", "solution": "Below is a list of things to check and to send to Aspen Technology.\nIs the issue for one file or for all files? Is the issue reproducible?\nIs there a log file? The log files should be in \"C:\\Users\\UserName\\AppData\\Local\\AspenTech\\Aspen Plus V1x.x\". The AppData folder might be hidden on the customer's computer. If this is the case, just copy \"C:\\Users\\UserName\\AppData\\Local\\AspenTech\\Aspen Plus V1x.x\" and paste it into Windows File Explorer. Replace UserName with the real user name. After the crash happens, go to this folder and select the latest log file in this folder. We suggest deleting all existing log files before trying to reproduce the crash. \nHas the latest Cumulative Patch been installed? Have any Emergency Patches been installed?\nRun the set version utility: https://esupport.aspentech.com/S_Article?id=0000512\nDo Aspen Properties and HYSYS have the same issue? Do they both run?\nPlease see if there is any \"anti-malware\" or \"virus scanner\" software installed on the machine. If there is, you can try disabling that software temporarily and rerunning Aspen Plus. Once it is confirmed that the issue is with that software, you can \"whitelist\" the Aspen Plus directories in it and re-enable it.\nIs an English operating system used or is there a localized version? Check if the issue still happens when using English settings.\nCheck how the file is opened.\nThere are multiple ways to open a file. \n-Open Aspen Plus, then open a file from File | Open. \n-Double click a bkp file to open it. \n-Open a file from a network drive (if using this approach, use a local drive to test instead.)\nDo all methods crash?\nAll methods will require write access to the working directory. \nCopy this example bkp to a local folder with write access:\n\"C:\\Program Files\\AspenTech\\Aspen Plus V12.1\\GUI\\Examples\\Bulk Chemical\\pfdtut.bkp\"\nTry both ways to open the file and see if it makes any difference. \nCheck using Admin rights. 
Right click to launch Aspen Plus with admin rights if possible.\nConfirm the engine runs OK:\nCopy this file to a new folder:\n\"C:\\Program Files\\AspenTech\\Aspen Plus V1x.x\\Favorites\\testprob.inp\"\nLaunch \"Customize Aspen Plus V1x.x\" from the Windows Start menu; it will launch a command window. \nNavigate to the folder, enter \"aspen testprob.inp\", then hit Enter\n\n\nWhen done, it should show something like this. Please confirm if you see this too. \nIf there is something different, e.g. a pop-up error, screenshot the command window. Also zip the entire working directory and send it over. \n\nValidate the Fortran runtime libraries.\nhttps://esupport.aspentech.com/S_Article?id=000098959\nIf using roaming user profiles, try changing the working directory. See Knowledge Document 101890 for details. \nTurn on the UI full log. \nThis is done by creating a new shortcut to AspenPlus.exe, placing it on the desktop and editing its Properties by adding the text /full_log to the \"Target\" line as shown below:\n\"C:\\Program Files\\AspenTech\\Aspen Plus Vxx.x\\GUI\\Xeq\\AspenPlus.exe\" /full_log\n\nSend the log file found in the following directory:\nC:\\Users\\[your user name]\\AppData\\Local\\AspenTech\\Aspen Plus Vxx.x\n If the Aspen Plus engine works in the previous step, and the other items don't help to narrow down the issue, install Process Monitor (procmon) and save and send the log (it will be large). \nhttps://esupport.aspentech.com/S_Article?id=000098961", "keywords": null, "reference": null}, {"id": "000101718", "problem_statement": "By default, Aspen ProMV uses APM site licensing. If you have MSC tokens you would like to use for ProMV, follow the steps below.\nMSC token licensing is only compatible with ProMV V12.0.5 and up.", "solution": "Open the Aspen ProMV License Declaration Utility as an administrator from the Windows Start menu\nSelect the option MSC and click OK\nFor ProMV Online users, it is necessary to restart IP.21 after making this change:\nOpen the Aspen InfoPlus.21 Manager\nClick START InfoPlus.21 and then Yes\nAspen ProMV will now be set up and configured to use MSC tokens\n\nTroubleshooting Common Errors\nLicense checkout failed. There was a problem in getting the license SLM_RN_APM_PROMV_STD\nCause:\nAspen ProMV is looking for the corresponding APM license, but it could not be found. If you are expecting Aspen ProMV to work with MSC tokens, then this is due to not having Aspen ProMV configured to use the MSC tokens.\nSolution\nFollow the above steps to switch to using MSC token licensing.\n License checkout failed. There was a problem in getting the license: SLM_RN_APM_PMVT\nCause:\nAspen ProMV is looking for the MSC token license, but it could not be found. 
If you are expecting Aspen ProMV to work with MSC tokens, then this is due to 1) not having the updated MSC license file, or 2) not pointing to the proper SLM Server.\nSolution\nOn the machine where Aspen ProMV is installed, open the aspenONE SLM License Manager\nClick on License Profiler\nSelect the license server in the dropdown\nClick on the Load Information button\nCheck that the following licenses appear in the table:\nSLM_RN_APM_PMVT\nSLM_RN_APM_PMVT_BATT\nIf you are using ProMV Online, also check for:\nSLM_RN_APM_PMV_ON_CONR\nSLM_RN_APM_PMV_ON_BATR\nSLM_RN_APM_PMV_ON_VIEWR\nIf these licenses do not appear, you need to add the proper SLM Server with them, or get the updated MSC license file (contact CS&T or your SAM).", "keywords": "Aspen ProMV License Declaration Utility\nProMV MSC token license", "reference": null}, {"id": "000101750", "problem_statement": "When and how to use the Project-Wide Run Schedule in Aspen OnLine?", "solution": "You can use a Project-Wide Run Schedule if you want to:\n Schedule Equation Oriented (EO) or Real-Time Optimization (RTO) model runs to reproduce the behavior of previous versions (V11 or older), which ran EO models whenever a steady state was reached.\nSchedule Sequential Modular (SM) model runs to occur sequentially, and optionally dependent on another run completing successfully, instead of each model being run independently at pre-set times. This is useful when you have two identical models for calibration and optimization, respectively.\n\nThis document shows two examples of using a Project-Wide Run Schedule in Aspen OnLine.", "keywords": "Aspen OnLine, Run Schedule, Project-wide", "reference": null}, {"id": "000101722", "problem_statement": "Is there a troubleshooting guide for Aspen OnLine?", "solution": "This document summarizes procedures for resolving the problems you may encounter while using Aspen OnLine.\n\nPlease download the attached PDF document with some tips for troubleshooting issues related to Aspen OnLine.", "keywords": "Aspen OnLine, troubleshooting guide, AOL", "reference": null}, {"id": "000101723", "problem_statement": "Jump Start Guide - Installation and Configuration of Cim-IO for Aspen OnLine and ADSA for Plant Data", "solution": "This document describes the digital twin solutions, their use cases and recommended architecture.\n\nIt summarizes the step-by-step procedure for installing and configuring:\nthe Cim-IO interface,\nthe Cim-IO client,\nthe Process Data Server and Aspen Data Source Architecture (ADSA).\n\nIn addition, it covers the steps for verifying data access via:\nthe Cim-IO Test API,\nAspen OnLine,\nthe Plant Data feature in Aspen Plus and Aspen HYSYS.", "keywords": "Aspen OnLine, ADSA, Plant Data, Cim-IO", "reference": null}, {"id": "000101756", "problem_statement": "The Rack Section drawing will crash when applied to a Piperack with \"_\" in the ID.", "solution": "The simple workaround is to remove the \"_\" character from the ID tag or replace it with \"-\".\n Fixed in Version\nVSTS 961056: R&D is looking to include the fix in the next CP for V14.2.\nKeyWords\nOptiPlant V14.0, V14.2", "keywords": null, "reference": null}, {"id": "000101754", "problem_statement": "If the path to the reference file contains a comma, the 1st line and 2nd line are cut off at the comma and can be selected independently even though they point to the same file. 
See screenshots below.\n\nThe file link in the example is: C:\\Users\\NGUYENC\\OneDrive - Aspen Technology, Inc\\Desktop\\SSCANFINER_2dDrawing.dwg\n \n\n\n\nAfter detaching, half of the long link is still there even though the reference file has been removed from the Layout.", "solution": "Unfortunately, there is no workaround for this issue; it is just a visual bug.\n Fixed in Version\nIn V14.2, we generate an error message that requires the user to remove the comma from the file path.\n\n\nKeyWords\nOptiPlant V14.0", "keywords": null, "reference": null}, {"id": "000101753", "problem_statement": "Example: Creating a Case Study in Aspen HYSYS analyzing how the mass flow responds to a varied pressure.", "solution": "The Case Study tool allows you to monitor the steady state response of key process variables to other changes in your process. It contains built-in reporting tools to easily present your results in tabular or graphical formats.\nIn this example, a recombination of test separator data creates a crude oil stream that is mixed with a light gas stream to recreate a source production fluid. This fluid is then sent to two stages of separation so that the gas and liquids are separated at two different pressure levels.\nTo create the Case Study, follow these steps:\nClick the Case Studies button in the Home tab of the HYSYS ribbon, or from the Navigation Pane select the Case Studies folder and click the add button.\n \nOn the Case Study form, select the Variables Selection tab. In the Independent Variables section, click the Find Variables button.\nNote: Dependent variables are calculated by HYSYS and are shown in black numbers, while independent variables are specified by the user and are shown in blue numbers.\n The Variable Navigator view appears, allowing you to select from the Input variables. Select HP Gas | Pressure and click Add to add the variable. Then click Done to close the window.\nIn the Dependent Variables section, click the Find Variables button. Select HP Gas | Mass Flow and click Add.\n\nNote: You can add more than one dependent variable and edit the current values for each variable in the Current Values column.\n You should now have the variables listed in the Case Study, the first one being an Independent variable, and the second a Dependent variable.\nOn the Case Study Setup tab, you can define the calculation states of the independent variable you wish to manipulate. In this case, HP Gas Pressure is the independent variable and it will be evaluated over the following range:\n Independent Variable Value\nStart 2068 kPa (300 psia)\nEnd 3447 kPa (500 psia)\nStep Size 344.7 kPa (50 psi)\n On the Case Study Setup tab of the Case Study form, from the Case Study Type drop-down list, select Nested and then click the Run button to calculate the Case Study.\nAfter the Case Study runs, you can click on the Results tab to view the tabular results of the Case Study.\nSelect the Plots tab to view a graphical representation of the results.", "keywords": null, "reference": null}, {"id": "000101725", "problem_statement": "A customer recently reported a memory issue with an executable which is part of the Aspen Production Record Manager (APRM) product group. They had recently upgraded from aspenONE V9.1 to V14. They had been using APRM and the Batch Conversion Utility (BCU) since before V7.3 and had never seen this problem before. It seemed that the Aspen Production Record Manager Services (Batch21Services.exe) Windows service had very high memory usage. High CPU usage may have also been related. 
They needed to reboot the APRM server (or restart the APRM service) every week or else the memory used by the process would get very large. After restarting Batch21Services.exe, it would start at 250MB but periodically jump and grow to 1.6GB+. This article describes the result of the investigation.", "solution": "AspenTech tried to reproduce the problem using the BCU and populating over 30,000 subbatch instances. No extreme growth in memory was observed. Nevertheless, it was noted that our testing was done using Microsoft SQL Server.\nThe customer proactively patched the Oracle client to see if perhaps the Oracle OLEDB Provider was causing the memory leak. After patching, they saw the Batch21Services.exe memory only fluctuate in the 85-110MB range. This was compared to before applying the Oracle patch, where they could only reprocess a few days' worth of data before the memory grew to 1GB+.\nThe aspenONE V14 Platform Support simply indicates support for \"Oracle Client 19.x 64-bit\". After this investigation, it appears to be proven that you should not use just any 19.x version of the client; somewhere between versions 19.3 and 19.20, a significant memory leak was fixed, likely the following issue:\nhttps://support.oracle.com/knowledge/Oracle%20Database%20Products/2765825_1.html#FIX - \"19c ODBC Driver Cause Memory Leak When Connecting to 19c DB (Doc ID 2765825.1)\"\nIn detail, the customer patched the default Oracle 19c (19.3) client using Oracle Patch 35348034 \"Microsoft Windows 32-Bit & x86-64 Bundle Patch 19.20.0.0.230718\". This patch was current as of October 2023. Because APRM is 64-bit, they were using the 64-bit Oracle client and had to apply the 64-bit patch. The client patch also required that they obtain Patch 6880880 to get a newer version of the 64-bit OPatch patching utility. OPatch is what actually installs the files from the client patch.\nThanks for identifying the solution go to: Leonard Clark, Cornerstone Controls.", "keywords": "recommendation\nperformance\ncrash", "reference": null}, {"id": "000101737", "problem_statement": "Getting \"main_2014.mdf could not be load\" while launching EDR. \n\n\n\nWe have also seen cases where Aspen Plus V14.x gets stuck at startup and shows a similar EDR error after a while.", "solution": "1- Please make sure main_2014.mdf exists at C:\\ProgramData\\AspenTech\\Aspen Exchanger Design and Rating V14.0\\Administrator (Administrator is the current username)\n\n2- Please run the command prompt as administrator and execute the command below:\n\nsqlcmd -S \"(localdb)\\MSSQLLocalDB\" -q \"select name from sys.databases\"\n\nIn a working environment, you will get these results. \n\n\n\nIn a non-working environment, you may get these results. \n\n\n\n3- Execute the following commands one by one:\n\nsqllocaldb stop MSSQLLocalDB\nsqllocaldb delete MSSQLLocalDB\nsqllocaldb create MSSQLLocalDB\n\n\n\n4- Execute the command in point 2 again to see if you get good results. \n\n5- Launch EDR to verify", "keywords": "main_2014.mdf\nnot be loaded\nstuck", "reference": null}, {"id": "000101735", "problem_statement": "While installing aspenONE V14.x, you may get Error 1720. There is a problem with this Windows Installer package. A script required for this install to complete could not be run.\nContact your support personnel or package vendor. 
Custom action ExitIfUnsupportedOS script error -2147221020.", "solution": "Please follow the steps below to prevent this error message:\n\n1- Copy the installation files to the local computer\n\n2- Replace the aspenONEEngineeringV14_x64.msi with the one in the attachment \n\n\n\n3- You should not see this error if you try to install Aspen again. \n\nNote: You will find aspenONEEngineeringV14_x64.msi and aspenONEEngineeringV14.2_x64.msi in the attachment only.\nIf you are using any other media, please request the related msi file from the AspenTech support team.", "keywords": "1720\n-2147221020\nmsi", "reference": null}, {"id": "000101748", "problem_statement": "How to purchase three Sweet Crudes in the ratio of 3:2:1 in Aspen PIMS?", "solution": "At times, Refinery Planners or the Procurement Committee might want to purchase different crudes in a specific ratio. This can be achieved in Aspen PIMS by following a very simple step of including Table Ratio.\n\nLet us consider the Volume Sample Model, where you can purchase three light crudes - Alaskan N Slope (ANS), North Sea Forties (NSF) and Tiajuana Light (TJL) - in the ratio of 3:2:1. In this scenario, we can create Table Ratio in the following fashion:\n\n\n\nOnce we give this as the input and execute the model, PIMS will create the following matrix structure consisting of two E-rows to achieve the given ratio:\n\n\n\nYou can also review the Full Solution Report:\n\nHere, we can notice that the three crudes are being purchased in the ratio specified by the user in Table Ratio.\n\nThe sample model has been attached along with the KB article for your further reference.", "keywords": null, "reference": null}, {"id": "000101465", "problem_statement": "How to identify if any of the tanks in an APS Model are breaching minimum / maximum capacity?", "solution": "Users can identify the tanks breaching minimum / maximum capacity in the following ways:\n1. Go to \"Events\" -> \"Inventory Problems\" and see the entire log of tanks:\n \n\n 2. Apart from this, APS will highlight values above Max (with red dots) and below Min (with green dots) on the trend:", "keywords": null, "reference": null}, {"id": "000101464", "problem_statement": "Why do I receive Error code 18 after disabling TLS 1.0 and enabling TLS 1.2 settings in the APS Application Server?", "solution": "You might be receiving Error code 18 after disabling TLS 1.0 and enabling TLS 1.2 settings on your APS Application Server because you selected the \"SQL Server\" driver while creating the .dsn file.\nThis happens with TLS 1.2 because the \"SQL Server Native Client\" driver is supported whereas the \"SQL Server\" driver is not.\nTherefore, if your computer only supports TLS 1.2, the problem can be easily rectified by choosing \"SQL Server Native Client\" as the ODBC driver while creating the .dsn file.", "keywords": null, "reference": null}, {"id": "000101667", "problem_statement": "This video solution outlines how to add a new Cim-IO logical device on the IP.21 server using the application Cim-IO Test API.", "solution": "", "keywords": null, "reference": null}, {"id": "000101739", "problem_statement": "How can we report the Catalytic Cracker Conversion for an FCCU unit in Aspen PIMS?", "solution": "Most refineries have an FCCU unit, which will be modelled in your PIMS LP structure. 
Economists and refiners are usually interested in calculating and reporting the Cat Cracker conversion. There are different ways to calculate it; this KB article outlines one of the methods. \n\n- Usually, the conversion is calculated as the feed throughput minus the unconverted fractions, expressed as a percentage of the overall feed. Let us consider the following formula:\nCracker Conversion % = [Feed - (LCO Yield + Slurry Yield)] * 100 / Feed\n- The above equation can be incorporated into the FCCU submodel by introducing control rows and Parameter rows.\n- We can introduce two E-Rows - for equating (i) LCO yield and (ii) Slurry yield into new user-defined columns - lco and slr:\n \nNote: Here all the coefficients in the E-Rows are the same as the yields of LCO and SLR.\n\n- Lastly, we can add Numerator and Denominator type Parameter Rows to report the Cat Cracker Conversion %:\n\n Note: Here, CFP is the vector which consists of the activity of all the feeds to the unit.\n\nOnce you execute the model, you will be able to find the Conversion % calculated and reported in the Full Solution Report as follows:\n\n\nThe model is attached along with the KB article for your further reference.", "keywords": null, "reference": null}, {"id": "000101740", "problem_statement": "How to control the Atmospheric Residue to Feed Ratio in an FCC unit in Aspen PIMS?", "solution": "Most refiners wish to control the Atmospheric Residue to Feed Ratio in their FCC units to limit the carbon content entering the unit. When too much carbon from the vacuum resid portion of the Atmos Resid enters the Cat regenerator, excessive heat is generated since more coke is burnt off. This condition is not good for the Regenerator or the environment. \n\nThis ratio can be controlled at the planning stage itself by incorporating it into our FCCU submodel in PIMS. Let us assume that the AR to Feed ratio should not exceed 5%. There are two methods to achieve this.\n\nMethod 1: Usage of Control Rows\nWe can introduce an L (less than or equal to) type row in the following manner:\n \nThis equation will help PIMS maintain the percentage at 5.\n\nAfter executing the model, we can check the imposition by analyzing the feed pattern in the Full Solution Report:\n \n\nMethod 2: Usage of Process Limit Rows\nWe can introduce a ZLIMxxx row to incorporate the Atmospheric Residue to Feed % calculation as mentioned below:\n \n\nNow, we need to give a corresponding entry in Table PROCLIM to constrain this percentage to less than or equal to 5:\n \n\nAfter executing the model, we can check this in both the Process Limit Summary and the Submodel Summary of the Full Solution Report:\n \n\n \nThe PIMS model is attached along with the KB article for your further reference.", "keywords": null, "reference": null}, {"id": "000101738", "problem_statement": "How to specify the batch cycle time, and what are the different options in the Batch reactor?", "solution": "For batch processes, Aspen Plus provides a couple of unit operations to facilitate a batch process environment:\n BatchProcess\nBatchSep\n\nIn the BatchOp, we have multiple ways to specify the batch cycle. 
The options are available on the Specifications tab:\nStart batch empty\nBatch charge\nBatch feed time\nBatch discharge time\nDown time\n\n\nThe choice of parameter depends on data availability and the assumptions appropriate to the BatchOp configuration.\n\nThe Start batch empty option enables the user to define only the down time and batch discharge time, assuming the batch feed time is at the 0th hour. Initially, the unit operation is empty, except for pad gas if the option to use it is specified.\n\nThe Batch charge option enables the user to specify the initial quantity in either moles or mass. Note that this parameter is tied to the batch charge stream's flowrate: the batch charge equals the batch feed time multiplied by that flowrate.\n\nBatch feed time lets users add the batch charge by accumulation. Note that the resulting charge is the same as the batch feed time multiplied by the batch charge stream's flowrate. With this option, batch down time and batch discharge time are also available.\n\nKey words\n\nBatch cycle time, Batch process, Aspen Plus", "keywords": null, "reference": "https://esupport.aspentech.com/S_Article?id=000084096"}, {"id": "000075106", "problem_statement": "Which AspenTech folders should be given exceptions in antivirus software?", "solution": "We recommend adding exceptions for all the folders below, as antivirus software can falsely block AspenTech software. Antivirus software scanning AspenTech applications can also affect their performance.\n\nExclude executable files and folders in the following AspenTech directories:\n\nOn the machines where Aspen ENG products are installed:\nC:\\Program Files (x86)\\AspenTech\\\nC:\\Program Files (x86)\\Common Files\\AspenTech Shared\\\nC:\\Program Files\\AspenTech\\\nC:\\Program Files\\Common Files\\AspenTech Shared\\\nC:\\ProgramData\\AspenTech\\\nC:\\Users\\Public\\Documents\\AspenTech\nC:\\Users\\All Users\\AspenTech\nC:\\ProgramData\\SafeNetSentinel\\Sentinel RMS Development Kit\\System\n \nOn the machines where the License Server is installed:\nC:\\Program Files\\Common Files\\SafeNet Sentinel\\\nC:\\Windows\\SysWOW64\\servdat.slm", "keywords": "Antivirus\nFirewall\nException\nExclusion\nExclude\nSecurity software", "reference": "AspenTech Security Notifications Email List\nWhat network security settings are required to access AspenTech web content or receive email from AspenTech?"}, {"id": "000067210", "problem_statement": "Is it possible to model Vinyl Acetate (VAC) monomer production in Aspen Plus?", "solution": "The attached example is a model of a process for the production of vinyl acetate from its raw materials acetic acid and ethylene. The model provides an example of how to model the different areas starting from a set of components and physical property parameters.\n\nThe model is not intended for equipment design or for specifying other engineering documents without further review by a process engineer with experience of VAC processes.\nThe model includes:\nA nominal set of chemical species and property parameters for this process.\nTypical process areas including preheating, VAC reactions, CO2 separation, VAC purification, and the main streams connecting these units.\nKey process control specifications such as the pure acetic acid flow rate, the ethylene flow rate, and specifications for the distillation column.\nThis model is based upon information included in the following paper:\nLuyben, M. L.; Tyreus, B. D. \"An Industrial Design/Control Study for the Vinyl Acetate Monomer Process.\" 
Computers & Chemical Engineering, 1998, 22, 867.\n1. Components and Chemical Reactions:\nThe following components represent the chemical species present in the process:\nTable 1. Components Used in the VAC Model\n ID | Type | Component Name | Databank Alias\n CO2 | CONV | Carbon dioxide | CO2\n C2H4 | CONV | Ethylene | C2H4\n O2 | CONV | Oxygen | O2\n ACE | CONV | Acetic Acid | C2H4O2-1\n VAC | CONV | Vinyl Acetate | C4H6O2-1\n WATER | CONV | Water | H2O\n There are six components in the VAC process. Ethylene (C2H4), pure oxygen (O2) and acetic acid (ACE) are converted into vinyl acetate (VAC). Carbon dioxide and water are produced as by-products.\nThe reactions modeled are:\nC2H4 + ACE + 1/2 O2 ---> VAC + WATER (25% conversion of C2H4)\nC2H4 + 3 O2 ---> 2 CO2 + 2 WATER (5% conversion of C2H4)\nThis model assumes fixed fractional conversions for each reaction. A more detailed model could include the reaction kinetics; this would require fitting of kinetic parameters to experimental data.\n 2. Process Description\nThis process model includes the following units:\nTable 2. General Unit Operations Used in the VAC Process\n Unit | Purpose\n Preheater | Feed preheated to a certain temperature before entering the reactor\n VAC Reactor | Simplified simulation with stoichiometric reactions\n CO2 Separation | By-product CO2 removal from the process\n VAC Purification | Purify VAC\n ACE Recycle | Unconverted acetic acid recycled back to the reactor\n3. Physical Properties\nThe WILS-LR property method has been used in vapor-liquid thermodynamic calculations. It involves: a) the Wilson activity coefficient model for the liquid phase, b) the ideal gas equation of state for the vapor phase, c) enthalpies with liquid as the reference state for all components, d) the Rackett model for liquid molar volume, and e) Henry's law for supercritical components.\n4. Simulation Approach\nUnit Operations - Major unit operations in this model have been represented by Aspen Plus blocks as shown in Table 3.\nTable 3. Aspen Plus Unit Operation Blocks Used in the VAC Model\n Unit Operation | Aspen Plus Block | Comments / Specifications\n Preheating | HEATER | Feed is preheated before entering the reactor using a simple heater block\n VAC production | RSTOIC | Simplified simulation with stoichiometric reactions\n CO2 separation | SEP2 | Simplified operation demonstrating the need to separate CO2 from the process; a more rigorous adsorption-based model would need to be developed\n VAC purification | RADFRAC | Rigorous multi-stage distillation model; 20 theoretical stages\n VAC/ACE separation | DECANTER | 3-phase decantation\n Streams - Streams represent the material and energy flows in and out of the process. Streams can be of three types: Material, Heat, and Work.\n5. Simulation Results\nThe Aspen Plus simulation flowsheet is shown in Figure 1.\nFigure 1. VAC Flowsheet in Aspen Plus\n Key simulation results are presented in Table 5.\n Table 5. Key Simulation Results\n Plant capacity (VAC) | 10950 | kmol/day\n Acetic acid feed | 11760 | kmol/day\n Ethylene feed | 21360 | kmol/day\n Oxygen feed | 12562 | kmol/day\n Product VAC purity | 0.86 | mole fraction\n 6. Conclusion\nThe VAC model provides a useful description of the process. The model demonstrates the use of various unit operations as a guide for understanding the process and the economics. 
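As a rough, self-contained illustration of the fixed-fractional-conversion approach used in the RSTOIC block, the sketch below applies the two reactions to a single pass through the reactor. Note that the inlet flows reuse the fresh feed values from Table 5 purely for illustration; the real reactor inlet also contains recycled acetic acid, so these outlet numbers are not the converged flowsheet results.

# Single-pass RSTOIC-style mass balance for the two VAC reactions (Python).
# Conversions are the fixed fractions of inlet C2H4 consumed by each reaction.
STOICH = {
    "R1": {"C2H4": -1.0, "ACE": -1.0, "O2": -0.5, "VAC": 1.0, "WATER": 1.0},
    "R2": {"C2H4": -1.0, "O2": -3.0, "CO2": 2.0, "WATER": 2.0},
}
CONVERSION = {"R1": 0.25, "R2": 0.05}

def rstoic_outlet(inlet):
    """Return outlet molar flows (kmol/day) for fixed fractional conversions of C2H4."""
    outlet = dict(inlet)
    for rxn, coeffs in STOICH.items():
        extent = CONVERSION[rxn] * inlet.get("C2H4", 0.0)  # key-component basis
        for comp, nu in coeffs.items():
            outlet[comp] = outlet.get(comp, 0.0) + nu * extent
    return outlet

feed = {"C2H4": 21360.0, "ACE": 11760.0, "O2": 12562.0}  # fresh feed, Table 5
for comp, flow in rstoic_outlet(feed).items():
    print(f"{comp:6s} {flow:10.1f} kmol/day")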
It is a starting point for more sophisticated models for plant design and specifying process equipment.", "keywords": "VAC, Ethylene, monomer, Wilson-LR", "reference": null}, {"id": "000101731", "problem_statement": "What is the difference between the evaluation options for Aspen Plus user-defined components?", "solution": "Aspen Plus provides an option in Components to define custom (user-defined) components, in order to develop components which are not present in the currently available databanks. During this process of creating components, the user will find a couple of options:\nEvaluate using NIST TDE\nEstimate using Aspen Property Estimation system\n\n\nWhile evaluating the user-defined component, you can use either option to create custom components. However, there is a slight difference when it comes to accuracy.\n\nWhen you click on Evaluate using NIST TDE, it will evaluate based on the National Institute of Standards and Technology (NIST) database only.\n\nIf you select the second option, Estimate using Aspen Property Estimation system, the estimation is based on the following:\nMolar volume data\nVapor pressure data\nExtended Antoine vapor pressure coefficients\nIdeal gas heat capacity data\nIdeal gas heat capacity polynomial coefficients\nThe more data you provide as input, the more accurate the generated component will be.\nFor more accuracy, the following property data also help when creating a user-defined component:\nMolecular weight\nNormal Boiling Point\nSpecific gravity at 60 deg F\nIdeal gas Enthalpy of formation\nIdeal gas Gibbs energy of formation", "keywords": "NIST, Aspen Plus Properties, User defined components", "reference": "https://esupport.aspentech.com/S_Article?id=000055648\nhttps://esupport.aspentech.com/S_Article?id=000084089"}, {"id": "000101729", "problem_statement": "What are the differences between Tables UPOOL and VPOOL?", "solution": "In our refinery PIMS model, we might sometimes use the tables UPOOL and VPOOL for creating different types of pooling structures. This article outlines the different usages of both tables. \n\n1) Table UPOOL:\nLP modelers generally make use of Table UPOOL to set up a quick User Defined Pooling / Shorthand Pooling structure. Let us consider an example of setting up a pool for different naphtha streams which are ultimately going to be fed to a Reformer unit. You may want to set up this pooling structure as this might be the actual design setup in your refinery. In this case we will have to set up the following UPOOL table:\n\n\nPIMS will now set up the matrix structure for the above setup similar to that of a Pooling type of submodel, wherein a recursed pool - RFD - consisting of the components MT1, DCT and HCH will be set up:\n\n\nThe above structure shows that the properties NPA and SPG are being recursed for the pool RFD. This can also be verified in the Validation Summary or Full Solution Report:\n\n\n2) Table VPOOL:\nOn the other hand, Table VPOOL is set up when modelers want to achieve virtual pooling. Virtual pools are compositional pools of real streams in the model that travel together (as if in a pipe) to common destinations. Let us consider an example of two process streams - LCN and ALK - which are traveling together to form different blends - URG and UPR. 
If we want to achieve the same composition of both streams in URG and UPR, then we can set up the following VPOOL table:\n\n\nBased on the above input table, the following matrix structures will be triggered by PIMS:\n\n\nThe above structure denotes that Aspen PIMS automatically creates table Svpl to recurse on the pool composition. This will be added to the internal submodel list (but not to the model tree). In addition to this, PIMS also automatically builds a special submodel table Sxxx for each blended product XXX receiving VP1 as a blending pool - this is created to maintain the ratio of the pool members to the blend XXX. This can also be verified in the Validation Summary:\n\n\n\nIn conclusion, both tables are used for creating pooling structures; however, the modeling requirements that call for these structures are different. If you would like to learn more, you may also refer to the \"Weight Sample\" AspenTech PIMS sample refinery model.", "keywords": null, "reference": null}, {"id": "000101727", "problem_statement": "How can we blend one Gasoline Blend into another Gasoline Blend in Aspen PIMS?", "solution": "In our refinery operations, there are instances where blending one gasoline/diesel blend into another blend (known as cascaded blending) becomes necessary due to real-time factors. This knowledge base article outlines the table setup needed in the Aspen PIMS model. For illustration, consider the following design requirement involving two gasoline blends, URG and UPR:\n\n\n\nThis can be achieved in Aspen PIMS by following the steps below:\n1) We first need to declare both blends - URG and UPR - as Specification blends in Table BLENDS: \n\n \n2) Secondly, we need to list the components / blend stocks of both blends in Table BLNMIX:\n \n\n3) We can now give the specifications for both blends in Table BLNSPEC:\n \n\nKeep in mind that both blends must have similar specs, though with different limits; otherwise this can lead to infeasibility.\n\n4) Lastly, we need to provide initial estimates in Table PGUESS for all specs of URG identified in Table BLNSPEC:\n \n\nThis is done because the properties of URG must be visible to the spec blending rows for UPR; the PGUESS entry forces the URG properties to be recursed.\n\nWe can then provide the necessary constraints and price in Table SELL as per our business requirements. Once we execute the model, we can notice that URG is now blended into UPR:\n\n \nThe PIMS model with this type of modeling is attached along with the KB article for your further reference.", "keywords": null, "reference": null}, {"id": "000101721", "problem_statement": "How to simulate a special baffle (partial baffle or ear baffle) for the vibration analysis in a Shell and Tube heat exchanger?", "solution": "In Aspen Shell and Tube Exchanger rating and design, there is no direct option to select a partial baffle or ear baffle; EDR cannot model these baffles in its vibration analysis.\n\nHowever, as a workaround, we have the special inlet nozzle support option available in EDR for vibration analysis, as depicted in the following image:\n\nExchanger Geometry | Baffles/Supports | Tube Support | Special inlet nozzle support - Select Yes\n \nThe special inlet nozzle support provides additional support to the tubes nearest to the inlet nozzle. It only affects Vibration: Natural Frequency. This is a special support applied only to the first tube row after the inlet nozzle. 
Unlike Intermediate Supports, this support can be applied to either Tubes in Window or no Tubes in Window bundles. Similar to intermediate supports, it is assumed that the support does not affect fluid flow.\n\nThe user should enter a sensible Distance to Shell Side Nozzle (at inlet) to locate the support. Note that for some exchangers, the inlet nozzle may be furthest from the front head. The support is positioned along the first tube row after the inlet nozzle, on the center-line of the nozzle. It is common to have the support attached to an Impingement Plate.\n\nFrom Vibration: Entry conditions, this location can suffer from specific vibration problems. This item may help to reduce or remove vibration problems in the inlet area.", "keywords": "Special nozzle, tube, support, partial baffle, ear baffle", "reference": null}, {"id": "000101695", "problem_statement": "Detailed description of vulnerability CVE-2023-44487 as advised on the https://nvd.nist.gov/ website:\nThe HTTP/2 protocol allows a denial of service (server resource consumption) because request cancellation can reset many streams quickly, as exploited in the wild in August through October 2023.\n\nThis article describes AspenTech's response to vulnerability CVE-2023-44487.", "solution": "CVE-2023-44487, AKA the \"Rapid Reset\" vulnerability, is described as: \"The HTTP/2 protocol allows a denial of service (server resource consumption) because request cancellation can reset many streams quickly, as exploited in the wild in August through October 2023.\"\n\nTo protect against this vulnerability, concerned customers can disable the HTTP/2 protocol.\n\nMicrosoft Internet Information Services (IIS)\nHTTP/2 can be disabled for Default Web Sites supporting HTTPS from the Edit Site Binding dialog such as below. (Earlier Windows Server versions may not support this dialog setting; for those, the user should check the Microsoft documentation.)\n\n\nNote: At this time, IIS only supports HTTP/2 for HTTPS.\n\nApache Tomcat\nThe Aspen deployment of Tomcat 9 is not configured to support HTTP/2. To enable HTTP/2, the HTTP Connector would require configuration of an UpgradeProtocol implementation such as \"org.apache.coyote.http2.Http2Protocol\".\n\nIf unknown modifications have been made to the deployed Tomcat configuration, customers can confirm HTTP/2 is not supported by verifying there is no \"UpgradeProtocol\" element (e.g. <UpgradeProtocol className=\"org.apache.coyote.http2.Http2Protocol\" />) in Tomcat's \"conf/server.xml\" configuration file.", "keywords": "Vulnerability\nSecurity\nCVE-2023-44487\nIIS\nHTTP\nHTTPS\nHTTP/2", "reference": null}, {"id": "000101706", "problem_statement": "What is the difference between Sensitivity Analysis and Design Spec?", "solution": "Design specifications and sensitivity analyses are distinct features used in different use cases. Both are accessible from the model palette and the navigation pane, and both can be hidden or displayed on the flowsheet.\n \n\nDesign Specification:\nDesign specifications are used to attain a desired output by setting targets on key process parameters.\nThis option can be found under Flowsheeting Options and in the model palette.\nThis function is based on targeted constraints and is used to steer the simulation towards a specific condition. 
This function is objective driven; objectives could include reaching a product purity, increasing yield, or meeting regulatory constraints.\nExample: You can utilize a design specification to set up a target for the conversion rate in a reactor, or you can give the desired purity of a product stream.\nThis feature is implemented to ensure that the process satisfies specific requirements.\n\n Sensitivity Analysis:\nSensitivity analysis is used to assess the impact of variations in input parameters on output variables or key performance indicators. It lets us understand how modifications to certain factors influence the process as a whole.\nThis option can be found in Model Analysis Tools and in the Model Palette.\nThis analysis is often used during the design and optimization stages to identify the critical parameters and assess the robustness of the process.\nExample: You can perform a sensitivity analysis to know how changes in the composition of the feed, temperature, or pressure affect the yield or selectivity of a specific product.\nSensitivity analysis involves systematically changing input parameters and observing the resulting changes in output variables.\nIn summary, design specifications are employed to set specific targets or constraints, leading the simulation towards a desired outcome. Sensitivity analysis, on the other hand, is a broader tool used to explore the sensitivity of the process to changes in input parameters, helping to identify influential factors and assess the overall robustness of the system. Both tools are valuable in the process simulation and optimization workflow, each serving a distinct purpose in achieving the desired process performance.", "keywords": "Sensitivity Analysis, Design Specifications, Aspen Plus Manipulators", "reference": "1. https://esupport.aspentech.com/S_Article?id=000055515\n2. https://esupport.aspentech.com/S_Article?id=000056792"}, {"id": "000101707", "problem_statement": "Quick Template to check the Freezing point for solutions using Sensitivity Analysis in Aspen Plus", "solution": "This is a quick template to check the freezing point for solutions using Sensitivity Analysis in Aspen Plus. The temperature at which a liquid changes phase from liquid to solid at a given pressure (here, atmospheric pressure) is called the freezing point or solidification point. In this template, the components Benzene, EthylBenzene and 1,4-DiethylBenzene were taken in the Peng-Robinson fluid package. A simple heater is utilized for cooling purposes, and a sensitivity analysis is performed over the parameters. The liquid phase is the determining factor for whether the mixture is liquid or solid.
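As an illustration of how such sensitivity results can be post-processed, the short sketch below finds the freezing point as the temperature at which a solid phase first appears in the table of sensitivity cases. This is plain Python, and the function name and the (temperature, solid fraction) data are illustrative assumptions, not output from the actual template:

def freezing_point(points):
    # points: (temperature, solid fraction) pairs sorted by decreasing temperature
    for (t_hi, s_hi), (t_lo, s_lo) in zip(points, points[1:]):
        if s_hi == 0.0 and s_lo > 0.0:
            # solid first appears in this interval; report the bracket midpoint
            return 0.5 * (t_hi + t_lo)
    return None

cases = [(20.0, 0.0), (10.0, 0.0), (0.0, 0.05), (-10.0, 0.30)]  # made-up values
print(freezing_point(cases))  # 5.0, the midpoint of the bracketing interval

A finer temperature step in the sensitivity block narrows the bracketing interval accordingly.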
Users can customize it according to their requirements.\n\nInputs were taken at atmospheric pressure.\n \nSensitivity inputs can be changed according to the requirement.", "keywords": "Freezing Point, Condensate point, Sensitivity Analysis Template", "reference": "https://esupport.aspentech.com/S_Article?id=000082885"}, {"id": "000101712", "problem_statement": "What are the process template functions in Aspen HYSYS?", "solution": "Process template functions in Aspen HYSYS:\nAllow for the implementation of sub-flowsheets and template files into a given HYSYS simulation\nCan build from a blank sub-flowsheet for modularized construction of a process model\nOpen template files or import flowsheet files (.hfl) to incorporate other process models\nCan have a distinct Fluid Package, different from the parent flowsheet\nDefine Fluid Packages and assignments from the Fluid Package Associations button on the Home tab\nMake sure the transfer basis between flowsheets is reasonable", "keywords": "Process, Template, Aspen HYSYS", "reference": null}, {"id": "000101711", "problem_statement": "What are the LNG Exchanger functions in Aspen HYSYS?", "solution": "The LNG Exchanger can be used to represent heat transfer between multiple hot and cold streams\nAllows for application of different Fluid Packages to the various sides of the exchanger\nOverall material and energy balance is assured, but exchanger geometry is not accounted for\nOutlet specifications can be specified for the streams to fulfill the degrees of freedom\nOr choose to define other heat transfer conditions like minimum approach temperature and UA\nCan be integrated with Aspen Plate-Fin Exchanger for rigorous modeling of wound-coil heat exchanger equipment", "keywords": "LNG, Exchanger, Aspen HYSYS", "reference": null}, {"id": "000101709", "problem_statement": "What are the piping-specific unit operations available in Aspen Hydraulics?", "solution": "The piping-specific unit operations in Aspen Hydraulics are:\n- Pipe Segment\n- Complex Pipe Segment\n- Tee Junction Mixer\n- Tee Junction Splitter\n- Various Fittings", "keywords": "Hydraulics, Piping, Aspen HYSYS", "reference": null}, {"id": "000101705", "problem_statement": "What are the hydrate structures in Aspen HYSYS?", "solution": "Structure I Hydrate\nCan hold only small gas molecules such as CO2, methane and ethane inside the lattice\nStructure II Hydrate\nHolds intermediate-sized gas molecules such as N2 and O2 as well as hydrocarbons such as propane and iso-butane\nStructure H Hydrate\nRequires two different types of molecules\nLarge species such as 2-methylbutane and cyclo-octane and light gases such as H2S, methane and ethane", "keywords": "Hydrates, Structure, Aspen HYSYS", "reference": null}, {"id": "000101704", "problem_statement": "How many approaches are available for saturating a dry gas with water in Aspen HYSYS?", "solution": "There are two approaches:\nManually mixing the dry gas with sufficient water using a Mixer or Balance operation, separating out the excess liquids, then tuning with an Adjust operation\nUsing the Saturate unit operation extension to automatically saturate the gas stream with water\n\nFor the Manual Saturation Approach:\nAdd a stream of 100% water with a flow rate of approximately 1% of the inlet gas stream flow rate\nMix the gas and water streams using a Mixer or Balance (Component Mole Flow) operation\nSpecify the temperature and pressure for the combined stream (if using a Balance)\nFeed the two-phase mixture to a Separator\nUse an Adjust operation to manipulate the
flow rate of inlet water until the liquid flow from the Separator is a small non-zero value\nThe vapor stream leaving the Separator is now saturated with water\n\nFor using the Stream Saturator Unit Operation:\n Add it to the flowsheet like any other unit operation from the Model Palette\nRequirements for use:\nAttach streams for the inlet gas, the outlet saturated gas, and a pure water stream\nEnsure water is included in the component list", "keywords": "Saturation, Dry Gas, Aspen HYSYS", "reference": null}, {"id": "000101703", "problem_statement": "How many column options are there in Aspen HYSYS?", "solution": "In Aspen HYSYS, there are:\nSix standard templates:\nDistillation Column\nRefluxed Absorber\nAbsorber\nReboiled Absorber\nThree phase distillation column\nLiquid-liquid extraction \nShort Distillation Column\nUsed for simplified calculations\nCustom Column Template\nBuild a customized column", "keywords": "Column, Distillation, Aspen HYSYS", "reference": null}, {"id": "000101702", "problem_statement": "How to get rid of the following error message \u201cPlease insert the disk: 1\u201d while installing the ENG package?", "solution": "\"Please insert the disk: 1\" means the existing iso file has been corrupted or it might be blocked. In order to get rid of the following message:\n\n\n\nOur suggestion for you:\n1) Please re-download the ENG iso media from \"esupport.aspentech.com\" to your desktop.\n\nAfter you complete downloading the media, please right-click the media folder, select \u201cProperties\u201d and check the \u201cUnblock\u201d security option. Then, press \u201cApply\u201d and finally \u201cOk\u201d. Once done, please start the installation.", "keywords": "SLM, Installation, Desk", "reference": null}, {"id": "000101698", "problem_statement": "Clickjacking, also known as a \u201cUI redress attack\u201d, is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top-level page. Thus, the attacker is \u201chijacking\u201d clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.\nUsing a similar technique, keystrokes can also be hijacked.
With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account but are instead typing into an invisible frame controlled by the attacker.", "solution": "Aspen Unified supports iframes by design, but to avoid Clickjacking attacks you may add a mitigation in the IIS settings by adding the X-Frame-Options header to responses with the \u201cSAMEORIGIN\u201d option value.\nSteps for adding the header:\nSelect Start, select Administrative Tools, and then select Internet Information Services (IIS) Manager.\nIn the connections pane, expand the node for the server, and then expand Sites.\nSelect the web site where you want to add the custom HTTP response header.\nIn the web site pane, double-click HTTP Response Headers in the IIS section.\nIn the actions pane, select Add.\nIn the Name box, type X-Frame-Options.\nIn the Value box, type SAMEORIGIN.\nSelect OK.", "keywords": "Aspen Unified\nClickjacking\nIframe", "reference": "https://cheatsheetseries.owasp.org/cheatsheets/Clickjacking_Defense_Cheat_Sheet.html\nhttps://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors"}, {"id": "000101697", "problem_statement": "On Aspen Unified Scheduling V14 CP1 or below, the events strategy table in the results database was flawed in that it did not allow a strategy with duplicate assets, for example a fixed series strategy where tanks are repeated. Additionally, the table also did not specify the sequence of assets in the strategy. In CP2 the issue was fixed; however, the new schema conflicts with existing data.", "solution": "If you want to upgrade Aspen Unified to CP2 and you have an active AUSResults DB which is used for Aspen Unified Scheduling (not an AUPResults DB, which is used for Aspen Unified PIMS), please open SQL Server Management Studio and expand your AUSResults DB, then expand Tables, right-click the aus.EventsStrategy table and click on Select Top 1000 Rows.\n\nThen run the following query:\nDELETE TOP (1000)\n FROM [AUSResults].[aus].[EventsStrategy]\nThis will empty the table (delete its rows), but not drop it.\n\nYou may have more than 1000 rows to delete, so rerun the query with the appropriate number of records to delete. The table must be empty before you install CP2. Once you complete this step, you will be able to install CP2 and update the AUSResults DB via the Aspen Unified Configuration Manager.\nIf you did not empty the EventsStrategy table and you update to CP2, when trying to update the AUSResults DB, you will get the following error:\nFailed to apply database migrations: The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name aus.EventsStrategy and the index name 'PK_EventsStrategy'. The duplicate key value is (1, 64c7f6dde98ca6f01900dfcf, 0, 1).\nCould not create constraint or index. See previous errors.\nThe statement has been terminated.\n\nIf so, please create a new AUSResults DB and drop the old one. If it is not possible, contact esupport@aspentech.com.", "keywords": "AUS, Aspen Unified Scheduling, AUSResults DB, error, update, CP2, V14, PK_EventsStrategy, 1, 64c7f6dde98ca6f01900dfcf, 0, 1.", "reference": null}, {"id": "000101691", "problem_statement": "This article describes the steps that can be taken to revert the collinearity fix of a Master Model", "solution": "In the following example, we have a Master Model that has a collinearity fix implemented.
The following tables show the differences between a Master Model with and without the collinearity fix.\n\nWithout the collinearity fix:\n AI-2020 AI-2021 AI-2022\nFIC-2001SP -0.412672 -0.237160832 0.131835\nFIC-2002SP 0.00547831 0.36489 -0.168775\nFIC-2004SP -0.465937 -0.267772 0.148851399\n\nWith the collinearity fix:\n AI-2020 AI-2021 AI-2022\nFIC-2001SP -0.436086 -0.251996 0.141995\nFIC-2002SP 0.00547831 0.36489 -0.168775\nFIC-2004SP -0.436086 -0.251996 0.141995\n\nUnfortunately, there is no direct way to undo the collinearity fix. However, there is a way to revert the changes. The main thing about the collinearity fix is that the changes get implemented as a rotation of the gains. This rotation of the gains can be implemented and edited from the Curve Operations feature.\n\n\nOn the model shown above, a collinearity fix has been implemented. In this case we will follow the steps below to revert some of the changes.\n\n1.- Select a curve from the Master Model, right-click, and select Curve Operations.\n\n\n\n2.- Once the Curve Operations panel is open, you will observe that there are two operations: Reference and Rotate. The Reference operation is the original Master Model. The Rotate operation is the collinearity fix that has been implemented.\n\n\n3.- On the panel, select the Rotate operation and delete it. You will notice that by selecting the Rotate operation a Gain value will be displayed; changing this value will not fix the problem unless you use the original value.\n\n\n\n\n\nOnce the change is done, click OK. By doing this you will see the original gain of the model.\n\n\nThis approach will solve the problem of reverting collinearity. Unfortunately, the approach cannot be applied to the full model at once; the fix has to be applied to every curve of the Master Model that has a collinearity fix implemented.", "keywords": "DMC3 Builder, Collinearity, Master Model", "reference": null},
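As a side note on what the fix does: a quick way to see that two MV gain rows have been made collinear is to compare their directions. The sketch below is plain Python with numpy, purely illustrative and not DMC3 functionality; the numbers are the FIC-2001SP and FIC-2004SP rows from the "with the collinearity fix" table above:

import numpy as np

g1 = np.array([-0.436086, -0.251996, 0.141995])  # FIC-2001SP gains
g4 = np.array([-0.436086, -0.251996, 0.141995])  # FIC-2004SP gains

# Cosine of the angle between the two gain directions; 1.0 means perfectly collinear
cos_angle = g1 @ g4 / (np.linalg.norm(g1) * np.linalg.norm(g4))
print(cos_angle)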
{"id": "000101689", "problem_statement": "This article describes the recommended steps to address connection problems from DMC3 when it shows error 10053", "solution": "Error 10053 can be related to different causes on the server. One problem that seems to be consistent on DMC3 could be related to the packets of information that CIM-IO sends from the Online Server.\n\nThe problem can be addressed in different scenarios:\n\n1.- The communication does not happen on the DMC3 multi-tag Test Connection but is successful with one tag on Test API.\n\nIn this case, the server can respond with one tag. This could indicate that communication is OK on both sides and both servers can talk to each other; nevertheless, the server can only process a couple of tags in every request.\n\n2.- The communication works both on DMC3 and Test API, but on DMC3 there are intermittent failures, and sometimes Test API fails.\n\nIn this case, Test Connection is successful in some cases and fails in others. The network may not be heavily loaded but still presents problems.\n\nIn both cases the problem is most likely related to the loading of the network. In the case of DMC3, some parameters can be adjusted to work around this problem.\n\nIf you open the Configure Online Server application and select the IO tab, you will see the list of IO Sources that could be used to connect the controller.\n\nIf you select one of them and click Edit, it will bring up all the parameters that can be adjusted for the communication:\n\nNormally, adjusting the Timeout parameter can give all connections time to be successful. We normally suggest a value of 30 to 45, as we have seen these values work for most systems.\n\nAlso, if your system requires it, you can increase the frequency parameter to 5 or 10 seconds. This will trigger a cache file from which the information can be read when working asynchronously.\n\nAnother important parameter is the List Size; this controls the CIM-IO packet size. For TDC OPC, a value of 300 normally works fine; in the case of Yokogawa, 400 would be a good value. This parameter will most likely be different for every OPC.\n\nFinally, there is the number of Clients. A good number of clients to work with is 1. This will also prevent overloading of information from the DMC3 side.\n\nThe next picture shows a good configuration for a Yokogawa system.", "keywords": "DMC3, OPC, CIMIO", "reference": null}, {"id": "000101688", "problem_statement": "How to disable or enable automatic name generation for a block or stream in Utilities Planner?", "solution": "In Aspen Utilities Planner, you can enable or disable automatic name generation for blocks or streams. To access this option, follow the steps below:\nSelect Tools | Settings | Check or uncheck the automatic name generation option for Blocks and Streams | Click OK or Apply", "keywords": "Name, Block, Stream, Tools", "reference": null}, {"id": "000101687", "problem_statement": "What will happen when we assign Initial as the variable specification in Aspen Utilities Planner?", "solution": "In Aspen Utilities Planner, you can assign a specification to a variable either when you declare it, or after its declaration. Parameters do not have specifications, as their values are always known. You can also specify a value for a variable or parameter when you declare it or afterward.\n\nThere are three options for variable specifications:\n \n Free: A variable whose value is being solved for. If you do not specify a value, Free is assumed.\nFixed: A variable whose value is not being solved for\nInitial: A variable whose value is known and fixed at time zero for an initialization or dynamic run. Using this option is not recommended. Even if we assign the spec as Initial, the variable will behave as a free variable.", "keywords": "Spec assignment, initial, free, fixed, variable, parameter", "reference": null}, {"id": "000101686", "problem_statement": "How to add a water stream in Aspen Utilities Planner V14.0?", "solution": "In Aspen Utilities Planner, there are different types of streams available (Air Stream, Steam Stream, Fuel Stream, etc.). The Steam stream is used to represent both steam and water.\n \nSteam is the gaseous form of water. Steam is chemically the same as water. Steam differs from liquid water because it has more energy than water, and it also increases in volume. Hence, in an Aspen Utilities model, a Steam stream in the liquid state is treated as water.", "keywords": "Steam, Stream, Model, Utilities", "reference": null}, {"id": "000101685", "problem_statement": "Why is there a difference between the MW distribution in a reactor block (RCSTR) and the outlet stream?\n\nThis can be observed in the example HDPE.apwz which is released with Aspen Plus.\n\nFlowsheet\n\nReactor 1: no difference\n\nReactor 2: difference", "solution": "This is expected:\nThe block report shows the MWD of the polymer formed in a particular reactor.\nThe stream report MWD shows the cumulative molecular weight distribution of the polymer in the stream.\n\nIf you have only one reactor, these are the same. If you have multiple reactors, or if there is a recycle loop around the reactor, the two will be different.\n\nFor a simple case with two reactors in series, the MWD in the outlet stream of the second reactor is calculated by adding the MWD curve from the first reactor (weighted against the mass of polymer formed in the first reactor) to the MWD curve from the second reactor (weighted against the mass of polymer formed in the second reactor).", "keywords": "MWD, polymer, distribution", "reference": null},
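The mass-weighted combination described above can be sketched in a few lines. This is plain Python with numpy, purely illustrative; the grid, curves, and polymer masses are made-up placeholders, not values from HDPE.apwz:

import numpy as np

log_mw = np.linspace(3.0, 6.0, 61)          # common log10(MW) grid for both curves
mwd_r1 = np.exp(-((log_mw - 4.0) ** 2))     # placeholder MWD curve from reactor 1
mwd_r2 = np.exp(-((log_mw - 5.0) ** 2))     # placeholder MWD curve from reactor 2
mass_r1, mass_r2 = 600.0, 400.0             # mass of polymer formed in each reactor

w1 = mass_r1 / (mass_r1 + mass_r2)          # weight of reactor 1 polymer in the stream
w2 = mass_r2 / (mass_r1 + mass_r2)          # weight of reactor 2 polymer in the stream
cumulative_mwd = w1 * mwd_r1 + w2 * mwd_r2  # stream-report MWD after the second reactor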
{"id": "000101684", "problem_statement": "What is the significance of the Area column while entering the demand and availability profile in Aspen Utilities Planner V14.0?", "solution": "In Aspen Utilities Planner, we can use the Profiles Editor option to enter data for utility demands and equipment availability. Profile data is grouped in cases. Each case contains an equipment Availability profile, a utility Demand profile, and a period set. A period set contains any number of periods, each with its own start and end time.\n\nIn general, the Demand profile shows the demand for utilities external to the utility system flowsheet, from the process units. Demands are usually linked to feed or demand blocks in the flowsheet. The Availability profile shows availability and constraints on equipment modelled in the flowsheet. Data for demands and availabilities are entered in the form of a range \u2013 Min and Max. If the value is fixed, enter the same value for the minimum and maximum.\n\nThe Area column is only used for documentation purposes, but we must enter something in it in order to run the optimization without any error. For instance, we can simply enter the word \u201cArea\u201d in the column to keep the column occupied.\nIf we leave the column empty, the system will generate errors and stop the optimization.", "keywords": "Demand, Availability, Periods, Utility, Planner, Area", "reference": null}, {"id": "000061618", "problem_statement": "This solution outlines how to perform a clean restart of an Aspen Cim-IO Interface with Store and Forward enabled.", "solution": "The following are general guidelines for stopping and starting an Aspen Cim-IO Interface with Store and Forward enabled. The exact procedures may vary depending on the version of Cim-IO, the interface type, and the configuration. Here we assume all the tags are being updated by Cim-IO client tasks running on the Aspen InfoPlus.21 server and are connected to a single Cim-IO server hosting a single Cim-IO Interface. You can extend this for your own particular requirements (multiple logical devices and multiple Cim-IO servers etc.).\n\nSHUTDOWN THE ASPEN CIM-IO CLIENT AND SERVER\n\nCLIENT:\nSet IO_RECORD_PROCESSING=OFF for the IO Transfer records. Examples of IO Transfer record types can be seen in this linked article: Is it better to have a lot of \"small\" IoGetDef records or fewer \"large\" IoGetDef records?\nStop the Aspen Cim-IO client tasks using the Aspen InfoPlus.21 Manager. The specific names of the client tasks will differ depending on configuration. They are usually named according to the device as TSK_x_device, where x can be \"M\" for Main tasks, \"A\" for Asynchronous tasks, or \"U\" for Unsolicited tasks.\nExample: TSK_A_CIMIO_1, TSK_M_CIMIO_1, TSK_U_CIMIO_1\nRename the CIMIO_MSG.LOG files in the .\\CIM-IO\\log directory (usually C:\\Program Files (x86)\\AspenTech\\CIM-IO\\log).
A new CIMIO_MSG.LOG will be created after this point containing only new (relevant) messages.\nSERVER:\nShut down the Aspen Cim-IO Interface server: This procedure can vary depending on the configuration. In most cases Aspen Cim-IO is configured to start / stop as a service. In this case, stop the Aspen Cim-IO Manager service in services.msc, along with any additional Aspen Cim-IO Interface services if present. If using the Aspen Cim-IO Interface Manager, this can be done by right-clicking on the green circle next to the interface name and selecting \u201cStop\u201d.\nMake sure all the Aspen Cim-IO processes have stopped. When problems exist with Aspen Cim-IO, it is possible for some Aspen Cim-IO processes to remain running after the Aspen Cim-IO service(s) have been shut down. These processes may be hung or unresponsive. At this point, any running Aspen Cim-IO or asyncdlgp process should be killed manually using Task Manager.\n\n\nGo to the .\\Cim-IO\\IO directory (usually C:\\Program Files (x86)\\AspenTech\\CIM-IO\\io) and delete the Aspen Cim-IO list and lock files. Updated versions of these files will be regenerated when the transfer record is re-activated in a subsequent step.\nExample filenames: CIMIO_SCAN_LIST.I_OPC_1, CIMIO_STORE_LIST.I_OPC_1, CIMIO_UNSOL_LIST.I_OPC_1\n\n*Note: If the store file or folders mentioned here contain any data, then deleting them will result in loss of data. \nFor Cim-IO Interface versions up to V10.1, delete (or move) the Aspen Cim-IO Store file (the Cim-IO Store file could be recovered by using the approach described in this linked article: How to recover an Aspen Cim-IO Store file that will not forward by standard forwarding processes). The file is located in the .\\Cim-IO\\IO folder or another location specified in the Interface configuration.\nExample filename: CIMIO_STORE_MES.TSK_A_CIMIO_1_200\nFor Cim-IO Interface versions V11.0 and above, delete (or move) the Store Queue folders (note, the ability to recover data from these folders was only available since V12.2 (also with V12.0 ECR) - see the \"General Data Recovery with the RECOVER Utility\" section in the Aspen Cim-IO User's Guide). These can be found as sub-folders of the .\\Cim-IO\\IO folder and also in another location specified in the Interface configuration. The folder names will have a reference to either the name of the logical device or the Interface; it is best to sort the folders by date modified in File Explorer, given that all these folders would likely have been modified relatively recently (compared to the install date).\nExample folder names where the logical device is named CIMIO_1 and the Interface is named I_OPC_1: CIMIO_STORE_MES.TSK_A_CIMIO_1_200, I_OPC_1\nRename the CIMIO_MSG.LOG files in the .\\CIM-IO\\log directory (usually C:\\Program Files (x86)\\AspenTech\\CIM-IO\\log). A new CIMIO_MSG.LOG will be created after this point containing only new (relevant) messages.\n\nRESTART THE ASPEN CIM-IO SERVER AND CLIENT\nSERVER:\nStart the Aspen Cim-IO Interface server: This procedure can vary depending on the configuration. In most cases Aspen Cim-IO is configured to start / stop as a service. In this case, start the Aspen Cim-IO Manager service in services.msc, along with any additional Aspen Cim-IO Interface service (if present).
If using the Aspen Cim-IO Interface Manager, this can be done by right-clicking on the green circle next to the interface name and selecting \u201cStart\u201d.\nLook in the CIMIO_MSG.LOG for the Interface and Store & Forward startup messages.\nCLIENT:\nStart the Aspen Cim-IO client tasks using the Aspen InfoPlus.21 Manager.\nSet IO_RECORD_PROCESSING=ON for the same IO Transfer records you switched off earlier. Note, you can use SQLplus to update these fields, and the knowledge base has an example of doing so which you could customize for your own requirements; see this linked article: How do I stagger the scheduling of asynchronous get records with the same frequency evenly throughout the scanning period?\nAt this point new Aspen Cim-IO scanlist files will have been created on the server. During this initial period the records may scan slowly while the interface is rebuilding the scanlist files on the server. Additionally, the new scanlist files will contain a tag list corresponding to all tags present in the newly activated transfer records.\nVerify that the IO Transfer records start scanning at the specified frequency.\nLook for current data in the data records.\nAfter a clean startup, your records should all start scanning normally. At this point it may be a good idea to test Store and Forward by following the instructions in this linked article: How to test Aspen Cim-IO Store and Forward\n\n\nKeyWords\nwait for async\nThis article is referenced in many articles simply as 103176 or 103176-2", "keywords": null, "reference": null}, {"id": "000101682", "problem_statement": "This article describes the Deployment Model Options in DMC3 Builder.", "solution": "The Model Options on the Deployment node of DMC3 Builder are a new feature that was included starting from V12.1. The main reason for having these deployment options is to decrease the size of the file when a snapshot of the controller is taken, as well as to reduce the memory used by DMC3 Builder when the controller applications are deployed. This could be helpful to avoid further memory issues on the server if the deployment is happening directly on the Online Server.\n\nThe three supported options are:\n Retain no Case Models\nRetain only Case Models referenced in the Master Model\nRetain all Case Models (requires loading all models into memory)\n\n\nRetain no Case Models \u2013 this allows deploying the Master Model without saving any results from Model ID. The cases of the controller will still appear in the application, but they will appear as if they haven't been run.\n\n\n\nRetain only Case Models referenced in the Master Model \u2013 this option will keep only the cases that were used to build the Master Model; the rest of the cases will be deleted. Model ID results are shown, but only for the cases that were used to update the Master Model.\n\n\n\nRetain all Case Models \u2013 this option will keep all cases saved, regardless of whether they were used in the Master Model or not (a full project snapshot). This is the classic DMC3 deployment, and the option that will consume the most memory.", "keywords": "DMC3, Deployment, Models", "reference": null}, {"id": "000073630", "problem_statement": "Problem Statement\nHow to check if there are idle user connections on the server\nVersions\n2006.5 CP3, V7.1 CP1 and higher", "solution": "Idle Users Timeout\nSome customers found that after ending all Aspen Basic Engineering sessions, the Administration tool showed that users were still connected - therefore the licenses were not being freed.
At 2006.5 CP3 we implemented an idle session timeout capability which checks for idle users on a workspace and shuts them down, freeing the license.\nThis capability is deactivated by default, and may be activated by updating the following parameters in Workspace.cfg or *LibrarySet.cfg:\nServerMonitor::IdleTimeout\nThe time (in minutes) that a session can be idle before it is closed by the Server Monitor. A value of zero means that sessions will not be disconnected. Default = 0 minutes.\nServerMonitor::Interval\nThe frequency (in minutes) at which the server will execute monitoring activities, such as checking for idle sessions. Default = 0 minutes.\nWe have also enhanced journaling capabilities to help diagnose issues related to session connections and disconnections:\nTraceLevel::Connections (None/Normal/High/Low)\nOutputs trace associated with session connections and disconnections. This controls the extra session logging. New information is printed only on \"High\". The default is \"Normal\".\nExample change to StandardLibrarySet.cfg:\nServerMonitor::Interval = 10\nServerMonitor::IdleTimeout = 30\nThis would check every 10 minutes and end sessions which have been idle for 30 minutes.", "keywords": "Administration\nSession", "reference": null}, {"id": "000067216", "problem_statement": "Is it possible to model an oil shale retorting process in Aspen Plus?", "solution": "Attached is an example of a model for a fluidized-bed oil shale retorting process. It is intended to:\nProvide an example of how to model the various areas of this process\nSupply a starting set of components and physical property parameters for modeling oil shale retorting processes\nThis model is not intended for equipment design or for specifying other engineering documents without further review by a process engineer with experience of oil shale retorting processes.\n\nThis model is based on the 1986 Department of Energy (DOE) report on the oil shale retort/combustion process prepared by Ammer. The DOE report covers the development of an Aspen Plus model for the fluidized-bed retort/combustion of an eastern oil shale. In the DOE report, Ammer also identified the data needs and created a simple structure for a further, more definitive model.\nThis model uses Ammer's input data as a simulation basis to generate the preliminary results. This model includes: \nA nominal set of chemical species and property parameters for this process\n Typical process areas including oil shale preheating, oil shale retorting, spent shale combustion, separation for product oil and gas, and main streams connecting these units\nReaction kinetics of oil shale retorting\n Key process control specifications such as recycled spent shale flow rate, spent shale combustor temperature, and stoichiometric coefficients for the spent shale combustion reaction\nThe attached example file will run in V11 and higher.
This example is also shipped with Aspen Plus, and the most recent version can be found in the following directory of the Aspen Plus installation:\nC:\\Program Files\\AspenTech\\Aspen Plus Vx.x\\GUI\\Examples\\Energy\\Oil Shale\nAspen_Plus_Model_for_Oil_Shale_Retorting.apwz is a compound file containing these five files:\nAspen_Plus_Model_for_Oil_Shale_Retorting.bkp\nAspen_Plus_Model_for_Oil_Shale_Retorting.pdf\nPYROLKIN.f\nUSRPYROL.dll\nUSRPYROL.opt", "keywords": null, "reference": null}, {"id": "000100860", "problem_statement": "What is included in the Aspen In Plant Cost Estimator cost base?", "solution": "The Basis for Capital Costs includes:\n Units of measure customization.\nGeneral mechanical design rules for equipment, piping (general, material and custom), civil, steel, instrumentation, electrical, insulation and paint.\nProject costs for field supervision, domestic freight, taxes, permits, engineering, construction overhead, fees and contingency.\nWorkforce wage rates (globally and by craft), productivities, workweek, overtime, crew mixes, and craft names.\nCode of account (COA) re-definitions, additions, and allocations.\nIndexing of material costs and man-hours by COA.\nConstruction equipment rental items, rates, and durations.\nIndirect costs.", "keywords": "Cost Base, Capital Cost", "reference": null}, {"id": "000100858", "problem_statement": "How do I find the Cost Basis in Aspen Process Economic Analyzer (APEA) V14?", "solution": "To find the Cost Basis with which APEA works in V14, open APEA, look in the Help tab, and then click on \"Show Cost Basis\".\n \nYou will get a table showing the Cost Basis for that version:\n \n\nKeywords: Cost Basis, First Quarter 2022, Show Cost Basis", "keywords": null, "reference": null}, {"id": "000098159", "problem_statement": "After performing a migration or an upgrade of the Aspen Watch server, when trying to delete an ACO controller application in Aspen Watch Maker, an error message shows up saying something like \u201cThe requested task was not performed. Error writing to \"IO_PUT_CBPUB_01 63 IO_VALUE_RECORD&FLD\": Field is not changeable\u201d, and the controller does not get deleted.\nThis name can change depending on the controller that is intended to be deleted.", "solution": "The error message points to a specific record we need to identify in IP.21. To do this, go to InfoPlus.21 Administrator and, using the ribbon at the top, click on Find > InfoPlus.21 Record, then search for the record as shown in the image below:\n\nThen, the status of the IO_RECORD_PROCESSING value must be turned from ON to OFF.\n\nWhen attempting to delete unused controllers, additional records can cause similar problems, so it is necessary to identify them and change the IO_RECORD_PROCESSING value to OFF on each one.\n\nAfter performing this procedure, the controller can now be deleted successfully from the Aspen Watch Maker application.", "keywords": "Aspen Watch Performance Monitor, InfoPlus.21, Watch Maker", "reference": null}, {"id": "000101678", "problem_statement": "How to map and size equipment that we missed in auto map in the Economic Analyzer tab of Aspen Plus?", "solution": "For the Economic Analyzer, it is important to size the equipment. Sometimes, we forget to do sizing in the Simulation environment. This quick workaround helps us auto-size the equipment we missed during simulation in Aspen Plus.
\nAs a good practice, always run the model before activating the Economic Analyzer.\nAfter a successful run, and ensuring that there are no errors in the results, please activate the analyzer.\n You can see the Analyzer becomes Active after running.\n Navigate to Equipment in the Economics ribbon.\nWhile mapping, make sure that you select the following options for auto sizing.\nMap the equipment properly, name it, and cross-check the descriptions.\nYou will see that the Sizing step has a warning message; to finish the evaluation, click on the Size button and then Evaluate the project. If you wish to change some equipment specs before evaluating, you can click on View Equipment and modify as desired.\n Cross-check that sizing is done for all the equipment. If not, remap the equipment, redo the sizing, and reevaluate.\n Evaluation is in progress.\n Once the evaluation is completed, you can explore the results.", "keywords": "Mapping, Economic Analyzer, Auto sizing, Aspen Plus Economic analyser", "reference": null}, {"id": "000062856", "problem_statement": "Unable to start an external task in Aspen InfoPlus.21 Manager, with an error message similar to the screenshots below.\n\nThe output file (in the example screenshot above, the output file is called TSK_U_CIMIO_1.OUT) of the task in question logged the following error message.\n\"\u201cEXTSKINI failure. ERRCODE = -26 System error -1 interfacing external task\u201d", "solution": "For an external task, the checkbox beside \"External Task\" needs to be ticked.\nLaunch Aspen InfoPlus.21 Manager.\nDouble-click on the task name which is unable to start under the \"Defined Tasks\" section.\nTick the checkbox beside \"External Task\" highlighted in the above screenshot. (E.g. TSK_U_CIMIO_1)\nClick the UPDATE button.\nSelect the task name under the \"Defined Tasks\" section again.\nClick the RUN TASK button.", "keywords": " exited with error code = 1\nPlease check task output log and event log.\nError while starting /ASPENAPC, it shows the error:\n\nHttp failure response for http:///AspenAPCService/Configuration/GetUserProfile: 500 Internal Server Error", "solution": "Before trying these steps, check if you can access the Production Control Web Server Interface, http:///atcontrol. If the following error appears, please refer to KB 000099683 instead.\n\nServer Error in \u2018/ATControl\u2019 Application.\nCould not load type \u2018System.ServiceModel.Activation.HttpModule\u2019 from assembly \u2018System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\u2019.\n\n\n\nIf the PCWS Interface loads appropriately, but only the AspenAPC site is showing the Internal Server Error, there is likely an issue with the defined IIS Application Pool. To confirm this, you can look for \u201cIIS APPPOOL\\DefaultAppPool\u201d and \u201cClass not registered\u201d messages in the log.txt file from C:\\ProgramData\\AspenTech\\APC\\Web Server\\Log.\n\n\n\nThe correct Application Pool should be AspenAPCAppPool, as the DefaultAppPool does not support 32-bit applications. To fix this, follow these steps:\n 1. On the Web Server, open IIS Manager:\n\n\n2. Then, go to the AspenAPCService node:\n\n\n3. From the right menu, select Advanced Settings... and then click on the three dots \"...\" next to DefaultAppPool:\n\n\n4. In the next window, choose \"AspenAPCAppPool\" and click OK:\n\n\n5. Click OK again and verify you can access the AspenAPC webpage.
You don't need to restart any services or IIS.", "keywords": null, "reference": null}, {"id": "000101663", "problem_statement": "How to edit the demand forecast in the editor in Aspen Utilities Planner V14.0?", "solution": "The demand profile / availability profile can be set with the Profile Editor. Click on Update.\n\nThe Demand Forecast Editor allows the user to set the number of periods.\n\nChange the number of periods, click Calc, and then Save. After the pop-up shows, click OK.\n\nWhen going back to the Profile Editor, you can see the updated demand/availability profile with the periods.\n\n This is one way, but using MS Excel is recommended, as it is easier than this interface:\nOpen the AUP add-in in MS Excel and attach your case \nIn the Optimization group in the Excel ribbon, click Specify Periods\nSet the Number of Periods\nKeep the period length (hours) for each period \nClick OK", "keywords": "Demand Forecast, Editor, Periods, Utility, Planner", "reference": null}, {"id": "000101312", "problem_statement": "After setting up a subscriber IP.21 as a part of the tag replication process, when using Aspen Process Explorer to view the tag in a graphic, the tag value will appear for several seconds before changing to ***.", "solution": "The map record for a tag\u2019s definition record may be mapped to the incorrect field on the subscriber. Change the value of the map record field MAP_CurrentValue from IP_INPUT_VALUE to IP_VALUE.\n\nThe standard map records for tags created via IP_AnalogDef, IP_DiscreteDef and IP_TextDef are IP_AnalogMap, IP_DiscreteMap and IP_TextMap respectively. By default, the IP_INPUT_VALUE field in a tag is displayed in Process Explorer as the \u201cValue\u201d field (red box below). However, a subscriber machine receives new values in the IP_VALUE field in a replicated tag, and these should be the values displayed in Process Explorer when connected to a subscriber machine. Since the IP_INPUT_VALUE field is not updated in the subscriber for the replicated tags, Process Explorer will not display the value if its time stamp is too old, and the value of the tag will appear as ***.\n\n\n\nAspen Process Explorer uses map records within IP.21 to match a field in Process Explorer to a field in a tag in IP.21. For example, the map record IP_AnalogMap has the field called MAP_CurrentValue, and MAP_CurrentValue dictates what is displayed in the \u201cValue\u201d field in Process Explorer for graphics and plots for a tag. By default, this field has the value IP_INPUT_VALUE. In the screenshot below, the tag with the correct value in MAP_CurrentValue shows 8.80 whereas the tag with the incorrect map record shows ***.\n\n\n\nIn the screenshot below, IP_AnalogMap is highlighted on the left side. On the right inside the red box is the field MAP_CurrentValue, which corresponds to the Value field in Process Explorer, and IP_INPUT_VALUE, which should be changed to IP_VALUE in the subscriber so replicated tag values can appear normally in graphics and plots.\n\n\n\nOnce MAP_CurrentValue has been updated, be sure to restart any instances of Aspen Process Explorer, Process Graphics Editor and/or Aspen Process Graphic Studio. The effect of the change in the field may also not be apparent immediately.", "keywords": null, "reference": null}, {"id": "000101405", "problem_statement": "Reset the IP.21 Historian repository configuration to help resolve unusual behavior in history repositories, such as having multiple \"active\" filesets as shown in the screenshot below.", "solution": "Resetting the history repository via this process does not delete history.
However, this process will temporarily remove the ability to access historical data until the procedure is fully completed.\n\n1) With the IP.21 database running, open the IP.21 Administrator, expand the \"Historian\" object (screenshot below) and review the names and number of repositories. Any repository not named TSK_DHIS or TSK_DHIS_AGGR will need to be recreated during the reset process.\n\n\n2) Right-click any repository and click \"Properties\".\n\n\n\n3) Write down or take screenshots of the repository settings under each tab (screenshot below). This should be done for every repository to ensure the settings are restored the same way. Even for the default TSK_DHIS and TSK_DHIS_AGGR, this is recommended in case the default settings for those two repositories have been changed.\n\n\n\n4) Check the number of filesets in each repository. This is the number of filesets that will need to be recreated.\n\n\n\n5) Close the IP.21 Administrator and shut down IP.21 using the IP.21 Manager (screenshot below).\n\n\n\n6) Navigate to \u201cC:\\ProgramData\\AspenTech\\InfoPlus.21\\c21\\h21\\dat\u201d and move or delete config.dat and tune.dat. Back up map.dat but do NOT move or delete map.dat.\n\nWhat information is contained in Config.dat, Map.dat, and Tune.dat?\nhttps://esupport.aspentech.com/S_Article?id=000099690\n\n7) Open the IP.21 Manager and start the IP.21 database. New config.dat and tune.dat files will be generated.\n\n\n\n8) Open the IP.21 Administrator, right-click the \"Historian\" object and select \"Add Repository...\"\n\n\n\n9) Review the previously recorded repository settings and make sure to configure the new repository exactly the same way. This is also a good time to check if there are multiple repositories saving to the same folder on disk. Such a scenario should be prevented.\n\n\n\n10) Right-click the newly created repository (it should have a grey icon) and click \"Add File Sets...\". \n\n\n\n11) Be sure the values in \"Next File Set to Create\" and \"Number of File Sets to Create\" (first screenshot below) are correct for the existing filesets. If unsure, verify by navigating to the location on disk of the filesets (second screenshot below).\n\n \n\n12) Once the appropriate number of filesets has been created for all the repositories, restart the IP.21 database.\n\n13) Upon restarting the IP.21 database, open the IP.21 Administrator, expand the \"Historian\" object, and verify that all repositories have a green icon and that filesets within each repository include the proper Start Date and End Date. Note that information about individual filesets is determined by the \"arc\" files corresponding to each fileset.", "keywords": "Config.dat\nMap.dat\nTune.dat\nHistorian", "reference": null}, {"id": "000081518", "problem_statement": "Why is the vapor composition Y needed in the physical property monitor user subroutine calls for liquid fugacity as given below?\nCALL PPMON_FUGLY (T, P, X, Y, N, IDX, NBOPST, KDIAG, KPHI, PHI, DPHI, KER)", "solution": "The vapor composition (Y) was added years ago to accommodate user K-value routines (PPMON_KVL) which may require both X and Y in estimating the K-values. In Aspen Plus, the K-values (KVL) are calculated by the ratio of the fugacity coefficients, KVL(i) = PHILMX(i) / PHIVMX(i). If the vapor fugacity coefficient is set to 1 by using ideal gas, then the liquid fugacity coefficient is the same as the K-value.
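As a small numeric illustration of the ratio quoted above (plain Python; the coefficient values are arbitrary placeholders, not Aspen Plus output):

phi_liquid = [0.85, 1.40, 2.10]  # liquid fugacity coefficients PHILMX(i), per component
phi_vapor = [1.0, 1.0, 1.0]      # ideal gas assumption: PHIVMX(i) = 1

k_values = [pl / pv for pl, pv in zip(phi_liquid, phi_vapor)]
print(k_values)  # with an ideal gas vapor phase, the K-values equal the liquid coefficients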
In most cases, you can call PPMON_FUGLY with both X and Y pointing to the same composition vector (X).", "keywords": "fortran\nsubroutine\nfugly\nfugacity", "reference": "VSTS 458264"}, {"id": "000085459", "problem_statement": "Reporting the upper and lower flammability data of pure components at specific temperatures is important since there are published temperature correction correlations for explosive limit data. How do you determine the explosive point for components in Aspen Plus?", "solution": "The flammability limits for each component are available in the databanks as FLML (Lower Flammability Limit) and FLMU (Upper Flammability Limit).\n\nFLML and FLMU can be retrieved in the graphical user interface (GUI) with the following steps:\nOn the Properties Customize | User Parameters form, add FLML and FLMU to the Parameter name list (they will show as Pure Component Parameters).\nClick on Retrieve Parameters in the Tools area of the Home ribbon and the scalar values will be written back to the forms.\n FLML and FLMU are volume % in air. Note that the conditions for FLML and FLMU are 25\u00b0C and 1 atm pressure. Furthermore, the flammability limits are for a single fuel. When the flammability limits are of interest at conditions removed from standard conditions, these values must be corrected for temperature and pressure. Correlations exist to modify flammability limits, for example in Crowl's Chemical Process Safety, but these correlations are based on few data points and only on light gases like methane, ethane, propane, and ethylene. The only true way to know the flammability limits at elevated temperature and pressure is by experiment.\nAs stated above, flammability limits are for a single fuel. When considering a mixture of fuels, Le Chatelier's equation is used to develop the flammability limits for the gas mixture based on the standard-condition single-fuel values. Again, this is a situation where an equation is available, but the results tend to be poor. Experimental determination for gas mixtures at elevated temperature and pressure is the only way to be certain of safe conditions. \n\nAnother approach is to use an RGIBBS to predict the flammability limit. See knowledge document 56274 or the standalone Excel calculation for the adiabatic flame temperature, knowledge document 56156, for more information.\nNote that the temperature definition of FLML and FLMU is something we did not capture in our databank yet. There is a temperature corresponding to FLML and one for FLMU that will be added to the databank in the future. The reference temperature is important because there are published temperature correction correlations for explosive limit data. If everyone using Aspen Plus assumes that the databank volume percent data values for a pure component are reported at the \"standard\" temperature of 77 F (25 C), then there is a good chance that error may be introduced into an explosive hazards analysis.
Since the lower and upper explosive limits are generally used to evaluate the relative safety of a system (i.e., whether a gas or gas mixture is an explosive hazard), this would be particularly important.\nTo use these parameters in a Fortran user routine, you must complete step 1 (from above) and then use the following code:\n#include \"dms_plx.cmn\"\n#include \"ppexex_user.cmn\"\nC Locate PLEX offset\n LFL = DMS_IFCMNC('FLML')\nC Locate offset for component I\n LFLI = LFL + I\nC Access property data from the Plex\n FLML = B(LFLI)\n KeyWords\nFLML\nFLMU\nexplosive\nPlex", "keywords": null, "reference": null}, {"id": "000052081", "problem_statement": "Can I use the Aspen Plus reactor models for electrolyte systems? Are there any restrictions?", "solution": "There are some restrictions applicable to the usage of reactors in electrolytic systems. In general, the apparent approach can be used for a wider range of reactor models.\n Apparent Approach\nRGIBBS can be used for vapor phase reactions only.\nAll other reactors can be used freely.\nReactions should be among apparent components.\n True Approach\nRSTOIC and RYIELD can be used freely.\n REQUIL cannot be used.\n RBATCH can be used if kinetic reactions are not present.\nWhen modeling electrolyte systems with the true component approach, RBatch calculations will be incorrect, or you may encounter convergence difficulties, if there are components that participate in both reactions and chemistry. \n RGIBBS can be used in electrolyte systems for liquid-phase reactions only. It cannot be used for vapor or solid-phase reactions.\nFor RGIBBS, there will be an error message similar to the following:\n*** SEVERE ERROR\nATOM E- IS PRESENT IN THE FEED BUT NOT AMONG THE PRODUCTS THE BLOCK MASS BALANCE WILL BE INCORRECT\nE- refers to the electron, and is part of the \"atoms\" which make up an ion.\n RCSTR and RPLUG can be used with restrictions.\nThe following restrictions apply:\n1. No electrolytic equilibrium reactions are allowed in the Reactions.\nThis is NOT allowed:\nREACTIONS : H2O <----> H+ + OH-\n 2. The reactions specified cannot involve electrolyte species.\n(Either there is no Chemistry paragraph or there is no species that participates in both Chemistry and Reactions.)\n\nThis is allowed:\nCHEMISTRY : H2O <----> H+ + OH-\nREACTIONS : A ----> B\n\n This is NOT allowed:\n CHEMISTRY : H2O <----> H+ + OH-\n REACTIONS : H2O ----> B\nTo work around these restrictions, it is possible to use Reactions only instead of Chemistry. Using an RCSTR, it is possible to combine kinetic reactions and equilibrium chemistry into one reaction ID (either of type GENERAL or POWERLAW) and not use Chemistry for that block. Along with the kinetic reactions, define the chemistry reactions in the reaction paragraph as EQUILIBRIUM reactions and set the Concentration basis to Mole Gamma to use the parameters from the Chemistry.", "keywords": null, "reference": null},
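A toy illustration of the key restriction above, that no species may participate in both Chemistry and kinetic Reactions (plain Python; the species sets come from the allowed/not-allowed examples, everything else is illustrative):

chemistry_species = {"H2O", "H+", "OH-"}   # species in the Chemistry paragraph
reaction_species = {"A", "B"}              # allowed case; {"H2O", "B"} would not be

overlap = chemistry_species & reaction_species
print("allowed" if not overlap else "not allowed, overlap: " + ", ".join(overlap))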
{"id": "000101652", "problem_statement": "The BIODIESEL databank has components with names such as TAG-AAL and TAG-LMP. What do these mean?", "solution": "The components in the biodiesel databank are mostly glycerol derivatives.\nThese are named based on the one, two, or three fatty acid groups attached to glycerol.\nThese fatty acids are used in these components:\n Name | Formula of isolated fatty acid | Combining form | Abbreviation in TAG\nLauric Acid | CH3(CH2)10COOH | -laur- | L\nMyristic Acid | CH3(CH2)12COOH | -myrist- | M\nPalmitic Acid | CH3(CH2)14COOH | -palmit- | P\nStearic Acid | CH3(CH2)16COOH | -stear- | S\nArachidic Acid | CH3(CH2)18COOH | -arachid- | A\nOleic Acid | CH3(CH2)7CH=CH(CH2)7COOH | -ole- | O\nLinoleic Acid | CH3(CH2)4CH=CHCH2CH=CH(CH2)7COOH | -linole- | LI\n\u03b1-Linolenic Acid | CH3CH2CH=CHCH2CH=CHCH2CH=CH(CH2)7COOH | -linolen- | LN\n\nComponents with only one fatty acid (possibly appearing two or three times) are named with a prefix (mono-, di-, or tri-), digit prefixes indicating the positions where necessary, and the combining form for the fatty acid, such as 1-monolaurin, 1,2-dilaurin, and trilaurin.\n\nComponents with two fatty acids each appearing once are named in the format SN-1-lauro-2-myristin, using the digit prefixes for the positions and the combining forms of the fatty acids.\n\nComponents with three fatty acids not all the same are named in the format TAG-LMP (representing lauric, myristic, and palmitic acids in that order) using the abbreviations from the table above, where TAG stands for triacylglyceride.\n\nThe help has been updated for V14.", "keywords": null, "reference": "VSTS 802923"},
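A small sketch of how the TAG naming scheme above can be decoded programmatically (plain Python; the parsing helper is illustrative, not an AspenTech tool). Note that the two-letter abbreviations LI and LN must be matched before the one-letter ones:

# Abbreviation table from the article, ordered so two-letter codes match first
ABBREV = [("LI", "linoleic"), ("LN", "linolenic"), ("L", "lauric"), ("M", "myristic"),
          ("P", "palmitic"), ("S", "stearic"), ("A", "arachidic"), ("O", "oleic")]

def decode_tag(name):
    # e.g. "TAG-LMP" -> ["lauric", "myristic", "palmitic"]
    code = name[4:] if name.startswith("TAG-") else name
    acids = []
    while code:
        for abbrev, acid in ABBREV:
            if code.startswith(abbrev):
                acids.append(acid)
                code = code[len(abbrev):]
                break
        else:
            raise ValueError("unknown abbreviation in " + name)
    return acids

print(decode_tag("TAG-LMP"))  # ['lauric', 'myristic', 'palmitic']
print(decode_tag("TAG-AAL"))  # ['arachidic', 'arachidic', 'lauric']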
{"id": "000101423", "problem_statement": "What is the HTC Database in AEA?", "solution": "The Heat Transfer Coefficient (HTC) Database view contains a list of default heat transfer coefficient (HTC) values for some common fluids.\nBy default, AEA calculates the Overall U based upon the stream side film coefficient (HTC), which in turn is calculated by using stream properties.\nFor further details, refer to the article:\n\nHow can I specify the Overall U for a Heat Exchanger on Aspen Energy Analyzer (AEA)?\nhttps://esupport.aspentech.com/S_Article?id=000094241\n\nIn addition, users can select the HTC value from the HTC database, modify the default HTC values already available, or make their own HTC values available by adding them in the HTC Database view.\nThe default values for the heat transfer coefficients (HTC) in the utility database are taken from Table 3.3 - Film coefficients used for shell-and-tube heat exchangers, on page 146 of the User Guide on Process Integration for the Efficient Use of Energy, Institution of Chemical Engineers, Rugby, UK (1994).", "keywords": "AEA, HTC Database, Heat Integration", "reference": null}, {"id": "000101420", "problem_statement": "Can I add utilities such as high pressure steam in Aspen Plus?", "solution": "In Aspen Plus, users may tag and designate material and energy streams as utilities.\nWith this information, Aspen Plus can calculate the utility consumption per utility type as well as the total hot/cold utility consumption in the flowsheet.\n\nAspen HYSYS provides a number of built-in utilities with defined heat capacity and temperature ranges as well as other physical properties. Users can customize this feature by adding user-defined utilities.\n\nTo support this functionality, a process utility manager user interface is provided (Home tab | Process Utility Manager).\nThis functionality is provided through the flowsheet summary (Home tab | Flowsheet).\n\n\nCan I add a process utility stream such as high pressure steam in Aspen HYSYS?\nhttps://esupport.aspentech.com/S_Article?id=000088782\n\nHow are utilities specified?\nhttps://esupport.aspentech.com/S_Article?id=000058495", "keywords": "utility, steam, cooling water, utility stream", "reference": null}, {"id": "000100774", "problem_statement": "How to edit SQL Databases directly in Aspen Utilities Planner V12.1", "solution": "Aspen Utilities Planner V12.1 uses SQL LocalDB to manage the optimization databases.\nLocalDB is part of SQL Server Express and is installed with Aspen Utilities Planner. To view or modify the SQL optimization databases:\n\n1. Download and install SQL Server Management Studio Express (SSMS) 2014 or later. This is a tool provided by Microsoft to manage SQL databases.\n\n2. Connect to the SQL server. Specify \"(localdb)\\AUPInstance39\" (*) in the Server name field.\n(*) Use the instance name that matches your product version if it is different.\n \n\n3. Expand the Databases node in the tree on the left side of the window. Right-click Databases, click Attach, and select the databases that you want to modify.\n\n\n4. Select the attached database that you want to modify and expand Tables under it to view all its tables.\n\n\n5. Right-click any table and click Select Top 1000 Rows to view the table or Edit Top 200 Rows to edit the table. You can also click New Query in the toolbar to create a query.\n\n\n6. Right-click the database and click Tasks | Detach to detach the database when you are done.", "keywords": "AUP, SQLDatabase, SSMS", "reference": null}, {"id": "000101651", "problem_statement": "Is there anything that can be done if an .apwz is corrupted and does not open in Aspen Plus?\nSometimes there is a message: Unknown format for Aspen Plus Document.", "solution": "Aspen Plus Compound (.apwz) files are essentially .zip files.\nOften the files can be unzipped and recovered. Often there is a corrupt .apw file; however, there is usually a backup file that can be opened.\n\nSteps:\nMake a copy of the .apwz and rename it to .zip.\nOpen the .zip file using WinZip or 7-Zip and extract the backup file and any other referenced files.\nIf the .bkp file has an extra .backup extension, get rid of it and keep just the .bkp extension.\nTry to open the .bkp file.\n\nIt is recommended that the .bkp file is used as the reference file type rather than the .apw file.\nIn File | Options, use the radio button to select .bkp for the \"Compound file default reference file type\":", "keywords": null, "reference": null}, {"id": "000101384", "problem_statement": "How to set up/group variable properties in the ASW Organizer in Aspen Simulation Workbook.", "solution": "The ASW Organizer is a tool added to Excel when Aspen Simulation Workbook is installed.\nThe Organizer is used as a central location to define, retrieve, sort, and organize model variables and process data tags. Within the Organizer, the Model Author can view all of the properties associated with each variable and tag.\n\n\n\nThe ASW Organizer Variable Grid displays variables as rows and variable attributes as columns. By default, the Organizer displays a subset of the available variable attributes.
", "keywords": null, "reference": null}, {"id": "000101384", "problem_statement": "How to set up/group variable properties in the ASW Organizer in Aspen Simulation Workbook.", "solution": "The ASW Organizer is a tool added to Excel when Aspen Simulation Workbook is installed.\nThe Organizer is used as a central location to define, retrieve, sort, and organize model variables and process data tags. Within the Organizer, the Model Author can view all of the properties associated with each variable and tag.\n\n\n\nThe ASW Organizer Variable Grid displays variables as rows and variable attributes as columns. By default, the Organizer displays a subset of the available variable attributes. Use the Column Customization button to get a list of available attributes.\n+ Adding Columns. To add a new column to the variable grid, click and hold the column name (variable attribute name) from the list of available attributes, drag the header into the appropriate location of the variable grid, and release.\n+ Removing Columns. To remove a column from the grid, click and hold the column header and drag it onto the column customization list.\n+ Moving Columns. To move a column, select and drag the column header to a new location in the grid.\n+ Resizing Columns. Use the Best Fit button to automatically resize all columns in the variable grid. The width of individual columns can be adjusted by dragging the edge of the column left or right in the column header.\n\nFor example:\n1. Add the Group attribute as a column.\n\n2. Drag the Group column to the column header.\n\n3. Move and position the Group attribute on the column header", "keywords": "ASW organizer, Column customization, Column header", "reference": null}, {"id": "000100772", "problem_statement": "How is absolute weight calculated when the measurement value is zero?", "solution": "The data reconciliation method in Aspen Utilities Planner is as follows:\n\n\nAbsolute weight is defined as follows. \n\n\nWhen the measurement value is 0, the absolute weight cannot be calculated. \nTherefore, when the measurement value is less than 0.01, it is replaced with 0.01. \nThe calculation logic for absolute weight is the following. \n===========\nIf measurement < 0.01 then absolute weight = Sqrt (relative weight / 0.01)\nelse absolute weight = Sqrt (relative weight / measurement)", "keywords": "Absolute weight, Data Reconciliation", "reference": null}, {"id": "000101558", "problem_statement": "When using ABE web explorer, the software communicates with a database on a server. This server connection can fail for multiple reasons.\nIn this article, you will find troubleshooting suggestions for the server connection error HTTP Error 503", "solution": "HTTP Error 503 occurs because the ABEAppPool is not running.\nThe following checks are recommended to resolve this error.\nOpen IIS (Internet Information Services Manager) from the Windows Start menu\nSelect the Application Pools node\nYou will see the ABEAppPool, and the status is most likely "Stopped"\nRight-click the ABEAppPool and select the Start option\nThen try to connect again\nIf you still have the error below, then it is because the ABEAppPool has stopped again, and this is likely due to incorrect credentials\nThe user account running the ABEAppPool most likely had a password changed. You can update the identity on the ABEAppPool by right-clicking and selecting "Advanced settings" from the context menu\nClick on the button in the identity field and re-enter the username and password, and then try to start the application pool again.\nHere is an image of what the App pool looks like:\n\n\nAlways ensure that the latest Cumulative Patch is installed,\nAnd also, see the following KB article for server connection: https://esupport.aspentech.com/S_Article?id=000100761
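\n\nWhere scripting is preferred, the application pool can also be started from an elevated command prompt with the built-in appcmd tool (a sketch; assumes the default inetsrv location):\n\nrem Start the ABE application pool without opening IIS Manager\n%windir%\system32\inetsrv\appcmd start apppool /apppool.name:ABEAppPool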
", "keywords": "ABE server, workspaces, connection, HTTP Error 503", "reference": null}, {"id": "000092751", "problem_statement": "Aspen Shell & Tube Exchanger can be linked to Aspen Plus to perform detailed analysis of heat exchangers. Calculations may be performed to determine a Design for a given duty, Check/Rate a given heat exchanger for a duty to see if it is over- or under-surfaced, or perform simulation calculations for an exchanger to determine the outlet conditions (temperatures and pressure drops) if the inlet conditions (temperature, pressure, and mass flowrate) are known.\nAs in the standalone Aspen Shell & Tube Exchanger program, the Advanced or Standard calculation methods can be set. This solution describes how this may be changed for a file embedded in Aspen Plus.", "solution": "For a HeatX block within Aspen Plus, in the Setup | Blocks | Specification form, the Shell & Tube program can be used for different types of calculations (Design, Rating, Simulation, and Maximum Fouling).\nWhen a Shell & Tube exchanger has been set, then on the Setup | Blocks | Specification | EDR Option | Calculation Options tab, the calculation mode can be set and, if necessary, some of the convergence parameters set. There are three Calculation Methods available: Standard, Decided by EDR, and Advanced.\n\n\nThe Calculation Method and Convergence Options can be seen inside the EDR Browser | Input | Program Options | Calculation Options. \nHowever, note that if the Calculation Method and Convergence Option are changed within the EDR Browser of Aspen Plus, then when the simulation is run again within Aspen Plus, the input items are reset as given in the EDR Options.\nIf you want to run with the local values set within EDR, you can check Enable Run and then select Run from the drop-down to the right; EDR will then run locally and not cause the whole Aspen Plus simulation to resolve.", "keywords": "Aspen Plus\nAspen Shell and Tube\nAspen EDR\nHeat Exchanger", "reference": null}, {"id": "000101646", "problem_statement": "How do I connect a stream from the main flowsheet to feed into the Hierarchy and transfer only a subset of the components into the hierarchy? I have tried creating a Component List (COMP-LIST), but there are parameter errors about some of the components not in the list.", "solution": "It is possible to use a Component List to limit the number of components *displayed* in the input and results within the hierarchy. The Component List should include all of the components present. Component List is a feature in the User Interface and does not affect the engine calculations. This means that Aspen Plus does not delete the compositions of the components not in the list that are in feed streams to the hierarchy, since this would cause a mass imbalance. These components are not seen in the Stream Report in the Hierarchy; however, they can be seen when viewing the streams in the Plant level if the stream is selected. 
This can cause confusion if all of the components present are not in the Component List (COMP-LIST).\n\nIn the attached simulation, as an illustration of the issue, the VAPOR stream is sent to a Hierarchy that uses the Steam Tables for properties, where WATER is the only component in the Component List even though HCl is present in the VAPOR stream.\n\n\n\nIf the HCl or any other component is not desired in the Hierarchy, a SEP or SEP2 block can be used to remove all of the undesired components.", "keywords": null, "reference": null}, {"id": "000101495", "problem_statement": "This is a corollary to another KB article: https://esupport.aspentech.com/S_Article?id=000101141, where the error 1920 was caused by a port already being taken by a different service.\n\nFirst, confirm that the user performing the installation has administrator privileges.\n\nSecondly, confirm that this issue is not due to the default Process Pulse ports being taken by executing (for Data Collector) the following command in Command Prompt:\nnetstat -na | find "8005"\nIf no service is listening on this port, then proceed with this article; otherwise, refer to https://esupport.aspentech.com/S_Article?id=000101141 and change the default ports during the installation process.\n\nDuring the Process Pulse installation of services, you may come across: Error 1920 Service Aspen Process Pulse - Data Collector UPPDCFService failed to start. Verify that you have sufficient privileges to start system services.", "solution": "Open Services\nSearch for Net.Tcp Port Sharing Service > Right-click Start (or Restart if it is already running)\nRight-click Net.Tcp Port Sharing Service > Properties > Change Startup type to Automatic\nGo through the installer again and it should succeed", "keywords": null, "reference": null}, {"id": "000101639", "problem_statement": "Is it possible to turn off mass balance checking in a User unit operation block?\nSometimes the USER or USER2 unit operation block intentionally does not mass balance, and it would be nice not to have warnings every time that the block executes.", "solution": "It is possible in the subroutine Fortran code to set the USER_BALMAS integer variable in the PPEXEC_USER common block to a value of -1 to skip the mass balance check.\n\nUSER_BALMAS\nInteger Variable in PPEXEC_USER Common \nWhether Aspen Plus should perform a mass balance check on completion of a user unit operation model and report warnings on mass balance errors\n0 = Perform mass balance check (default)\n-1 = Do not perform mass balance check\n\nThe default of zero will trigger a mass balance check and will result in a warning in case of mass imbalance.\nE.g.\n\n * WARNING\n BLOCK B1 IS NOT IN MASS BALANCE:\n MASS INLET FLOW = 0.12599788E-03, MASS OUTLET FLOW = 0.12599788E-02\n RELATIVE DIFFERENCE = 0.90000000E+01\n CHECK USER SUBROUTINE\n\nIf the user model sets the variable to -1, the call will be skipped and no message will be issued. \n\nAttached is an example based on our USER model of a flash that multiplies the flow by the third Real Variable REAL(3) in the USER block and skips the mass balance check with USER_BALMAS = -1 in the code.\n\n#include "ppexec_user.cmn"\n\nc turn off mass balance checking\n user_balmas = -1\n\nc multiply by REAL(3)\n\n mult=1\n if (REAL(3) .LT. 1e20 .AND. REAL(3) .ge. 
0) mult=real(3)\n\nC Write a single confirmation line back to the control panel.\n WRITE(MAXWRT_MAXBUF(1), *) ' Mult = ',mult\n CALL DMS_WRTTRM (1)\n\n DO I=1, NCOMP_NCC\n SOUT1(I)= mult*SIN1(I)\n END DO", "keywords": "USER\nUSER2\nCQ00452260", "reference": null}, {"id": "000101512", "problem_statement": "How to toggle between the default APM license type and token licenses in Aspen Fidelis?\nWhen setting up Aspen Fidelis licensing, it is required to specify the license type in order for the correct license to get checked out. In order to do this, users will need to utilize the Fidelis License Declaration utility.\nIf users do not select the correct license type, they are going to get an error that tells them "License Checkout Failed" even if the correct license is installed", "solution": "Open Fidelis License Declaration from within the Windows Menu\nSelect the correct License type\nOpen Aspen Fidelis and it will now check out the correct License and no longer show the error message", "keywords": "License failed\nWrong license\nBuckets\nSLM", "reference": null}, {"id": "000101506", "problem_statement": "How to clean the data in a historian before importing it into Aspen ProMV?\nIt is often required to clean the data available in a data historian before importing it into Aspen ProMV. Using aspenONE Process Explorer, ranges of undesired data can be excluded in order to import only the desired data into Aspen ProMV Desktop", "solution": "Open Aspen ProMV Online\nSelect AspenONE Process Explorer from the option in the top right corner visible when selecting the wrench icon\nOn the dashboard select the Process Explorer icon\nType the tag name into the tag selection filter and select all the required tags from the menu; the tags will be trended below\nActivate the Cleaning rules menu by clicking on the pencil symbol in the bottom left corner above the tag list, followed by the last icon with the brush symbol.\nEliminate unwanted data using the Manual range editor and marking the unwanted data as Bad\n\n\n Select Add once satisfied with the selection range; the excluded data will show in yellow\nIn the next menu confirm the selection by clicking OK\nSelect the export data option and use the export to csv option\nCSV files will be generated with the clean data and can be downloaded.\nThis data can now be used to import into ProMV Desktop using the usual workflow.", "keywords": "Data audit\nData clean up\nRemoving bad data\nSelecting data", "reference": null}, {"id": "000101499", "problem_statement": "When viewing agents in Aspen ProMV Online, all models show:\n"Calculation error. Read tag data failure"\nThe RabbitMQ error logs show:\n"[Error] Executor.cs::PublishRunningState Failed to publish running state: RabbitMQ.Client.Exceptions.ChannelAllocationException: The connection cannot support any more channels. Consider creating a new connection"\nLogging into the RabbitMQ Management site shows the following:\nAll connections are either blocking or blocked\nNo messages are being passed, and the measured disk space is extremely low, even though the available storage space on the machine is enough", "solution": "The lack of perceived disk space found by RabbitMQ causes it to block all connections due to lack of resources. In this case, this was due to a non-English machine with non-ASCII characters in its paths. This issue was fixed in RabbitMQ V3.9.13 (https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.9.13):\n\nUpgrading from V3.9.12 to V3.9.13 resolves the issue.
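\n\nTo confirm which RabbitMQ version is currently installed before upgrading, the broker status can be queried from a command prompt (a sketch; assumes rabbitmqctl is on the PATH or is run from the RabbitMQ sbin folder):\n\nrem Reports broker details, including the installed RabbitMQ version\nrabbitmqctl status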
", "keywords": null, "reference": null}, {"id": "000101435", "problem_statement": "For Mtell Alert Manager, the URL used to access the MAM dashboard, which is displayed in the alert emails, is set during the configuration process. But for users who want both Mtell View and Mtell Alert Manager links in their emails, these two links may be different depending on how the server name is defined. For example, the Mtell View link may be http://server-name/Aspentech/AspenMtell/MtellView/, while the MAM link is http://server-name.corp/MAM. Clicking the Mtell View link will lead to an error and require manually adding .corp or a different suffix behind the server name.", "solution": "By default, Mtell View uses the Server Name defined under System Manager > Settings > System Settings > General. But different corporate networks may require extensions such as *.corp or other customized versions to be added onto the server name in order to access certain URLs. This is automatically done for MAM through the configuration app. For Mtell View, go to System Manager > Settings > System Settings > Reporting, and then change the field "Server Name for Email Links" to the full server name, for example "server1.aspentech.corp" instead of just "server1".", "keywords": null, "reference": null}, {"id": "000101418", "problem_statement": "Often occurring after upgrades, you may see a dialog box after trying to log in to Process Pulse which simply says 'Failure'.", "solution": "This error typically occurs when a previous Process Pulse installation is uninstalled, then a new version is installed without going through the upgrade process. Part of the upgrade involves updating the SQL database, which a "clean" installation bypasses.\nYou can verify if this database update took place by navigating to C:\ProgramData\AspenTech\CAMO\EPP\DatabaseCreationLog, then opening the Database text file, which should show something similar to what is shown below. In this example, a V12.2 Process Pulse database is successfully migrated to V14. If your Database file is missing logs indicating an upgrade to your newest version, then you will need to move to the next step.\nUninstall the most recent version of Process Pulse, then install the previous version. Go through the new version installer you plan to upgrade to and choose the Upgrade/Repair option. This should launch the database migration process and resolve the login 'Failure' error.", "keywords": null, "reference": null}, {"id": "000101373", "problem_statement": "The ENVI (Environment for Visualizing Images) format is the standard format for hyperspectral images in Aspen Unscrambler HSI. It contains general raster data arranged as a binary sequence. 
When opening the ENVI data files through Unscrambler HSI, they must be accompanied by an associated header file (extension .hdr) containing metadata for the image in ASCII format (in the same directory).\n\nHowever, some instruments or processing packages may not have ENVI as an export format.", "solution": "It is recommended to first verify if the processing or instrumentation software used to generate the hyperspectral imaging data can output to ENVI format. However, if this is not an option, there are a number of available third-party software packages which may be capable of converting to ENVI. Note that AspenTech is not associated with the following software products.\nGDAL is a translator library for raster data formats, and the gdal_translate command is capable of converting between the formats listed on their webpage here: https://gdal.org/drivers/raster/index.html.\nThe MATLAB command enviwrite can be used to write hyperspectral data to the ENVI format: https://www.mathworks.com/help/images/ref/hypercube.enviwrite.html\nThere are also a number of open-source Python projects capable of reading/writing ENVI files, such as Spectral Python.
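\n\nFor example, a cube stored as GeoTIFF could be converted with GDAL's gdal_translate (a sketch; the file names are hypothetical and GDAL must be installed separately):\n\nrem Write the raster in ENVI format (produces output.dat plus an output.hdr header)\ngdal_translate -of ENVI input.tif output.dat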
\nOnce converted, make sure to import the data with both the ENVI data file and its associated .hdr file in the same location.", "keywords": null, "reference": null}, {"id": "000101107", "problem_statement": "When querying a tag either from Map Sensors in Mtell System Manager or quick SPC in Mtell Agent Builder, tag data will be in a loading state for a long time before it times out. Live agents cannot be processed and appear to be in an 'Awaiting Data Acquisition' status. When looking through the logs in System Health, a connection timeout error message was found. \nWhen checking the data source connection in System Manager and the Honeywell PHD adapter, both show Connection Test Successful.", "solution": "Firstly, the above connection timeout error message suggests that the query to the historian was made but no response was received. Both the data source connection in System Manager and the Honeywell PHD adapter show Connection Successful, indicating that Mtell is able to communicate with the adapter, but this does not guarantee a good connection to the historian. \n\nA possible cause for this problem is the application pool configuration in IIS. Check if the AspenMtell_Phd application pool is used by an application. If it is, it will show 1 under Applications; if not, it will show 0. Below is an example when the AspenMtell_Phd application pool was not properly configured. \n\nNext, browse to Sites/Default Web Site/AspenTech/AspenMtell/Adapter/HoneywellPHD and click on Advanced Settings... to open the Advanced Settings window. Make sure AspenMtell_Phd is selected in the Application Pool box.\n\nAfter that, check if AspenMtell_Phd in the application pool is now showing 1 under Applications. \n\nVerify that this has been resolved by browsing historian tags with quick SPC.", "keywords": "Honeywell PHD\nAgents not processing\nConnection issue\nSystem.ApplicationException: Connection Timeout\nAgent Service", "reference": null}, {"id": "000101090", "problem_statement": "Aspen Process Pulse uses .NET 4.8 as a dependency, and reinstalling it may help to resolve certain issues. However, on newer versions of Windows Server (this particular issue was observed on Windows Server 2022), uninstalling .NET 4.8 and then reinstalling leads to the message ".NET framework 4.8 is already installed on this computer", which prevents the user from installing .NET 4.8 properly.", "solution": "Both Server Manager and PowerShell depend on the .NET Framework 4.8, so uninstalling it leads to a dependency conflict: when trying to reinstall .NET 4.8, the system will try to launch Server Manager, which no longer has all of its dependencies.\n\nIn order to properly reinstall .NET 4.8, download the offline installer (https://support.microsoft.com/en-us/topic/microsoft-net-framework-4-8-offline-installer-for-windows-9d23f658-3b97-68ab-d013-aa3c3e7495e0), then execute the following commands through Command Prompt in the same directory as the offline installer:\n"ndp48-x86-x64-allos-enu.exe" /q /norestart /ChainingPackage ADMINDEPLOYMENT\nDISM /online /enable-feature /featurename:NETfx4ServerFeatures /all\nDISM /online /enable-feature /featurename:NETfx4", "keywords": null, "reference": null}, {"id": "000100703", "problem_statement": "What are the general performance benchmarks for running Aspen Mtell?", "solution": "Guidance for Mtell servers:\nRemove old log files every few months, especially the agent service log folder, which generates the most log entries. 
Log files are created on a daily basis, allowing deletion of older logs while preserving the latest.\nCheck the Message Retry section in the Aspen Mtell System Manager at regular intervals to manage pending or undelivered messages, especially after a network outage.\nCheck the runtime statistics and ensure the total execution time of the different execution cycles does not exceed the rate at which the cycles are scheduled to execute.\nGuidance for optimizing performance:\nFor systems with large numbers of agents, best practice is to delete unused and/or redundant agents if they are no longer needed.\nIt is also recommended to exclude Aspen folders from antivirus scans by following this KB article: https://esupport.aspentech.com/S_Article?id=000096542\nConsider partitioning services between separate machines, such as dedicated servers for the Mtell server and the SQL server \nExample customer benchmark #1:\nServer configuration and workload:\nServer CPU with 16 virtual processors\n64GB RAM\n1000+ live machine learning agents\n3000+ sensors\n500GB storage\nThis is considered a large and mature Mtell implementation; processes are able to run comfortably without bottlenecking.\n\nExample customer benchmark #2:\n\nServer configuration and workload:\nServer CPU with 8 virtual processors\n32GB RAM\n400+ live machine learning agents\n2400+ sensors\n250GB storage\nThis is from a new implementation where the customer is just starting to deploy and apply Mtell in production, and it may be considered moderately sized.\n\nMinimum specifications:\n\nIn order to find the minimum specifications for Aspen Mtell, go to https://www.aspentech.com/en/platform-support > Click on the appropriate version number > Click on Asset Performance Management (APM). This will download a PDF file which contains the minimum requirements for running Mtell depending on the application.", "keywords": null, "reference": null}, {"id": "000069036", "problem_statement": "Error message when launching Aspen Process Explorer:\nSecurity Initialization Error: Init Cache COM Error, Number: -2147467259", "solution": "To resolve this error, please check the following:\n1. Make sure that the default web site is running within IIS. If the default website is not started, this could be the cause of the problem (check in IIS Manager).\n2. Enable WRITE access for all users to the following folder: C:\ProgramData\AspenTech\AFW . In some cases it may also be necessary to look at the permissions of the folder inetpub\wwwroot\aspentech and add Authenticated Users to the folder permissions.\n3. Open AFW Tools (click the Start icon and type AFW Tools), "Client Registry Entries" tab, and double-click on the 'URL' line. This should be pointing to the Aspen Local Security security server (e.g., http:///AspenTech/AFW/Security/pfwauthz.aspx ). If the URL is blank, or if the node name of the security server is wrong, please change it immediately. Also refer to solution #107935 ( https://esupport.aspentech.com/S_Article?key=107935 ).\n4. Open Windows Explorer and navigate to C:\ProgramData\AspenTech\AFW . This is the location of the security cache files. Are there four files in this folder? If the cache files are missing (or if there is only one file here), please check the security server URL in AFW Tools, or run the SSTEST diagnostic utility as explained below. Note: Sometimes these four cache files become corrupt. It may be necessary to delete these cache files in order to resolve the InitCacheCOM error message. 
They will be regenerated the next time APEx starts up.\n5. Run the SSTEST utility you can find in Solution #106497 ( https://esupport.aspentech.com/S_Article?key=106497 ). This utility will report any errors it finds on security, and dump the results into the sstestresults.txt file. If in doubt, please forward the results to your nearest Support Center.\n6. Open Internet Explorer and try accessing the security server URL found in the AFW Tools (e.g. http://MyLocalSecurityServerName/AspenTech/AFW/Security/pfwauthz.aspx). Under normal circumstances, you should see a blank page. If an error page is returned, please check point 7 below, and run SSTEST to diagnose the problem. If you have changed the AFW server, ensure that this AFW server is the correct one for the user you are configured as; there may be multiple AFW servers in your environment.\n7. Open Internet Services Manager, select Sites and verify the TCP Port number of the Default Web Site: \n* Look at the TCP Port number. If this port number is anything other than 80, you will need to amend the security server URL to read: http://MyLocalSecurityServerName:xxxx/AspenTech/AFW/Security/pfwauthz.aspx, where xxxx is the port number you found in the TCP Port field.\n* Dynamic content, such as that in PHP and ASP.NET applications, needs IIS script permission and read access. If executables need to be run as well, they need to have the IIS execute permission and they need to be properly configured in the CGI Restriction List.\n8. Open the Windows Task Manager and see if you have Apache.exe running in the list of running processes. It usually gets installed with Oracle and it uses Port 80, which interferes with IIS since IIS is also configured to use that port. Shutting down the Apache.exe service should fix the problem as long as there are no other Security Access related issues.\n9. Some users have also noticed the following case. Suddenly, the Aspen InfoPlus.21 clients receive a 'security cache com' error and roles and applications are no longer shown in the AFW Security Manager. The problem disappears by itself a short time later, without any reconfiguration of the machine. They just noticed an unusually high CPU load while the problem was there. This can be a network-related issue (resolving users & groups from trusted domains). Indeed, there are a lot of network services that have to work correctly in order for our software to work, especially when security is used. Another thing to think about is that the Microsoft ADSI components (Active Directory Service Interfaces), which are used to read group information from the network, may be intermittently failing. This isn't so common, but we have seen it once or twice. The Microsoft ADSViewer can be used to troubleshoot ADSI connectivity issues.\n\n10. UPDATE - June 2023 - For one customer the problem was caused by the 'AFW Security Client Service' service not running on the end user system. In that specific instance the service was not starting because a named account was being used to start the service and the password associated with the account had been changed at the operating system level but not changed in the Windows services applet. 
To fix the problem the customer needed to double-click on the service, change the password via the 'Log On' tab, and then start the service.\n\nNote: Named accounts can be used to start additional AspenTech services; the following screenshot illustrates sorting the services according to logon type (in this instance the named account being used is an account called "InfoPlus.21" on the local machine; the .\ refers to the local machine):\n\n\nIf the account needs to be changed for the AFW Security Client Service, it will likely need to be changed for additional services.\n If you have followed every single step above and are still experiencing the same problem, then please check the Windows Event Viewer log file to see if you are getting any type of DCOM error. If so, then please follow the link below to a Microsoft Knowledge Base article for a list of DCOM errors. If the DCOM error you got in your Windows Event Viewer matches a DCOM error listed in the Microsoft KB article, then please follow the resolution provided within the article to resolve the problem.\n****** The link to this Microsoft Knowledge Article is: ****** \nhttps://msdn.microsoft.com/en-us/library/windows/desktop/dd542643(v=vs.85).aspx\n KeyWords\n-2147467259\n111391\ninit cache com\nprocess explorer\nsecurity\nAPE\nDCOM", "keywords": null, "reference": null}, {"id": "000098775", "problem_statement": "Query example to change the Repository for an individual tag or all tags within a Definition Record", "solution": "Before changing the repository of a tag, take a look at the following KB article: What are the consequences of moving a tag from one repository to another repository?\n\n1.- Change Repository for all tags within a Definition Record:\n\nUPDATE IP_ANALOGDEF SET IP_ARCHIVING = 'OFF';\nUPDATE IP_ANALOGDEF SET IP_REPOSITORY = 'REPOS1';\nUPDATE IP_ANALOGDEF SET IP_ARCHIVING = 'ON';\n\nWhere REPOS1 is the new Repository.\n\n2.- Change Repository for an individual tag within a Definition Record:\n\nUPDATE IP_ANALOGDEF SET IP_ARCHIVING = 'OFF' WHERE NAME = 'A1113A';\nUPDATE IP_ANALOGDEF SET IP_REPOSITORY = 'REPOS1' WHERE NAME = 'A1113A';\nUPDATE IP_ANALOGDEF SET IP_ARCHIVING = 'ON' WHERE NAME = 'A1113A';\n\nWhere REPOS1 is the new Repository and A1113A is the Tag Name.\n\nNote: You can also use the statement WHERE NAME LIKE. For example, WHERE NAME LIKE 'ATCL%' will make the change for all the tags that have a Name starting with ATCL.", "keywords": "Repository", "reference": null}, {"id": "000077421", "problem_statement": "This Knowledge Base article provides steps to configure Aspen InfoPlus.21 Data and Tag Replication for V8.0 and above.", "solution": "Following are the steps to configure the Replication SUBSCRIBER (Central IP.21 Database) & PUBLISHER (Plant-Level IP.21) machines.\n PUBLISHER – Plant-Level IP21:\nBefore beginning this procedure, determine which Aspen InfoPlus.21 plant-based systems will replicate data and tag information onto the central Aspen InfoPlus.21 system.\nAlso, please note that replication leverages 'Microsoft Message Queuing (MSMQ)', a messaging protocol that allows applications to communicate in a failsafe manner. Please make sure that MSMQ is installed and the MSMQ service is running on the Publisher and Subscriber machines.\nOn each of the plant Aspen InfoPlus.21 systems, you must:\n1. Load the Replication.rld file. Please follow KB # 77937 on loading the record using the RLD file.\n2. 
Create a RepSubscriberDef record and configure the IP_HOST with the name of the subscriber system.\n3. Add the TSK_PUBR task.\n4. Set the RepSubscriberDef record's Active SW to on.\n5. Start the TSK_PUBR task.\n6. Enable tags to be replicated.\n7. Enable database replication.\n2. Creating a RepSubscriberDef Record\nCreate a subscriber record based on RepSubscriberDef on each publisher system (individual InfoPlus.21 system). The subscriber record will define the subscriber (central InfoPlus.21 system) as the IP_HOST.\n1. Create the record.\n2. Make the record unusable.\n3. Define the subscriber IP_HOST_NAME. This is the central InfoPlus.21 system that the replicated data and tags will be published to.\n4. Make the record usable.\n3. Adding TSK_PUBR to the Individual InfoPlus.21 Systems\nThe individual Aspen InfoPlus.21 systems are the publishers of real-time process data and tag replication content to the centralized InfoPlus.21 system. For real-time process data and tag replication to take place, each of the individual plant-level Aspen InfoPlus.21 systems must be running TSK_PUBR.\nWarning: This applies to the smaller, individual Aspen InfoPlus.21 plant systems. Do not perform this procedure on the larger, central Aspen InfoPlus.21 system.\n1. On each of the individual plant-level Aspen InfoPlus.21 systems, use the Adding a Task topic instructions in conjunction with the following information to create the TSK_PUBR task:\nField | Setting\nExternal Task | Selected\nAuto restart | Selected\nTask name (TSK_XXXX) | TSK_PUBR\nSubSystem | Base\nExecutable | \InfoPlus.21\db21\code\ReplicationPublisherNG.exe\nOutput file | \nError file | \nWhen complete, the fields of the New Task Definition group box will contain the following settings:\n2. Click Add.\n3. Click START InfoPlus.21 to restart the Aspen InfoPlus.21 database.\n4. Turn on the RepSubscriberDef record’s ACTIVE_SW\nUsing InfoPlus.21 Administrator, set the RepSubscriberDef record’s ACTIVE_SW to ON.\n5. Start TSK_PUBR\nUsing InfoPlus.21 Manager, start TSK_PUBR.\n6. Enabling Database Level Replication\n 7. Enabling Definition Level Replication\n 8. Enabling Tag Level Replication\n SUBSCRIBER - Central Database:\nFor data and tag replication, you must identify the central Aspen InfoPlus.21 database that will subscribe to information that is replicated from the smaller individual plant-level Aspen InfoPlus.21 databases.\nOn the central Aspen InfoPlus.21 database:\n1. Load the Replication.rld file. Please follow KB# 77937 on loading the record using the RLD file.\n2. Create a RepPublisherDef record for each publisher system and configure the IP_HOST with the name of the publisher system. Optionally, assign the IP_REPL_PREFIX.\n3. Add the TSK_SUBR task.\n4. Set the RepPublisherDef record's Active SW to on.\n5. Start the TSK_SUBR task.\n2. Creating a RepPublisherDef record\nCreate a publisher record based on RepPublisherDef on each subscriber system (typically, a single, central InfoPlus.21 system). The publisher record will define the publisher (individual InfoPlus.21 system) as the IP_HOST.\n1. Create the record.\n2. Make the record unusable.\n3. Define the publisher IP_HOST_NAME. This is the individual InfoPlus.21 system that will publish the replicated data and tags to the subscriber.\n4. Optionally, assign the IP_REPL_PREFIX. This is the prefix that identifies that publisher system and will be pre-pended to each record that is replicated from that publisher system.\n5. Make the record usable.\n3. 
Adding TSK_SUBR to the Central InfoPlus.21 System\nThe central Aspen InfoPlus.21 system is the subscriber of real-time data and tag replicated content that the individual or plant-level Aspen InfoPlus.21 systems contribute. For data and tag replication to take place, the central Aspen InfoPlus.21 system must be running TSK_SUBR.\nWarning: This applies to the central Aspen InfoPlus.21 system. Do not perform this procedure on the individual, plant-level Aspen InfoPlus.21 systems.\n1. On the central Aspen InfoPlus.21 system, use the Adding a Task topic instructions in conjunction with the following information to create the TSK_SUBR task:\nField | Setting\nExternal Task | Selected\nAuto restart | Selected\nTask name (TSK_XXXX) | TSK_SUBR\nSubSystem | Base\nExecutable | \InfoPlus.21\db21\code\ReplicationSubscriberNG.exe\nOutput file | \nError file | \n2. Click Add.\n3. Click START InfoPlus.21 to restart the Aspen InfoPlus.21 database.\n4. Turn on the RepPublisherDef record’s ACTIVE_SW\nUsing InfoPlus.21 Administrator, set the RepPublisherDef record’s ACTIVE_SW to ON.\n5. Start TSK_SUBR\nUsing InfoPlus.21 Manager, start TSK_SUBR.", "keywords": "Replication\nSUBSCRIBER\nPUBLISHER\nCentral", "reference": null}, {"id": "000099539", "problem_statement": "The need for creating a deployment package for installing Aspen Online and selecting a specific SQL Database instance for Aspen Online to use without manually intervening in that process.\n\nWhen running the Aspen Online Service with the Local System account, you'll need to run selectinstance.bat to select a specific SQL Database instance for the application to use.\nYou can see more on this via the Knowledge Base article here: https://esupport.aspentech.com/S_Article?id=000094631\nThis is the manual way of doing so.\n\nThis KB article will go through the scripting switches you can use with V12.1 Aspen Online to automate and deploy this to multiple clients without user intervention.", "solution": "Two arguments have been added to SelectInstance.bat to help automate deployment packaging. When you run SelectInstance.bat, you can optionally add one of these arguments to the command to specify the SQL server instance, instead of selecting it in step 5. If you specify both, the first one specified will be used.\n/InstanceName:name allows you to specify the name of the SQL server instance\n/InstanceIndex:index allows you to specify the 1-based integer index of the SQL server instance. You must know the integer index if you have multiple SQL instances installed on the machine.\nExample:\nSelectInstance.bat /InstanceName:ASPENDB\nor\nSelectInstance.bat /InstanceIndex:1\n\nExample of how it would look in command prompt:", "keywords": null, "reference": null}, {"id": "000101531", "problem_statement": "This article describes how to create an animated gauge using the new AspenONE Process Explorer feature, Process View Authoring.", "solution": "Open AspenONE Process Explorer in the web browser of your preference and go to the Process View Authoring page.\nClick the plus sign (+) to create a new project and give it a new name in the Description field.\nIf you'd like to add a title for your dashboard, click on the text icon (A) under Basic Shapes. You may customize it using the Text Properties tab on the right.\nNext, click on the gauge symbol under the Control panel. 
Add a name for the gauge.\nGo to the Data tab and add the Data Source where the tag you'd like to view is located.\nType the tag next to the Name and select the attribute you'd like the gauge to work with. In this case, we'll be using the tag's value.\nGo to the Limits tab and select the Limit you'd like to view on the gauge (High, High-High, Low, Low-Low, etc.).\nYou may change the color you'll view if the tag value reaches the limit you chose in the previous step.\nType the name of the tag next to Name and select the attribute (it should match the one selected in step 7).\nIf you would like to add the value to the dashboard next to the gauge, click on the real data icon (0.2) under the Control panel. Give the object a name and customize it to your preference.\nOpen the Animation tab and enable the Animation On button.\nClick the plus sign, write a condition for the tag, and choose the Fg color.\nOpen the Data tab, select the Data Source, tag name, and attribute.\nSave your graphic by clicking the Save icon in the upper right corner. Once this is done, go back to A1PE to view your newly created dashboard.\nNote: the values chosen for the conditional in step 12 were set according to what's established for the High Limit value in Aspen InfoPlus.21.\n\n\n\nKey Words\nNew Features\nAspenONE Process Explorer\nA1PE\nProcess View Authoring\nAnimation", "keywords": null, "reference": null}, {"id": "000100066", "problem_statement": "When working on an APC server it can be hard to figure out which Emergency Patch has been installed.", "solution": "Version V14 and newer:\nStarting with V14, there is a new program called aspenONE Manager which contains an "Installed Updates/Patches" section where you can see which Emergency Patch or Cumulative Patch has been installed. This program can be accessed from the Windows Start menu, under the Aspen Configuration Folder:\n\n\n\n\nVersion 12.0 and 12.1:\n\nThere is a solution proposed in KB article https://esupport.aspentech.com/S_Article?id=000099481 ; this is an easy way to check if a Cumulative Patch has been installed, but it has the disadvantage that the AspenTech Uninstaller doesn't show if an Emergency Patch has been installed.\n\nThe way to check if an Emergency Patch is currently installed is by opening the Registry Editor (either searching for it in the Windows apps or running the command Win + R -> regedit) and navigating to the following path:\n\nComputer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\AspenTech\Setup\Products\n\nThere are several folders under Products; this is how they correspond to each APC product/server:\nFolder Name | APC Product\nAPCBDV8 | Desktop (DMC3 Builder)\nAPCONLINE | Online\nAPCPFM | Performance Monitor (Aspen Watch)\nAPCWEBSERVER | Production Control Web Server\n\nIf you select any of these, the two keys that we are interested in are CurrentEPVersion and ProductVersion:\n\n\nCurrentEPVersion shows the number of the last Emergency Patch installed; ProductVersion shows if a Cumulative Patch has been installed, where the third position indicates the CP number (for example, 20.1.1.0 means CP1 has been installed).
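\n\nThese registry values can also be read from a command prompt instead of the Registry Editor, for example for the Online server (a sketch; CurrentEPVersion may be absent if no Emergency Patch has been applied):\n\nrem Query the APC Online server patch level from the registry\nreg query "HKLM\SOFTWARE\WOW6432Node\AspenTech\Setup\Products\APCONLINE" /v CurrentEPVersion\nreg query "HKLM\SOFTWARE\WOW6432Node\AspenTech\Setup\Products\APCONLINE" /v ProductVersion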
\n\nVersion 11 or earlier:\n\nAn alternative but more complex method of checking if an Emergency Patch has been installed is by opening the patch release notes and reviewing the Functional Areas Affected, then searching for that specific .dll or .exe under C:\Program Files (x86)\AspenTech and reviewing the file details to see if the file version matches the version stated in the release notes.\n\nFor example, the EP14 for V11 CP1 release notes show that applying this patch will update the ApcInfrastructure.dll file to version 19.0.1.5498:\n\n\nSo on a V11 CP1 Online Server machine, we need to navigate to the file path that was previously mentioned and search for ApcInfrastructure.dll; once we find it, right-click on it -> Properties -> Details and see if the File version matches the Release Notes:\n\n\nThe previous screenshot confirms that V11 CP1 EP14 has been installed on this machine.", "keywords": "APC, DMC, DMC3, EP, CP, emergency patch, cumulative patch", "reference": null}, {"id": "000097504", "problem_statement": "This document describes how to capture memory snapshots for finding memory leaks using Microsoft’s User-Mode Dump Heap (UMDH) tool. \n\nWARNING !!! System reboot will be required. The process described in this document involves changes to the behavior of the Windows operating system which can lead to severe performance degradation of the program being monitored. It is critical that all steps be reversed (undone) after conducting the memory leak detection session.", "solution": "Step 1: Install the Windows Debugging Tools\nThe latest Windows SDK can be found using this link: https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/. When installing, you only need to choose the debugging tools option. \n\nNote: For Windows Server 2016/2019, please obtain the Windows 10 SDK (the remainder of this article assumes Windows 10 SDK was selected and you installed into the default folder)\n\nStep 2: Optionally, Learn About the UMDH Tool\nIf desired, you can learn about the UMDH tool by opening the WinDbg help file named debugger.chm from the folder C:\Program Files (x86)\Windows Kits\10\Debuggers\x64. Find the documentation for UMDH by using the Search panel to search for “using UMDH”. More details can also be found under the “Tools Included in Debugging Tools for Windows” section. \n\nStep 3: Set Environment Path to the Debug Tools Folder\nFrom the Control Panel, navigate to Control Panel\System and Security\System, then click Advanced system settings. Click Environment Variables, then from the System variables panel, find and select the variable named Path. Click Edit. Select the Variable value field, then move to the end and add a semicolon (;). Then enter (or paste) the folder path to the debugging tools, such as: C:\Program Files (x86)\Windows Kits\10\Debuggers\x64 \nClick OK three times to save the changes. \n\nNote: For Aspen 32-bit installs, specify the x86 folder, such as C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\n\nStep 4: Set Environment Variable for Symbol Path\nUse the steps above to return to the System variables panel. Add the following system variable:\n_NT_SYMBOL_PATH = c:\appSymbols;srv*c:\symCache*https://msdl.microsoft.com/download/symbols\nCreate the folder c:\symCache. (downloaded symbols can be stored here)\nCreate the folder c:\appSymbols (additional Aspen PDB files may be provided and can be copied into this location - note, symbols provided directly from the developers are better because they are the ones who will analyze the logs and it will help them to get the correct call stacks)\nFor more information, see: https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/preparing-to-use-umdh
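\n\nAlternatively, the symbol path from Step 4 can be set from an elevated command prompt with the built-in setx tool (a sketch using the same paths as in Step 4; /M writes a machine-level variable, and the reboot noted in the next step is still required):\n\nrem Set the machine-level symbol path used by UMDH\nsetx _NT_SYMBOL_PATH "c:\appSymbols;srv*c:\symCache*https://msdl.microsoft.com/download/symbols" /M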
\n\nStep 5: Set Environment Variable to Disable BSTR Caching\nUse the steps above to return to the System variables panel. Add the following system variable:\nOANOCACHE = 1\n\nNote: The computer must be rebooted for these settings to take effect.\n\nStep 6: Unzip the AspenMemoryLeakCheck.zip File\nLog onto the computer that is suspected of having an application with a memory leak. Copy the AspenMemoryLeakCheck.zip file to a local disk folder. Unzip the file. \n\nStep 7: Stop the Problem Executable\nStop the executable program that is having the problem. In this example, the CalcScheduler.exe program is to be stopped by using the Windows Services control panel. Depending on the type of executable, you may have to use different techniques to stop it. For example, for an Aspen InfoPlus.21 external task you would use the Aspen InfoPlus.21 Manager to stop (and later re-start) it. \n\nStep 8: Run GFLAGS\nOpen an elevated security command prompt window (run as Administrator), then navigate to the folder where the AspenMemoryLeakCheck ZIP file was extracted. \nType GFLAGS at the command line, then press the ENTER key. The following window will appear. \n\nIn the Global Flags dialog, select the Image File tab and type the application exe name, such as CalcScheduler.exe (followed by the TAB key to refresh), then enable the “Create user mode stack trace database” check box. Click Apply to save the changes. Click OK to close the window. \n\nStep 9: Start the Problem Executable\nFor example, if the problem application is a Windows background service, then use the Windows Services control panel to start the service. \n\nStep 10: Run Tests\nLet the executable run for enough time for the process to finish any first-time initializations and to eventually stabilize. \n\nOnce stabilized, it is time to perform the first process memory snapshot command: \n\n_UMDH 1 \n\nObtain the Process ID of the problem executable, such as by using the Windows Task Manager. When prompted in the next step, specify that Process ID.\n\nLet the executable run for a while. Also perform tasks associated with the executable such that a variety of logic paths are exercised in the process. It is important to attempt to invoke logic in the executable that might contain a memory leak. Depending on the site conditions, you may need to wait several minutes, hours, or even days. However, there is no need to wait so long that an overabundance of leakage occurs. Once you are certain that a sufficient amount of memory has leaked (such as by viewing the Windows Task Manager), then run the second snapshot command: \n\n_UMDH 2 \n\nFinally, generate a comparison of the two files using this command: \n\n_Compare \n\nSend all the log files to your Aspen Support representative for analysis. \n\nStep 11: Cleanup\nIMPORTANT ! After the memory detection session is finished, it is critical to return the customer’s system to its original operating condition. This means that all previous steps must be reversed. \n\n1. Note: This step is optional if you intend to continue using the GFLAGS tool in the future. From the Control Panel, navigate to Control Panel\System and Security\System, then click Advanced system settings. Click Environment Variables, then from the System variables panel, find and select the variable named Path. Click Edit. Select the Variable value field, then move to the end and remove the path that was added earlier in step 3. Delete the variables that were added in steps 4 and 5.\n2. From an elevated security command prompt, run the GFLAGS tool. 
In the Global Flags dialog, select the Image File tab and type the application exe name, such as CalcScheduler.exe, then clear (un-check) the “Create user mode stack trace database” check box. Click Apply to save the changes. Click OK to close the window.\n3. It is important to verify the cleanup steps just above. \n a. From an elevated security command window, launch the Windows Registry Editor. \n Type: regedit.exe \n b. Navigate to the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options, and ensure that the executable that was being checked no longer appears in the list of other executables. If it does, delete it from the list.\n c. Close regedit.\n4. If it was originally running, then restart the executable that was being checked.", "keywords": "diagnostic", "reference": null}, {"id": "000101557", "problem_statement": "After installing the Aspen Plus family V14 Cumulative Patch 1, document (.apw) files need to be re-run.\nAspen Process Definition Files (.appdf and .aprpdf) no longer work.", "solution": "The V14 Cumulative Patch 1 makes changes to the recdef system definition files.\nThis means that the Aspen Process Definition Files (.appdf and .aprpdf) will be different and will need to be regenerated if they are referenced in other products such as Aspen Custom Modeler (ACM) or Aspen Utilities Planner (AUP).\nIn addition, any user interface customizations will need to be redone and .apw files will need to be re-run.\n\nIn Aspen Plus, users will see a message when opening an .apw file: \n \n\nAspen Plus can handle this automatically. Other products do not automatically fix the problem; therefore, users need to regenerate the APPDF or APRPDF using File Export in the updated version.", "keywords": null, "reference": null}, {"id": "000101530", "problem_statement": "The KB 000096371 shows how to use Aspen Plus to model solvent extraction. Could Aspen Custom Modeler models be used to replace the RStoic + SEP + design specification?", "solution": "Refer to the original KB for the foundation of the modeling approach. The attached files show how ACM model export can be used to streamline the approach. \n\nThe model SXCell replicates the calculations of RStoic, SEP, and the design specification to let the aqueous/organic equilibrium satisfy the partition coefficient, which is assumed to have the form logD = a + b * pH + c * [LH], where LH is the concentration of the extracting agent.\n\nFor convenience, the parameters a, b, c are stored in a structure, so the same values can be shared by several blocks.\n\nThe solvent extraction circuit is modelled in ACM. The SXCell model can be exported and used in Aspen Plus. This avoids the requirement to create multiple blocks (RSTOIC, SEP) and design specs. 
One could also use more complex expressions for the partition calculations.\n Model SXCell\n\n inlet_aq as input MoleFractionPort (description:"aqueous phase inlet");\n inlet_org as input MoleFractionPort (description:"solvent phase inlet");\n\n outlet_aq as output MoleFractionPort (description:"aqueous phase outlet");\n outlet_org as output MoleFractionPort (description:"solvent phase outlet");\n\n T as temperature (description:"outlet temperature", fixed);\n p as pressure (description:"outlet pressure", fixed);\n Q as enthflow (description:"heat duty", 0);\n\n z_aq_in(componentlist) as molefraction (description:"global true composition inlet aqueous");\n x_aq_in(componentlist) as molefraction (description:"liquid true composition inlet aqueous");\n s_aq_in(componentlist) as molefraction (description:"solid true composition inlet aqueous");\n sfrac_aq_in as fraction (description:"solid ratio inlet aqueous");\n rat_aq_in as ratio_ (description:"conversion ratio true/apparent inlet aqueous");\n x_aq(componentlist) as molefraction (description:"outlet aqueous phase composition");\n x_org(componentlist) as molefraction (description:"outlet organic phase composition");\n F_aq(componentlist) as flow_mol (description:"flow of component in outlet aqueous phase");\n F_org(componentlist) as flow_mol (description:"flow of component in outlet organic phase");\n x_aq_apparent(componentlist) as molefraction (description:"global true composition outlet aqueous");\n x_aq_out(componentlist) as molefraction (description:"liquid true composition outlet aqueous");\n s_aq_out(componentlist) as molefraction (description:"solid true composition outlet aqueous");\n sfrac_aq_out as fraction (description:"solid ratio outlet aqueous");\n rat_aq_out as ratio_ (description:"conversion ratio true/apparent outlet aqueous");\n pH as ph (description:"pH of the liquid phase");\n rhol_aq as dens_mol_liq (description:"molar density aqueous outlet");\n rhol_org as dens_mol_liq (description:"molar density organic outlet");\n \n // extraction "reactions"\n reac as external SXReactions; // ("Structures.COPPER");\n useEquilibrium as YesNo (description:"use equilibrium", "Yes"); \n calc_equilibrium([1:reac.NR]) as YesNo (description:"local selection for reaction", "Yes");\n rate([1:reac.NR]) as realvariable (description:"reaction rates", fixed, 0);\n conc_aq_cation([1:reac.NR]) as conc_mole (description:"concentration of key cation in outlet aqueous phase");\n conc_org_LHcat([1:reac.NR]) as conc_mole (description:"concentration of key LHcat in outlet organic phase");\n conc_org_LH([1:reac.NR]) as conc_mole (description:"concentration of LH in organic phase");\n \n // calculate true composition of aqueous inlet\n call (z_aq_in, x_aq_in, s_aq_in, sfrac_aq_in, rat_aq_in) = pTrueCmp2 (T, P, inlet_aq.z);\n \n // mass balance aqueous phase\n for c in componentlist do\n F_aq(c) = inlet_aq.F * rat_aq_in * z_aq_in(c) + sigma(foreach(ir in [1:reac.NR]) reac.stoic_aq(ir, c) * rate(ir));\n endfor\n \n // mass balance organic phase \n for c in componentlist do\n F_org(c) = inlet_org.F * inlet_org.z(c) + sigma(foreach(ir in [1:reac.NR]) reac.stoic_org(ir, c) * rate(ir));\n endfor\n\n // heat balance\n outlet_org.F * outlet_org.h + outlet_aq.F * outlet_aq.h = inlet_org.F * inlet_org.h + inlet_aq.F * inlet_aq.h + Q;\n\n // calculate true composition for outlet aqueous phase\n sigma(F_aq) * x_aq_apparent = F_aq;\n call (outlet_aq.z, x_aq_out, s_aq_out, sfrac_aq_out, rat_aq_out) = 
pTrueCmp2 (T, P, x_aq_apparent);\n \n // organic phase (no chemistry)\n outlet_org.F = sigma(F_org);\n outlet_org.F * outlet_org.z = F_org;\n\n // calculate pH of outlet aqueous phase\n call (pH) = ppH (T, P, outlet_aq.z);\n\n // allows user to control if equilibrium is enforced (use "no" to help with first convergence)\n if useEquilibrium == "Yes" then\n calc_equilibrium : "Yes";\n else\n calc_equilibrium : "No";\n endif\n \n for i in [1:reac.NR] do\n if calc_equilibrium(i) == "Yes" then\n conc_org_LH(i) = rhol_org * outlet_org.z(reac.name_LH(i));\n conc_aq_cation(i) = rhol_aq * outlet_aq.z(reac.name_cation(i));\n conc_org_LHcat(i) = rhol_org * outlet_org.z(reac.name_LHcat(i));\n // let rate free to allow reaching equilibrium\n rate(i) : free;\n conc_org_LHcat(i) = 10^(reac.coef_const(i) + reac.coef_pH(i) * pH + reac.coef_LH * log10(conc_org_LH(i))) * conc_aq_cation(i);\n endif\n endfor\n\n // just to show we could use activities instead of concentrations if we dared...\n // gamma_aq(componentlist) as act_coeff_liq;\n // call (gamma_aq) = pAct_Coeff_Liq(T, p, outlet_aq.z);\n \n // aqueous outlet port properties\n outlet_aq.F = sigma(F_aq) * rat_aq_out;\n outlet_aq.T = T;\n outlet_aq.p = p;\n call (outlet_aq.h) = pEnth_mol_liq(outlet_aq.T, outlet_aq.p, outlet_aq.z);\n call (rhol_aq) = pDens_mol_liq(outlet_aq.T, outlet_aq.p, outlet_aq.z);\n outlet_aq.V = 1 / rhol_aq;\n\n // organic outlet port properties\n outlet_org.T = T;\n outlet_org.p = p;\n call (outlet_org.h) = pEnth_mol_liq(outlet_org.T, outlet_org.p, outlet_org.z);\n call (rhol_org) = pDens_mol_liq(outlet_org.T, outlet_org.p, outlet_org.z);\n outlet_org.V = 1 / rhol_org;\n\nEnd\n\n\nIn general, the MaterialStream stream type conveniently replicates the features of the Material Streams of Aspen Plus, but it requires the chemistry structure, which is only created when you export a simulation from Aspen Plus to Aspen Plus Dynamics using true components. To work around this inconvenience, a custom stream type MoleFractionStream has been created. 
The procedure pTrueCmp2 is used to convert from apparent components to true components.\n Stream MoleFractionStream\n inlet as input MoleFractionPort;\n outlet as output MoleFractionPort;\n\n F as output flow_mol_rev (Description:\"Total mole flow\", record:true);\n Fm as output flow_mass_rev (Description:\"Total mass flow\", record:true);\n Fv as output flow_vol_rev (Description:\"Total volume flow\");\n\n FR as input flow_mol_rev (Description:\"Specified total molar flow\"); \n FmR as input flow_mass_rev (Description:\"Specified total mass flow\");\n ZR(componentlist) as input molefraction_wide (Description:\"Specified mole fraction\", 1.0/size(componentlist));\n ZmR(componentlist) as input massfraction_wide (Description:\"Specified mass fraction\", 1.0/size(componentlist)); \n\n Zn(componentlist) as output molefraction (Description:\"Mole fraction\", 1.0/size(componentlist));\n Zmn(componentlist) as output massfraction (Description:\"Mass fraction\", 1.0/size(componentlist)); \n Fcn(componentlist) as output flow_mol_rev (Description:\"Component mole flow\");\n Fmcn(componentlist) as output flow_mass_rev (Description:\"Component mass flow\"); \n\n T as input,output temperature (Description:\"Temperature\", record:true);\n P as input,output pressure (Description:\"Pressure\", record:true);\n h as enth_mol (Description:\"Molar enthalpy\");\n MW as molweight (Description:\"Molar weight\");\n Rho as dens_mol (Description:\"Molar density\");\n Rhom as dens_mass (Description:\"Mass density\");\n hl as enth_mol_liq (Description:\"Liquid molar enthalpy\");\n Rhol as dens_mol_liq (Description:\"Liquid molar density\");\n MWl as molweight (Description:\"Liquid molar weight\");\n\n MWc(componentlist) as molweight (Description:\"Component molar weight\");\n\n // Equate inlet and outlet port variables\n outlet.F = inlet.F;\n outlet.T = inlet.T;\n outlet.P = inlet.P;\n outlet.Z = inlet.Z;\n outlet.h = inlet.h;\n outlet.V = inlet.V;\n\n // Equate internal variables and port variables \n F = inlet.F;\n P = inlet.P;\n T = inlet.T;\n h = inlet.h;\n Zn = inlet.Z;\n Fm = F * MW;\n Fv = F / rho;\n Fcn = F * Zn;\n Fmcn = Fm * Zmn;\n rho = 1 / inlet.V;\n \n call (MWc) = Pmolweights ();\n call (MW) = pmolweight (Zn);\n\n Rhom = Rho * MW;\n rho = rhol;\n Zmn * MW = Zn * Mwc;\n h = hl;\n\n specifyTrue as YesNo (description:\"specify composition as true components\", \"Yes\");\n Zan(componentlist) as molefraction (description:\"apparent mole fraction\");\n Zamn(componentlist) as massfraction (description:\"normalized apparent mass fraction\");\n ZamR(componentlist) as massfraction (description:\"specified apparent mass fraction\");\n xtrue(componentlist) as molefraction (description:\"liquid true composition\");\n strue(componentlist) as molefraction (description:\"solid true composition\");\n sfrac_true as fraction (description:\"solid ratio\");\n rat_true as ratio_ (description:\"conversion ratio true/apparent flow\");\n \n if not inlet.IsConnected then\n FmR : fixed;\n T : fixed;\n P : fixed;\n if specifyTrue == \"Yes\" then\n ZmR : fixed;\n Zmn * sigma(ZmR) = ZmR;\n else\n ZamR : fixed;\n Zamn * sigma(ZamR) = ZamR;\n Zamn * MW = Zan * Mwc;\n call (Zn, xtrue, strue, sfrac_true, rat_true) = pTrueCmp2 (T, P, Zan);\n \n endif\n \n call (hl) = pEnth_mol_liq(T, p, Zn);\n call (rhol) = pDens_mol_liq(T, p, Zn);\n Fm = FmR;\n endif\n\nEnd", "keywords": "pTrueCmp2, electrolytes, solvent extraction, model export", "reference": null}, {"id": "000101482", "problem_statement": "How to use the Collection Server settings for
the Auto Upload Tool and usage log uploads.", "solution": "The Collection Server Option allows you to collect usage log files by transmitting them to a shared collection server from multiple SLM servers within your organization. The collection server will then be configured to transmit usage logs from these servers to AspenTech.\nThis article will provide instructions on how to configure a collection server and how to upload usage logs from license servers to the collection server. \nUsing a Machine as a Collection Server (Using Server Mode)\nOn the Custom Configuration (Step 2 of 2) screen, when you select either the HTTPS, SFTP, or Email option for the Upload Method, you can select the Use this machine as a collection server check box to enable this server to act as a Collection Server. Any log files it receives from other servers are sent to AspenTech along with any log files from this server. AUT uses the HTTPS, SFTP or Email option to send the collected log files to AspenTech. Server Mode supports two options:\nNetwork Share\nHTTP Server\nNetwork Share\nSelect the Network Share option to share a folder on this machine, so that AUT or other SLM Servers can deposit files into this shared folder.\nSpecify a Shared Folder Location. This will open a regular folder share dialog box. Share the folder and click Permissions to grant read/write permissions to the required set of users.\nAUT on other SLM Servers will now point to this shared folder location through the UNC convention \\\\MyCollectionServer\\UsageUpload\\ and deposit files in this shared folder.\nHTTP Server\nThe HTTP Server option is enabled only if the HTTP Server feature has been installed from the AUT installer.\n\nNote: The HTTP Server feature requires IIS to be installed on the Collection Server. In addition, it also requires the HTTP Activation and Non-HTTP Activation features of Microsoft .NET Framework. The AUT installer will check for these dependencies if you select the HTTP Server feature.\nIn the Server Mode section, select the HTTP Server option.\nIn the Upload files to folder field, specify a directory. The collection server uses this directory to collect log files from AUT on other SLM Servers.\nSelect an authentication mechanism as appropriate. The user ID should have read/write access privilege to the directory specified above.\nUploading to a Collection Server (Client Mode)\n1. On the Custom Configuration (Step 2 of 2) screen, from the Upload Method drop-down list, select To Collection Server to configure the Auto Upload Tool to transmit usage log files from one server to another on your network in Client Mode. In Client Mode, the AUT sends files to a Collection Server, whereas in Server Mode, it accepts log files from AUT on other SLM Servers.\nIn this mode, AUT transfers the log files from this server to the server specified. AUT should be installed and configured on the Collection Server (for example: MyCollectionServer or MyHttpServerName). AUT uses the HTTPS, SFTP or Email option to send the collected log files to AspenTech.\n2. In the Client Mode section, select one of the following options:\no Network Share: If you select the Network Share option, perform the following tasks:\na. Specify the Shared folder location.\nb. Specify a user name and password with read-write access privilege to the shared folder on the Collection Server.\no HTTP Server: Use this option to transmit files through the HTTP protocol to an internal Collection Server.
This option is found to be firewall friendly in many user environments. The HTTP Server feature of the AUT must be installed on the Collection Server (for example: MyHttpServerName) for this option to work. If you select the HTTP Server option, perform the following tasks:\na. Specify the HTTP Server name.\nb. Select the Enter User Credentials check box if you want to specify a user name and password. The user name/password is optional, and depends on the authentication method specified on the HTTP Server end.\n3. Click Continue.\n4. Click Finish.\n Related Articles:\nAuto Upload Tool Configuration Settings", "keywords": null, "reference": null}, {"id": "000101525", "problem_statement": "Some customers want to extract the HH (hour), MM (minute), and SS (second) values separately from the THISTM parameter for use in calculations. They only require the hour and minute values, not the date, for a particular calculation they perform.", "solution": "To extract the hour, minute, and second values from THISTM, you can use the LSTDAY and LSTSEC parameters to perform the necessary calculations.\n\n\n LSTDAY: The LSTDAY parameter represents the day for which the calculations are being performed.\nLSTSEC: The LSTSEC parameter represents the integer number of seconds since midnight of the day specified by LSTDAY. It provides the time information in seconds.\n\nSo, by retrieving the value of LSTSEC, you can calculate the number of hours, minutes, and seconds that have passed since midnight of the day given by LSTDAY, as shown in the sketch below.
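Purely to illustrate the arithmetic (DMCplus calculations use their own syntax, and the function name here is made up), a minimal Python sketch:\n\ndef hms_from_lstsec(lstsec):\n    # LSTSEC is the integer number of seconds since midnight of LSTDAY\n    hh, rem = divmod(lstsec, 3600)   # whole hours\n    mm, ss = divmod(rem, 60)         # whole minutes, leftover seconds\n    return hh, mm, ss\n\n# Example: 49530 seconds after midnight is 13:45:30\nassert hms_from_lstsec(49530) == (13, 45, 30)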
", "keywords": "DMCplus, Calculation, THISTM, LSTDAY, LSTSEC, Time", "reference": null}, {"id": "000101296", "problem_statement": "This knowledge base article illustrates how to configure the CIM-IO Test API software to run as an administrator.", "solution": "The CIM-IO Test API software is a valuable tool for testing and troubleshooting industrial control systems. However, sometimes users may encounter issues with the software prompting them to log in as an administrator. This can be due to insufficient privileges or permissions on the user's machine. To help resolve this issue, we recommend checking the application's configuration settings to ensure that the necessary privileges and permissions have been granted.\nTo configure the CIM-IO Test API software to run as an administrator, please follow the steps below:\nGo to the directory where the software is installed.\nRight-click on the application executable and select \"Properties\".\nIn the \"Properties\" window, check the \"Compatibility\" tab.\nUnder \"Settings\", ensure that the \"Run this program as an administrator\" box is checked.\nClick \"Apply\" and then \"OK\" to save the changes.\n\n\nBy following these steps, you can configure the CIM-IO Test API software to run as an administrator, which should resolve any issues related to insufficient privileges or permissions. It is important to note that these steps may vary slightly depending on the version of the software and the operating system being used.\n\nIf you continue to experience issues with the CIM-IO Test API software, we recommend contacting technical support for further assistance.", "keywords": "CIM-IO Test API, Administrator, privileges", "reference": null}, {"id": "000101358", "problem_statement": "Aspen Production Control Web Server provides users with options to select how the final calculation result for the current custom calculation will be obtained and displayed in a KPI Plot. With multiple options available, users may face difficulty in choosing the appropriate option that meets their specific reporting needs.", "solution": "Aspen Production Control Web Server offers six calculation result options that users can choose from when determining how the final calculation result for the current custom calculation is obtained and displayed in a KPI Plot cell.\nThese options are as follows:\nAverage - This is the default option that displays the average of all calculation results that occurred within the specified aggregate type and time range. Users can choose from Current Averages, Hourly, Daily, and Monthly aggregate type and time range options.\nStandard Deviation - This option displays the standard deviation of all calculation results that occurred within the specified aggregate type and time range.\nMaximum - This option displays the maximum value of all calculation results that occurred within the specified aggregate type and time range.\nMinimum - This option displays the minimum value of all calculation results that occurred within the specified aggregate type and time range.\nAccumulate - This option counts all calculation results that occurred within the specified aggregate type and time range. It is useful for reporting the number of times a true-or-false result occurs within the specified aggregate type and time range.\nSummation - This option displays the sum of all values for a particular variable category. Users can select from CVs, MVs, FFs, or MVs+FFs variable categories using the Var Group list. The Summation option is only available if the Scope is set to \"General.\"\n\n\nUsers can choose from Average, Standard Deviation, Maximum, Minimum, Accumulate, and Summation methods, depending on their specific reporting needs. By understanding these options, users can effectively analyze and report their data using Aspen Production Control Web Server. A toy illustration of what each method computes follows.
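For reference only, an illustrative Python summary of what each method computes (this is not the product's internal implementation, and whether the product uses the sample or the population standard deviation is not stated here):\n\nfrom statistics import mean, stdev\n\nresults = [2.0, 3.5, 1.0, 4.5]  # calculation results within the chosen time range\naggregates = {\n    \"Average\": mean(results),\n    \"Standard Deviation\": stdev(results),\n    \"Maximum\": max(results),\n    \"Minimum\": min(results),\n    \"Accumulate\": len(results),  # counts results, e.g. true-or-false occurrences\n    \"Summation\": sum(results),   # sum over the selected variable group\n}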
", "keywords": "Aspen watch, Custom Calculation, Methods, KPI plot, Average, Standard Deviation, Maximum, Minimum, Accumulate, and Summation", "reference": null}, {"id": "000101357", "problem_statement": "The performance of the APC Performance Monitor on Windows can be impacted by several factors, such as inconsistent network time protocols, enabled screen savers, disk fragmentation, virus scanner interference, and enabled tracing in Data Sources (ODBC). These factors can degrade the server performance, negatively affecting the Aspen InfoPlus.21 database, and the APC Performance Monitor products hosted on Aspen Watch or Aspen RTO Watch servers.", "solution": "Implement a Network Time Protocol (NTP)\nEnsure all computers on the network use a consistent Network Time Protocol (NTP) to accurately obtain, analyze, and process data with consistent timestamps. Use the Windows Time service (W32tm.exe), which is included with the operating system, to accomplish this.\nDisable Screen Saver\nDisable the screen saver on the server while running Aspen InfoPlus.21 to save CPU cycles and enhance server performance.\nDefragment Disks\nDefragment the disks periodically after shutting down Aspen InfoPlus.21 to maintain optimal disk performance. Avoid attempting disk defragmentation while Aspen InfoPlus.21 is running.\nConfigure Virus Scanners\nConfigure virus scanners to exclude scanning the following directories:\nH21 and subdirectories that contain history repositories. For details, see Changing the InfoPlus.21 Historian Repository Locations.\nPROGRAMDATA\\app and subdirectories that contain application files. For details, see Copy Online Configuration Files and Model Files to the Server.\nOverview of Firewall, Antivirus and Windows permission requirements for Manufacturing Execution Systems Products\nAspenTech: Knowledge Base\n Disable Tracing in Data Sources (ODBC)\nEnsure that Tracing in Data Sources (ODBC) is disabled by following these steps:\nAccess the Windows Control Panel.\nFrom Control Panel, access Administrative Tools.\nFrom Administrative Tools, access Data Sources (ODBC).\nIn the ODBC Data Source Administrator dialog box, click the Tracing tab.\nVerify that the When to trace button displays Start Tracing Now. If the When to trace button displays Stop Tracing Now, it indicates that Tracing is currently enabled. Click the button to disable Tracing.", "keywords": "APC Performance Monitor, Aspen RTO Watch, Aspen InfoPlus.21, CPU, configuration, Tracing, Defragment, Anti-Virus scanning", "reference": null}, {"id": "000101389", "problem_statement": "The APC DMC Controller has a low limit of -9999 for variables. However, this value is considered a \"bad value\" due to historical reasons, as it signifies an out-of-range value or a connectivity issue between the instrument and the DCS. This limitation poses challenges when dealing with instruments that can measure different flow magnitudes.", "solution": "To overcome the limitation of the low limit of -9999 and prevent the variable from being marked as bad, the following steps can be taken:\nScale the Variable: Try scaling the value of the variable so that it falls within the range of -9997 or higher. For example, you can divide the value by 1000 and use K-units instead of regular units for the variable.\n Maintain Consistent Scaling: It is important to ensure that the scaling applied does not have a wide magnitude variation between variables. This is because the SS Optimizer performs better when variables have similar scaling ranges. Adjust the scaling factor accordingly to maintain consistency across variables.\n\nBy scaling the variable, you can bring it within a valid range that avoids triggering the bad value check. This allows the variable to be considered valid and included in the calculation process, as sketched below.
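A rough illustration of the rescaling idea (plain Python with a made-up instrument value; the -9999 bad-value check itself is internal to the controller):\n\nBAD_VALUE_LIMIT = -9999      # values at or below this are treated as bad\n\nflow_eu = -25000.0           # raw instrument value in engineering units\nflow_k = flow_eu / 1000.0    # rescale to K-units: -25.0\n\n# The rescaled value now stays well inside the valid range (above -9997)\nassert flow_k > -9997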
", "keywords": "APC DMC Controller, Low Limit, Bad value, limitation, scale, -9999", "reference": null}, {"id": "000101390", "problem_statement": "When running Aspen Watch Maker and attempting to execute the Install Database Configuration, some users are experiencing crashes. This issue prevents the successful installation of the Database Configuration, causing frustration and hindering workflow.", "solution": "To overcome the crashing issue during the execution of the Install Database Configuration in Aspen Watch Maker, a workaround has been identified. Follow the steps below:\nOpen the Command Prompt (cmd) as an administrator.\nTo do this, right-click on the Command Prompt icon and select \"Run as administrator.\"\nNavigate to the following directory:\n\nC:\\ProgramData\\AspenTech\\APC\\Performance Monitor\\etc\\cfg\nRun the InstallDBConfig.bat file.\nThis can be done by typing InstallDBConfig.bat in the Command Prompt and pressing Enter.\n\n\nBy following this workaround, Aspen Watch Maker will not crash, allowing for the successful completion of the Database Configuration installation.", "keywords": "Aspen Watch Maker, Install Database Configuration, crashing, IDBC", "reference": null}, {"id": "000101391", "problem_statement": "This Knowledge Base article aims to provide insights into the user’s utilization of Aspen One Process Explorer and address common questions regarding user access.", "solution": "Regarding the user’s utilization of Aspen One Process Explorer in IP.21, the number of tokens consumed depends on the Maximum Point Count that is set: every 4,000 points equals 1 token.\nTo check your IP.21 database maximum point count, navigate to the following:\nOpen the Infoplus.21 administrator.\nExpand the Infoplus.21 node on the left.\nRight-click on your server's name and select \"Properties.\"\nNavigate to the \"Record Utilization\" tab.\nAt the bottom of the tab, you will find the total number of points under \"License points.\"\n\n\n\n\n\nIn Aspen One Process Explorer, each token allows up to 3 users to connect to IP.21. If a 4th user wants to use Aspen Process Explorer, it will consume an additional token. Token usage can be monitored by following these steps:\nOpen Aspen SLM License manager.\nLook for the \"Token Used\" column, which displays the token usage information.\nIn WLMAdmin, you can find the token license feature \"SLM_RN_PME_PRCEXPL_TK\" and observe the number of tokens being consumed.\nEnsure that users' machines are configured to obtain the necessary license from the SLM server.\nUse the SLM License Manager to export a report containing user login/access details under the User tab, then Export to Excel.\n\n\nThe Usage log report provides the desired user login/access details and usage information.\nConfigure the Auto Upload Tool (AUT) on the SLM Server to submit logs for generating the Usage log report.\n\nFor instructions on configuring the Auto Upload Tool (AUT), refer to the following KB article video tutorial: AspenTech: Knowledge Base\nNote: Once the logs are uploaded, you can request access to the usage reports from the support website.\n\nSoftware License Manager (SLM) V12 Token Usage Logs and Reports User's Guide\nAspenTech: Knowledge Base", "keywords": "IP.21, Utilization, User access, Aspen One Process Explorer", "reference": null}, {"id": "000101500", "problem_statement": "Many customers want to implement animation on a text box object in Aspen Process Graphic Studio based on multiple conditions.", "solution": "To achieve this animation effect, follow the steps below:\n Open Aspen Process Graphic Studio and select the text box object you want to animate.\nEnable animation for the text box:\n\n Go to the Animation tab in the properties panel.\nCheck the \"Animation On\" checkbox to enable animation for the text box.\n
3. Define the conditions for the animation:\nSwitch to the Data tab in the properties panel.\nIn the Name field, enter the following ad hoc calculation:\n=IF (Analog1 > 30) OR (Analog2 > 29) OR (Analog3 > 31) THEN 1 ELSE 2\n Replace Analog1, Analog2, and Analog3 with the names of your tags or variables. This calculation will evaluate the conditions and return either 1 or 2 based on the result.\n 4. Configure the animation conditions and colors:\n\n Switch back to the Animation tab in the properties panel.\nUnder \"Conditions\", click the \"+\" button to add a new condition.\nSet the condition to ==1 (equals 1).\nChoose the desired animation effect, such as changing the font color to red.\nClick the \"+\" button again to add another condition.\nSet this condition to ==2 (equals 2).\nChoose the desired animation effect for this condition, such as keeping the font color black.\n 5. Save and apply the changes:\nSave your changes in Aspen Process Graphic Studio.\nApply the modified graphic to your process visualization.\n\nNow, when any of the three tags (Analog1, Analog2, or Analog3) meets the specified conditions, the text font color will be red. Otherwise, the font color will remain black.", "keywords": "Aspen Process Graphic Studio, Condition, Multiple, Animation", "reference": null}, {"id": "000099913", "problem_statement": "If users are upgrading an older release simulation to the Aspen HYSYS V12.1 release, it is quite likely that they will face a message stating an incompatibility between the RefProp property package and the Para-Hydrogen/Ortho-Hydrogen components. Such a message appears when they attempt to move from the Properties environment to the Simulation environment.\n \nThe above incompatibility message shouldn't appear, since these Aspen Properties components match the RefProp Aspen Properties package.", "solution": "Users must delete the current hydrogen compounds and replace them with H2-N3 for Ortho-Hydrogen and H2-PARA for Para-Hydrogen. These components are available in the Aspen Properties Databanks.\n \nFixed in version\n\nDefect 784746: Ortho/Para-Hydrogen is incompatible with the RefProp fluid package when upgrading to V12.1\nTargeted for Aspen HYSYS V14.0\n KeyWords\n\nUpgrading file, Incompatibility package, dihydrogen molecules, Aspen Properties, Component list", "keywords": null, "reference": null}, {"id": "000101347", "problem_statement": "How to use the Miller Charts method for pressure drop calculation in a Bend?", "solution": "The Miller Charts method is not available for Bends in Aspen Hydraulics. Nevertheless, using a Tee block (which supports Miller Charts) and ignoring one of its tee branches can be a workaround.\n\nIn the image below, TEE-101 works as an elbow because PIPE-104 is ignored.\n \nNote: We cannot set zero flow in one of the arms of TEE-101 because Hydraulics does not support having outlet flows as a constraint.", "keywords": "Miller Charts, Elbow, Branches, Zero Flow, Tee.", "reference": null}, {"id": "000101514", "problem_statement": "When accessing the AspenTech Process Data REST Service Sample Pages and using the SQL link, a query which includes a tag name in the results may not show the whole tag name.
Here is the URL, some example screenshots, and a sample query:\n\nURL if used on the server itself\nhttp://localhost/aspentech/ProcessData/Samples/Sample_Home.html\n\nURL if accessed remotely\nhttp:///aspentech/ProcessData/Samples/Sample_Home.html\n\nScreenshot of home page:\n\n\nScreenshot using the default select statement, where the query works okay (tag names are 24 characters or less):\n\n\nHere's a screenshot of another query where one of the tags returned does not show the entire name (only the first 24 characters are shown instead of the entire tag name):\n\n \nHere is the tag that is being requested whose complete name is not being shown (the name is 65 characters long):\n\n\nQuestion\nWhat can be done in this situation to get the entirety of the tag name to appear in the results?", "solution": "The solution is to add a \"width nn\" command after the name field is requested. The nn would be equal to or greater than the length of the longest tag name being requested. In this example the tag is 65 characters long and 65 is used for the width command (but IP_AnalogDef tags could be up to 256 characters long, so a number up to 256 could be used).\n\nHere's the adjusted query followed by a screenshot:\n Select name width 65, ip_description, ip_input_value from ip_analogdef where name like 'DA%'", "keywords": null, "reference": null}, {"id": "000101513", "problem_statement": "I have a block in my simulation created with a model using a domain and a distribution, such as this one:\n model pde\n x as domain;\n q as distribution1d (xdomain is x);\n for i in x.interior + x.endnode do\n $q(i) = -q(i).ddx;\n endfor\n q(0) = sin(3*time);\nend\n\n\n\nWhen I go to Tools, Variable Find... the variables q.ddx don't show up.", "solution": "The distribution partial derivatives are not complete variable objects (to save memory). You can see already that they are not like classic variables, as they don't have a \"spec\" attribute. So the trick is to uncheck all the boxes \"Free\", \"Fixed\", \"Initial\", \"Rateinitial\", as the partial derivatives don't have such a spec. Also uncheck Parameters.\n\n\n\nWhile one probably does not need to find such variables every day, there's one use case that might be relevant. Sometimes you want to reset the variables to the default (just to tidy up, or because the previous run didn't converge, etc.). In the Run menu, there's an \"Advanced Reset\" which can sometimes be useful. So if you want to reset all partial derivatives (ddx, d2dx2, etc.), you need to uncheck all specification options.", "keywords": null, "reference": null}, {"id": "000083527", "problem_statement": "The definition of property sets for liquid-liquid equilibrium properties requires careful definition by the user. In particular, the parameter KLL2 (Liquid-liquid K-value) and the mixture molar volumes are considered, as they are required for the calculation of the commonly reported ‘partition coefficient’ (check Articles ID 85570 and 85567).", "solution": "The partition coefficient is calculated from the distribution coefficient KLL2. Please see solution 85570 for details on this property. While KLL2 is based on activity coefficients (and, through the equilibrium condition x_i(L1) * gamma_i(L1) = x_i(L2) * gamma_i(L2), on mole fractions),\n KLL2_i = x_i(L1) / x_i(L2) = gamma_i(L2) / gamma_i(L1) (1)\nthe partition coefficient is usually defined based on molar concentrations:\n P_i = c_i(L1) / c_i(L2) (2)\nParticularly, values of this concentration-based partition coefficient are often reported in the literature.
Since these values usually refer to very dilute systems, a relationship with the phase molar volumes can be used:\n P_i = KLL2_i * VMX(L2) / VMX(L1) (3)\nwhere VMX(L1) and VMX(L2) are the molar volumes of the 1st and 2nd liquid phases, respectively. The parameters required to calculate the partition coefficient can be accessed in Aspen Plus streams as described below.\n 1. Create a property set (e.g. PS-1) in the Property Sets folder in the navigation pane.\n2. Include the parameter KLL2 in the Properties menu. In the ‘Qualifiers’ tab, choose the solute i as the component. Leave other fields empty. Since we are using the Calculator block later on, it is easier to define one Property Set for each individual property.\n3. Create a second property set (e.g. PS-2), add the parameter VMX. In the ‘Qualifiers’ tab, select ‘1st Liquid’ as Phase (other fields empty).\n4. Create a third property set (e.g. PS-3), with Qualifier ‘2nd Liquid’ as phase. Other fields can be left at default/empty status, and we will accept the default second liquid phase definition (densest phase).\n5. Create desired stream(s), define conditions and composition in the Mixed tab. In the Flash Options tab, choose ‘Vapor-Liquid-Liquid’ as ‘Valid Phases’.\n6. To display these properties on the Stream results, please follow the procedure given in Solution 85570.\n7. To perform the calculation given by Eq. (3), a Calculator block can be used. Add it to your simulation file, with variables (e.g. DISTRIBC, VMX1, VMX2) defined from the property sets above and PARTC as the result.\n8. In the ‘Calculate’ tab, introduce the following statement: F PARTC=DISTRIBC*VMX2/VMX1.\n9. Run the simulation and access Results for the Calculator block.\nTo find the partition coefficient, perform the calculation shown above (for example, using the Calculator block) and exemplified below.\n Calculation example\nThe distribution coefficient KLL2 for lactic acid in the octanol-water system (from a stream with a solute mole fraction of 0.1%) is predicted by the UNIQUAC thermodynamic method. Using the molar volumes for the mixtures in both phases (see Eq. (3), where VMX(L1) is the molar volume of the organic phase while VMX(L2) refers to the aqueous phase), the partition coefficient is calculated. The Results tab for the Calculator is shown below.\nSimilar results are obtained using the NRTL method. Both results are in agreement with reported values in the literature. For benzene in the octanol-water system, KLL2 is 692 and 716, using NRTL and UNIQUAC, respectively. The corresponding partition coefficient is very close to available experimental data. The arithmetic of Eq. (3) is summarized in the sketch below.
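A minimal numeric sketch of Eq. (3) and the Calculator statement (the input values are placeholders, not results from this article):\n\ndef partition_coefficient(kll2, vmx_aq, vmx_org):\n    # Eq. (3): P = KLL2 * Vm(aqueous) / Vm(organic),\n    # mirroring the Calculator statement PARTC = DISTRIBC * VMX2 / VMX1\n    return kll2 * vmx_aq / vmx_org\n\n# Placeholder inputs: KLL2 = 700, molar volumes in cc/mol\nprint(partition_coefficient(700.0, 18.0, 158.0))  # about 79.7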
", "keywords": "Calculator, KLL2, Partition Coefficient, Liquid-Liquid equilibrium, Properties, Thermodynamics, Solubility", "reference": null}, {"id": "000101269", "problem_statement": "What does the Mat_cost.dat file contain in Aspen OptiPlant 3D layout?", "solution": "Column 1, Material Name: This is the name of the material which is listed in the mat_prop.dat file.\nColumn 2, Cost: This is the relative cost factor for this material.", "keywords": "Mat_cost.dat, Aspen OptiPlant 3D layout", "reference": null}, {"id": "000101231", "problem_statement": "What does the Compinfo.[ans, bio, din] file contain?", "solution": "All Tables:\nComponent: The component type.\nSubcomponent: The component sub-type. If there is no 'subcomponent', the component is followed by data.\nTables 1-6:\nFirst Column: Nominal Diameter (mm)\nSecond Column: Valve Stem Diameter (mm)\nThird Column: Valve Stem Length (mm)\nFourth Column: Valve Run Length (mm)\nFifth Column: Valve Weight (kg)\nTable 7:\nFirst Column: Nominal Diameter (mm)\nSecond Column: Reducer Diameter in Direction Transverse to the Pipe (mm)\nThird Column: Vertical Diameter of Reducer (mm)\nFourth Column: Length of the Reducer along the Pipe (mm)\nFifth Column: Reducer Weight (kg)\nTable 8:\nFirst Column: Nominal Diameter (mm)\nSecond Column: Flange Diameter in Direction Transverse to the Pipe (mm)\nThird Column: Vertical Diameter of Flange (mm)\nFourth Column: Length of Flange along the Pipe (mm)\nFifth Column: Flange Weight (kg)\nTable 9:\nFirst Column: Nominal Diameter on One End of Tee Main Run (mm)\nSecond Column: Nominal Diameter on Other End of Tee Main Run (mm)\nThird Column: Branch Nominal Diameter (mm)\nFourth Column: Branch Length (Main Run Length = Twice Branch Length) (mm)\nFifth Column: Tee Weight (kg)\nTable 10:\nThis table is currently not used by the router.", "keywords": "Compinfo.[ans, bio, jis, din]", "reference": null}, {"id": "000101236", "problem_statement": "What does the OptiPlant Compclr.dat file contain?", "solution": "First Row, UPSTREAM: Specifies the upstream component type and subtype.\nOther Rows, DOWNSTREAM: Specifies the adjacent downstream component for the previously specified upstream component.\nColumn 2, X Nom Dia of Pipe: Will use a multiple of nominal diameter to determine the distance between components.\nColumn 3, Distance in mm: Will list the distance between the components in mm.", "keywords": "Compclr.dat", "reference": null}, {"id": "000101235", "problem_statement": "What does Equipmentspacing_Units.csv contain?", "solution": "Plot-Plan Layout Rules or Equipment Layout Rules enable you to check the spacing between any objects modeled in the program for conformance with the project’s safety and maintenance standards. OptiPlant checks the spacing between the equipment and structures placed in the plot plan against the PIP standard values for equipment spacing. These PIP standard spacing rules are provided in the EquipSpacing_Units.csv file.\nNote: These standard values must be double-checked depending on the project requirements.\nAs soon as a new model is started/opened, an Excel file with the name EquipSpacing_Units is automatically placed inside the project folder. The Excel file has all the details of the categories and minimum spacing values.\n\n You can add new categories or edit existing ones and change the minimum spacing distance as per the requirements anytime during the project.
If you assign a category to any piece of equipment, it is automatically updated in the Equipment ID column of the Excel file.", "keywords": "Equipmentspacing_Units.csv", "reference": null}, {"id": "000101240", "problem_statement": "What does the Flange.[ans, bio, jis, din] file contain?", "solution": "First Row: Lists Flange Ratings.\nFirst Column: Lists Nominal Diameter (mm).", "keywords": "Flange.[ans, bio, jis, din]", "reference": null}, {"id": "000101227", "problem_statement": "What does the Branch_.dat file contain?", "solution": "Column 1, Nom Dia of Header: This lists the nominal diameter of the header line in inches.\nColumn 2, Nom Dia of Branch: This lists the nominal diameter of the branch line in inches.\nColumn 3, Preferred Branch Code: This lists the branch code to be used by PDS to determine the type of connection for the sizes indicated in columns 1 and 2. The following is a list of the branch codes used in column 3:\n6Q3C22 ---> Tee\n6Q3C24 ---> Reducing branch tee\n6Q3C73 ---> Weldlet\n6Q3C76 ---> Nippolet\n6Q3C82 ---> Reinforcing weld\nColumn 4, Connect Point: This determines the PDS connect point at the type of tee.", "keywords": "Branch_Spec.dat", "reference": null}, {"id": "000101271", "problem_statement": "What does the FoundationData.csv file contain in Aspen OptiPlant 3D layout?", "solution": "This file contains the below information:\nTable 1: This table gives the information about different soil bearing capacities based on the class of soil.\nFirst column shows the Class of Soil.\nSecond column shows the Soil bearing capacities in ksf.\n\nTable 2: This table gives the information about the foundation extension for round bottom equipment.\nFirst column shows the Extension type.\nSecond column shows the Minimum length of extension around round bottom equipment in inches.\nThird column shows the Maximum length of extension around round bottom equipment in inches.\n\nTable 3: This table gives the information about the foundation extension for flat bottom equipment.\nFirst column shows the Extension type.\nSecond column shows the Minimum length of extension around flat bottom equipment in inches.\nThird column shows the Maximum length of extension around flat bottom equipment in inches.\n\nTable 4: This table gives the information about the Load Range on Footing/Foundation Q and the corresponding Minimum D.\nFirst column shows the Load range on footing/foundation in Kipf.\nSecond column shows the Minimum embedment depth of footing/foundation in feet.\n\nTable 5: This table gives the information about the Elevation above the Frame Building/Pipe rack up to which Equipment should be considered for Weight calculation.\nFirst column shows the Type of Equipment.\nSecond column shows the Elevation from top in feet.\n\nTable 6: This table gives the information about the variations of N60 with depth to calculate the Average N60.", "keywords": "FoundationData.csv, Aspen OptiPlant 3D layout", "reference": null}, {"id": "000101270", "problem_statement": "What does the Mat_weig.ans file contain in Aspen OptiPlant 3D layout?", "solution": "This file contains the material weight information for bare pipe, pipe filled with water, pipe with insulation, and pipe filled with water and insulation, as per the ANSI standard.\nThe material weight from this file is read for calculating the pipe weights during modularization.\nThe material weight from this file is read for calculating the pipe weights during foundation calculations.\nThe material weight from this file is read for calculating the pipe
weights during member sizing calculations.", "keywords": "Mat_Weig.Ans, Aspen OptiPlant 3D layout", "reference": null}, {"id": "000087949", "problem_statement": "In the cold properties utility, why is the Flash Point value different when changing from the Aspen Pensky-Martens to the Aspen Tag method?", "solution": "Both the Aspen Pensky-Martens and Aspen Tag methods calculate the flash point temperature using modified rigorous K-values and a closed flash algorithm. The solution algorithm for each is summarized below:\n 1. The Aspen Pensky-Martens and Aspen Tag methods calculate the Flash Point via a Bubble Point flash.\n 2. To calculate the Bubble Point, modified K-values are estimated based on the following rules:\n - If a component is non-combustible then K-value = 0\n- If a component is combustible then:\nK-value = K-value from Property Package * MW / KFACT,\nwhere KFACT = 1.03 for Pensky-Martens and 1.30 for Aspen Tag\n- If a component is Methane then:\nK-value = 700 / KFACT for Pensky-Martens and 900 / KFACT for Aspen Tag\n 3. The converged value for the Bubble Point temperature is reported as the Flash Point. (A toy sketch of the K-value modification in step 2 follows.)
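Purely to make step 2 concrete, an illustrative Python sketch (the component data are made-up placeholders, and the real base K-value comes from the property package):\n\ndef modified_k(k_pkg, mw, combustible, is_methane=False, method=\"PM\"):\n    # KFACT = 1.03 for Pensky-Martens (\"PM\"), 1.30 for Aspen Tag (\"TAG\")\n    kfact = 1.03 if method == \"PM\" else 1.30\n    if not combustible:\n        return 0.0                # non-combustible components: K = 0\n    if is_methane:\n        base = 700.0 if method == \"PM\" else 900.0\n        return base / kfact       # methane special case\n    return k_pkg * mw / kfact     # combustible components: K * MW / KFACT\n\n# Placeholder: a combustible component with K = 0.8 and MW = 58\nprint(modified_k(0.8, 58.0, combustible=True))  # about 45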
According to ASTM, the TAG method covers the determination of the flash point of liquids with a viscosity below 5.5 cSt at 40C, or below 9.5 cSt at 25C, and a flash point below 93C. The Pensky-Martens method is used for liquids with a viscosity of 5.5 cSt or higher at 40C, or a viscosity of 9.5 cSt or higher at 25C, or a liquid that may contain suspended solids or have a tendency to form a surface film under test conditions. In general, it is basically for heavier fluids than the TAG method. The flash point range for this method is 40C to 360C.\n The references for these two methods are listed below:\n J.D. Seader and Ernest J. Henley, Separation Process Principles, p. 281, Wiley and Sons, 1998.\n M.R. Riazi, private communication, 1985.\n M.R. Riazi, API Databook, 5th Ed., procedure 2B7.1 (1986).", "keywords": "cold properties, flash point, Pensky-Martens, Tag", "reference": null}, {"id": "000101488", "problem_statement": "How to solve HTTP Error 500.19 - Internal Server Error in AUP?", "solution": "The following error appears when accessing Unified.\n\n\nAdd two new logins in SSMS with “sysadmin” privileges.\n\n\n\nNavigate to the path below as administrator:\nC:\\Windows\\System32\\inetsrv\\config\nMake sure the highlighted section below has the “allow” and “deny” permissions set respectively.\n\n\nRestart the Aspen Unified Service (stop & start).\n\n\nRestart the IIS Service from the IIS service manager (stop & start).\n\n\nReopen Unified.", "keywords": "HTTP Error 500.19, AUP, error, Aspen Unified", "reference": null}, {"id": "000101487", "problem_statement": "How to troubleshoot an MBO model that doesn't converge?", "solution": "Check that all input properties required for derived property calculations have initial values in the Beginning Inventories table/dialog.\nIf the optimizer is not finding a solution and is reporting infeasibilities in property balances (PBAL), try to re-run the optimization using a different BML correlation (e.g. try using one of Aspen’s ABML correlations instead of a custom client UBML correlation).\nIf the optimizer is not finding a solution and the objective function value is negative (and large), try minimizing the initial infeasibilities. You can do this by modifying Beginning Inventories values in those tanks that are below or above the working inventory limits. Then re-run the simulation to see if the convergence problem disappears.\nIf the optimizer is not finding a solution, click the \"Default\" button of the optimizer settings dialog to switch to the recommended optimizer settings and see if the convergence problem disappears.\nIf the optimizer is taking too long to solve, try increasing the MIP_OPTCR to values as large as 0.10 and see if a quality solution can be obtained in reasonable time.", "keywords": "Convergence, MBO, troubleshooting", "reference": null}, {"id": "000101486", "problem_statement": "How to compare the settings of 2 PIMS models?", "solution": "Right-click Model Settings, then select Non-Default Settings on any PIMS model.\n\nYou can compare this table between the models to check for any model setting differences.", "keywords": "Model settings, compare models, non-default settings", "reference": null}, {"id": "000101485", "problem_statement": "How to set up 'Buy it or pay it' logic in PIMS? For example:\nI must buy at least 10 000 BPD of ANS\nIf I don’t, I need to pay 25% more for each unit I don’t buy\nThen,\nIf I buy 10 000 BPD, I’ll pay 10 000 * 1\nIf I buy 10 001 BPD, I’ll pay 10 001 * 1\nIf I buy 9 999 BPD, I’ll pay (9 999 * 1) + (1 * 1.25)", "solution": "First, we want to save the volume missing to reach the 10 000 BPD minimum. So, we want to distinguish between the purchases that don’t pay a penalty and those that do; hence we will create an ALTTAG. For reference, I will add a penalty to the crude ANS of the Volume Sample if we buy less than 10.\n\nThen we buy it for the same cost, we add the minimum volume we can buy to avoid a penalty as the MAX for ANS, we keep the cost constant, and we define the priority as follows.\n\nThen, we create a variable that saves the max of 10 (7 characters of your liking). For best practice, reference the max 10 value in table BUY.\n\nAnd we go to table ROWS. We need to create a row starting with E to show that it is equal to 0. And we will save the slack. An entry in column 1 will create a variable with the same name as the equation that saves how much is missing for the equation to be equal to 0.\nPURCANS = 10 or smaller. “Small number”\nLIMANSL = always equal to the MAX in table BUY for ANS, in this case, 10. “Big number”\nSLACK = adds a positive +1*EPURANS to the left side of the equation.\nHence, the equation EPURANS should be\n+1*EPURANS +1*PURCANS -1*LIMANSL = 0\nNotice that +1*PURCANS (“small number”) -1*LIMANSL (“big number”) will always be 0 or a negative number, so EPURANS will always be 0 or positive, and it saves the difference between the purchases of ANS and the MAX restriction.\nThen, we will buy that difference at the penalty cost we need to pay. Hence, we write a UBAL and 3 characters of your liking (XXX), and we add a positive 1 for the column EPURANS we just created. Positive because the utility that we buy (-) can be consumed (+) here.\n\n\nAnd in table UTILBUY we set the cost. If it is 25% more than the original cost of ANS, we reference the table BUY. (The resulting cost arithmetic is sketched below.)
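A plain-Python sketch of the cost arithmetic this setup produces (numbers from the example above; PIMS itself builds the equation from tables BUY, ROWS, and UTILBUY):\n\ndef ans_total_cost(purchased_bpd, min_bpd=10000, base_cost=1.0, penalty_cost=1.25):\n    # EPURANS-style slack: how far purchases fall short of the minimum\n    shortfall = max(0.0, min_bpd - purchased_bpd)\n    return purchased_bpd * base_cost + shortfall * penalty_cost\n\nprint(ans_total_cost(10000))  # 10000.0\nprint(ans_total_cost(10001))  # 10001.0 (no penalty above the minimum)\nprint(ans_total_cost(9999))   # 10000.25 = 9999*1 + 1*1.25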
Don\u2019t add a MIN or MAX here, control how much ANS you are buying through table BUY.", "keywords": "Advanced constraints, buy it or pay it, utilities to impose variable penalties", "reference": null}, {"id": "000101849", "problem_statement": "Sometimes when trying to open the Control Objectives of any MV or CV in the ATcontrol page from an operator machine from the DCS, a similar pop-up message to the one below can appear:\n\u201cThis operation has been cancelled due to restrictions in effect on this computer. Please contact your system administrator.\"", "solution": "To gain access to the Control Objectives, there are two options:\n1. Use the PCWS \u201cEnforce single-window environment\u201d option in the Preferences tab of PCWS.\nThis causes all popup windows for the PCWS web session to open within a floating frame within the browser window instead of opening popup windows.\n2. Override the group policy that prohibits popup windows in the console.\nMake sure the \u201cDisable Open in New Window menu\u201d option is disabled (not configured) from the Local Group Policy Editor, just as in the picture below.\nTo do so, please open the Group Policy Editor from your Windows search icon, then go to User Configuration> Administrative Templates> Windows Components> Internet Explorer> Browser Menu> Disable Open in New Window Menu Option.\nOnce this option is disabled, the message should disappear, and you should be able to access again to the control objectives.", "keywords": "PCWS, ATcontrol, control objective, web viewer", "reference": null}, {"id": "000101827", "problem_statement": "With the introduction of the updated AspenAPC webpage in v12.1, there is a new tab called 'Flowsheet'. This new feature allows for a view of the process that is different from the standard tabulated view.", "solution": "Attached is a video walkthrough of how to create and save a Flowsheet within the AspenAPC webpage.\n\nKEYWORDS\nAspenAPC, Flowsheet, v12.1 New Feature, graphics, v14, v14.2, PCWS, walkthrough, tutorial", "keywords": null, "reference": null}, {"id": "000101234", "problem_statement": "What does DefaultEquipmentCategories.csv file contain?", "solution": "Each equipment present in the library is assigned with some default category based on the type of equipment. You can change this category by selecting any of the categories present in the database. 
These default categories are read from the data file DefaultEquipmentCategories.csv present in the :Program Files (x86)\\AspenTech\\Aspen OptiPlant \\Data folder.\nIf you want to change the default categories for your projects, you can directly change this in DefaultEquipmentCategories.csv.\nDEFINITIONS:\nCOLUMN A, Equipment TYPE.\nCOLUMN B, Default Category.", "keywords": "DefaultEquipmentCategories.csv", "reference": null}, {"id": "000101473", "problem_statement": "Aspen Unified V14 post-installation configuration.", "solution": "This video guide will walk you through the default configuration steps for the V14 Aspen Unified SQL databases using the Aspen Unified Configuration Manager and Microsoft SQL Server Management Studio.\n\n\n\nFor additional information or questions about the SQL server and Specific Account Requirements, please refer to the attached Installation Guide.", "keywords": "AU, AUP, AUS, AURA, AU GDOT, databases, SQL server, configuring SQL, SSMS", "reference": null}, {"id": "000101479", "problem_statement": "Which SQL Server do I need to install to run Aspen OnLine?", "solution": "Aspen OnLine includes a service which performs some tasks. This includes accessing a database which is used to cache data retrieved from a historian. To use the Aspen Properties Enterprise Database with several Aspen Engineering products on a Windows Server operating system, an SQL server is required.\n Version SQL Server\nAOL V11/V12/V12.1 SQL Server 2014 Express\nAOL V14 SQL Server 2019 Express\n\nThe Aspen OnLine service, by default, runs under an account with Administrator access. It is possible to configure the service to use the Local System account instead, which will allow any user to use the service.\nRun the service as a user with local administrator privileges. The database will use the LocalDB built into Windows without any additional installation.\nLet the service run as Local System. The Local System user cannot use LocalDB, so you must install SQL Express 64-bit for this case. When you install Aspen OnLine, the version used is set to the earliest supported installed version. To change this, run Aspen OnLine Database Selection from the Windows Start menu.\nNote: If the Aspen OnLine service runs as any account other than yours, then it will not be able to run any projects stored in your user-specific folders (anything under C:\\Users\\, including the Desktop and Documents folders); doing so will lead to Access Denied errors. Avoid storing projects in these locations.\n\nTo confirm and, if necessary, change the account which runs the Aspen OnLine service:\nUsing an account with administrative privileges, open the Control Panel. Search for Services and run it.\nIn the Name column, locate and click Aspen OnLine V11/V12/V12.1/V14.\nIn the Status column in this row, the word Started will appear if it is started. If it is already started, on the left side of the window click Stop to stop the service.\nIn the Log On As column in this row, Local System appears if the service is running as Local System. To change this, right-click anywhere in the row and click Properties.\nClick the Log On tab, and click This account, then either enter the account name or click Browse and search for an account. The account selected must have local administrator privileges.\nEnter the password for the account in the Password and Confirm Password boxes to confirm it and click OK.\nAt the left side of the window, click Start to start the service.\nIn the Startup Type column, the word Automatic should appear.
If it does not, double click it and select Automatic or Automatic (Delayed Start) in the Startup type field in the dialog box that appears.\nNote: All versions of the service are mutually exclusive. You cannot start one while another is running.", "keywords": "SQL Server, Version, AOL, Services, databases.", "reference": null}, {"id": "000101478", "problem_statement": "Best practices to converge an Adjust block.", "solution": "The Adjust operation iteratively modifies the adjusted variable within the specified range until the calculated variable matches the target value within a specified tolerance. If you're having an issue with the range you've specified in an Adjust operation, it's usually either because the true value lies outside the range you've specified or because the range is so wide that the solver can't find a good solution.\n\nHere are some steps you can take to fix this:\n\n1. Check the Range: Ensure the range you specified includes the value you're looking for. If your Adjust operation is resulting in a negative flow, this could mean your lower limit should be set to zero or a small positive value instead of a negative value.\n\n2. Narrow Down the Range: If your range is too wide, the solver might struggle to find a solution. Try narrowing down the range. This may involve making an educated guess about what the value should be based on your knowledge of the system you're simulating.\n\n3. Start with a Good Initial Guess: Provide an initial guess for the adjusted variable that is close to the expected value. This will guide the solver to the solution more easily.\n\n4. Modify the Adjustment Step Size: The step size of the adjustment can affect convergence. If the step size is too large, the solver may overshoot the solution. If the step size is too small, the solver may take a long time to converge or may be trapped in a local minimum. You can try to adjust this value for better results.\n\n5. Check the Convergence Criteria: If the criteria are too strict, it may lead to non-convergence. Relaxing the convergence criteria a little can sometimes help, but you should also make sure that the results are still accurate enough for your purposes.\n\nRemember, each case is different and may require different adjustments, and interpreting the physical meaning of the results relies on the user's expertise. (A generic sketch of this kind of bounded iteration follows.)
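The advice above maps directly onto what any bounded root search needs; a generic illustration (not AspenTech code) of iterating an adjusted variable until a calculated value matches its target within tolerance:\n\ndef adjust(calc, target, lo, hi, x0, x1, tol=1e-6, max_iter=50):\n    # Secant-style search for calc(x) == target, with x clamped to [lo, hi]\n    f0, f1 = calc(x0) - target, calc(x1) - target\n    for _ in range(max_iter):\n        if abs(f1) <= tol:\n            return x1                      # converged within tolerance\n        # secant step; a good initial guess keeps this step well-behaved\n        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)\n        x1 = min(max(x1, lo), hi)          # clamp to the specified range\n        f0, f1 = f1, calc(x1) - target\n    raise RuntimeError(\"Did not converge; widen the range or improve the guesses\")\n\n# Example: find the flow where flow**2 / 20 == 50, searching in [0, 100]\nprint(adjust(lambda x: x**2 / 20, 50.0, 0.0, 100.0, 10.0, 20.0))  # about 31.62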
", "keywords": "adjust block, convergence, solver, tolerance, negative flow.", "reference": null}, {"id": "000081862", "problem_statement": "After using Retrieve Parameters from the Tools section of the Home ribbon, or generating a Property report using the \"Property parameters' descriptions, equations and sources of data\" (PARAM) option on the Setup | Report Options | Property sheet, I do not see the expected parameter (for example, CPLDIP).\nWhy is it missing from the report?", "solution": "The Property report shows only the current parameters used in the Property Methods selected in the simulation.\nIn this case, no property methods use a liquid reference state; therefore, CPLDIP, the liquid heat capacity model parameter, is not used in the enthalpy calculations and does not appear in the property report. To correct this, select a property method such as WILS-LR or modify a property to use a liquid reference state.\n\nTo generate a report of all parameters in the databanks, on the Setup | Report Options | Property sheet, select \"All physical property parameters (in SI units)\" and export a report file.", "keywords": null, "reference": null}, {"id": "000081296", "problem_statement": "How is it possible to model liquid-liquid equilibrium for systems with electrolytes? Are there any tips for getting this to work?", "solution": "Liquid-liquid (LLE) systems can be modelled with ElecNRTL and ENRTL-RK, but there are several important steps that must be taken.\nIn earlier versions only the Apparent approach was supported; however, LLE now works with both the True and Apparent component approaches. It is also possible to model LLE with electrolytes using an RGibbs reactor unit operation block.\nThe property WTRUE can be used to tabulate the true species mass flow rates. For a species such as NaOH that is normally dissociated in water, tabulating WTRUE should show that the ions are present in the water phase and that the molecular species are present in the organic phase that has a low dielectric constant.\n\nSteps:\nIf using the apparent approach, the chemistry may not contain any complete dissociation reactions. Replace them with equilibrium reactions with a large dissociation constant. Usually A = 5.0 works well (A is the natural log of K).\nAfter making this change, molecular species such as NaOH or NaCl may exist in trace quantities in other parts of the flowsheet. You may have to specify some additional property parameters, such as DHFORM if the chemistry is used in an RStoic block. The heat of formation should have no quantitative effect on the simulation; it can be estimated or just set to zero.\n The activity coefficient parameters should predict phase splitting between the water and the organics in the absence of salt.\nThe interactions between water and the organic are controlled by the NRTL/1 and NRTL/2 parameters. These should be retrieved from the LLE-ASPEN databank, regressed to LLE data (in the absence of salts), or estimated using UNIF-LL.\n The solvents should have dielectric constant parameters (CPDIEC).\nWithout CPDIEC, the dielectric constant defaults to the dielectric constant of water. This is essentially always too big. For comparison, the dielectric constant of water at room temperature is about 80. Methanol has a dielectric constant of 32. Acetone and butanol are about 20. Phenol is 10. Non-polar organics have dielectric constants of near 2.\n Some databank compounds have CPDIEC parameters. The user should check whether these parameters exist for the solvents used in the simulation, and if they do not, should supply parameters.\n Having too high a dielectric constant makes it too easy for ions to go into the organic phase and for organics to dissolve in the aqueous brine phase.\n Pair parameters should be supplied between the non-aqueous solvents and ion pairs.\nWhen pair parameters are missing from the databanks, ElecNRTL fills in default values. The defaults are:\n 10 for solvent with ions\n-2 for ions with solvent\n In general, these are too favorable (they are appropriate for a good solvent for ions, such as methanol).
For ions that do not go into the organic phase (and for solvents that are salted out), better values are:\n 5 for solvent with ions\n5 for ions with solvent\n These can be regressed if there is ternary (water/salt/solvent) LLE data available.\n The user may have to override databank values for electrolyte vapor pressures.\nWhen calculating phase equilibrium, Aspen equates fugacities. The vapor pressure of the components is used in this calculation.\n Salts and ions have vapor pressures set to zero by setting the first PLXANT parameter to -1D20. This is a flag that tells Aspen that the component is non-volatile.\n When computing liquid-liquid equilibrium, this low value of vapor pressure can cause numerical difficulties for some components such as NaOH. To get around this, PLXANT/1 should be set to a much higher value, such as -15. This still makes the vapor pressure very low (PLXANT/1 is the natural log of the vapor pressure in N/m^2), but it avoids the numerical difficulties that come from using -1D20.\n When the non-volatile flag is not set (when PLXANT/1 is not equal to -1D20), Aspen requires a heat of vaporization for the compound. This parameter check can be passed by setting DHVLWT/1 to zero. (Because these compounds do not go into the vapor, this has no effect on calculated enthalpies.)\n Convergence Tips:\nIt may be necessary to increase the maximum number of flash iterations. This can be done globally on the Setup | Simulation Options | Flash Convergence sheet by increasing the Maximum iterations in the Flash options section.\nSimplify the chemistry to remove irrelevant reactions.\nIf phase splitting calculations in a RadFrac column fail to converge, consider setting Maximum liquid-liquid phase split iterations in the inside loop (LL-MAXIT) to 0 on the RadFrac | Convergence | Algorithm sheet in order to completely suppress these calculations, or setting the Liquid-liquid phase splitting method (LL-Meth) to the Equation solving method EQ-SOLVE on the RadFrac | Convergence | Basic sheet. Note that LL-Meth=EQ-SOLVE may yield an incorrect result, because it only equates the fugacities of the two phases; it does not require that the Gibbs free energy is at a minimum.\nIf a Reactor block is used, confirm that ionic reactions do not occur in the vapor phase.\n\nExamples:\n\nKnowledge Document 67242 - water (H2O), carbon tetrachloride (CCl4), and sodium hydroxide (NaOH) \n\nKnowledge Document 56816 - water (H2O), 1,2-dichloropropane (C3H6Cl2), sodium chloride (NaCl), and nitrogen (N2)\n KeyWords\nelectrolyte liquid-liquid equilibrium LLE ElecNRTL", "keywords": null, "reference": null}, {"id": "000101471", "problem_statement": "What is the Mass Density Distribution of Solids in the plot available for the Particle Size Distribution (PSD)?", "solution": "The Mass Density Distribution Plot shows the mass fraction of one size interval relative to the width of the size interval. For each interval, the Mass Density Distribution is plotted on the Y axis; thus, the units for the Y axis on this plot are the inverse of the X axis units. Mass density plots show the most realistic representation of relative values of particle size. The peak of the plot corresponds to the modal value of the distribution. A logarithmic representation of the mass density plot should only be used for analysis, but not for visualization of trends and values on the plot, because the graph gives a misleading representation. (A small numeric illustration of the definition follows.)
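A quick numeric sketch of the definition (mass fraction of an interval divided by the interval width; the PSD numbers are made up):\n\nedges = [0.0, 50.0, 150.0, 400.0]   # size interval edges in microns\nmass_frac = [0.2, 0.5, 0.3]         # mass fraction in each interval (sums to 1)\n\n# Mass density distribution: fraction / interval width (units: 1/micron)\ndensity = [f / (b - a) for f, a, b in zip(mass_frac, edges, edges[1:])]\nprint(density)  # about [0.004, 0.005, 0.0012] -> the middle interval is the mode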
Convergence Tips:\nIt may be necessary to increase the maximum number of flash iterations. This can be done globally on the Setup | Simulation Options | Flash Convergence sheet by increasing the Maximum iterations in the Flash options section.\nSimplify the chemistry to remove irrelevant reactions.\nIf phase splitting calculations in a RadFrac column fail to converge, consider setting Maximum liquid-liquid phase split iterations in the inside loop (LL-MAXIT) to 0 on the RadFrac | Convergence | Algorithm sheet in order to completely suppress these calculations, or setting the Liquid-liquid phase splitting method (LL-Meth) to the Equation solving method EQ-SOLVE on the RadFrac | Convergence | Basic sheet. Note that LL-Meth=EQ-SOLVE may yield an incorrect result, because it only equates the fugacities of the two phases; it does not require that the Gibbs free energy is at a minimum.\nIf a Reactor block is used, confirm that ionic reactions do not occur in the vapor phase.\n\nExamples:\n\nKnowledge Document 67242 - water (H2O), carbon tetrachloride (CCl4), and sodium hydroxide (NaOH) \n\nKnowledge Document 56816 - water (H2O), 1,2-dichloropropane (C3H6Cl2), sodium chloride (NaCl), and nitrogen (N2)\n KeyWords\nelectrolyte liquid-liquid equilibrium LLE ElecNRTL", "keywords": null, "reference": null}, {"id": "000101471", "problem_statement": "What is the Mass Density Distribution of Solids in the plot available for the Particle Size Distribution (PSD)?", "solution": "The Mass Density Distribution Plot shows the mass fraction of one size interval relative to the width of the size interval. For each interval, the Mass Density Distribution is plotted on the Y axis; thus, the units for the Y axis on this plot are the inverse of the X axis units. Mass density plots show the most realistic representation of the relative values of particle size. The peak of the plot corresponds to the modal value of the distribution. A logarithmic representation of the mass density plot should only be used for analysis, not for visualization of trends and values, because it can give a misleading impression of the distribution. This information can only be shown graphically and is not reported as a table.
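As a worked illustration (the numbers are hypothetical, not from the article): if the size interval from 100 to 200 microns holds a mass fraction of 0.25, its mass density is 0.25 / 100 microns = 0.0025 1/micron, and that is the value plotted on the Y axis for the interval.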
", "keywords": null, "reference": ": VSTS 955005"}, {"id": "000101470", "problem_statement": "In Column Analysis, how is the Interfacial area factor handled?\n\nFor example, when using the mass transfer correlation Brf-85 that uses the packing surface area as interfacial area, the ratio of interfacial surface area or specific interfacial surface area to packing surface area should be the same as the specified interfacial area factor. In V12 and higher, there is a linear interpolation from the specified interfacial area factor at the top stage to a factor of 1 at the bottom stage.\nThis behavior was not present in V11 or older versions.", "solution": "In RadFrac blocks with Column Analysis, there is an additional parameter Interfacial bottom stage area factor on the Rate-based Setup | Sections sheet. The Interfacial area factor now applies to the top stage and the value of the factor is linearly interpolated between these values for stages between the top and bottom. This change was introduced in V12 and the bottom stage factor defaults to 1. To set a constant value for the section (and preserve the behavior of V11 and earlier), be sure to set both parameters to the same value. \n\nIf users want to have the same results as in V11 and earlier, on the Rate-based Setup | Sections sheet make \"Interfacial bottom stage area factor\" the same as \"Interfacial area factor\":", "keywords": null, "reference": ": VSTS 953055"}, {"id": "000096842", "problem_statement": "A connection between IP.21 and an OPC UA server involves trusting each other's certificates (read, write and unsol). The vendor usually provides the location of their certificates.\nIf the vendor doesn\u2019t know where these certificates are located, please follow the steps below.", "solution": "1. On the IP.21 side, run Notepad as administrator and open cimio_opcua_read.log4net.Config.xml, which is located in C:\\Program Files (x86)\\AspenTech\\CIM-IO\\io\\cio_opc_uai \n\n\n 2. Enable \u201cDebugging logs\u201d\n\n3. Open CIMIO INTERFACE MANAGER and start the interface that is pointing to the target OPC UA Server.\n4. You need to create a logical device; you can do it by following the KB article How to add a logical device in Aspen InfoPlus.21 using the Cim-IO IP21 Connection Manager or by using CIMIO TEST API and following the KB article How to use the CIM-IO Test API to add a logical device?\n5. Once the logical device has been created, run Test API option 9 to try to read data from the OPC UA Server; this will reject the certificates and place them inside the rejected certificates folder so they can be trusted.\n\n \n\nThe following error is shown after running the Test API:\nCIMIO_USR_GET_REPLY, Error came from GET reply packet CIMIO_OPCUA_DEVICE_ERROR, Device failure.\n\n6. On Local disk C | Program Data, three log files have been created.\n \n\nThis means that the CIMIO has rejected the software certificates as expected and placed them inside C:\\ProgramData\\OPC Foundation\\RejectedCertificates\n\n7. Open the OPC UA Configuration tool | On Manage Security, select cimio_opcua_read on Application to Manage \n \n8. Click on \u201cSelect Certificate to Trust\u201d; on Store Path, point to C:\\ProgramData\\OPC Foundation\\RejectedCertificates\n\n9. Click on the certificate of the target OPC UA server\n10. Click OK \n\n 11. Repeat steps 8 to 10 for cimio_opcua_unsol and cimio_opcua_write\n12. Restart the interface; the read, write and unsol processes will turn green\n13. Run Test API and select option 9; you should now be able to read data.", "keywords": "CIMIO \nOPC UA\nOPC Configuration Tool\nTest API\nCertificates\nRejected Certificates", "reference": null}, {"id": "000101467", "problem_statement": "What is the equation Aspen is using for calculating the PSV rated capacity?", "solution": "Rated flow is not expected to depend on mass flow. Mass flow will typically be the required flow for the particular scenario (and it's a user-specified value), whereas rated flow is the maximum that can flow through the valve based on the fluid, orifice area, and valve characteristics (it can be calculated or user-specified depending on the option in the source editor).\n\nMass flow is used for most parts of the network. Rated flow is used in lines flagged as tailpipes when the appropriate option is selected in the calculation settings.\n\nAspen Flarenet uses the equations in API 520 Part I for the rated flow analysis. The specific equation depends on the fluid state and option selections; see the Methods tab.\n\nIn Aspen Flare System Analyzer the rated flow will be calculated from the Maximum Allowable Working Pressure (MAWP), valve type, orifice area, valve count, upstream pressure, upstream temperature and sizing method; the flowrate will be automatically updated after any change in these values.
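For orientation, the API 520 Part I vapor-flow relationship can be evaluated directly. The sketch below (Python) assumes critical flow, US customary units, and typical coefficient values; it follows the published API 520 equation, not AspenTech source code, so treat the names and defaults as illustrative:\n\nimport math\n\ndef rated_vapor_flow(A_in2, P1_psia, T_R, M, Z=1.0, k=1.4, Kd=0.975, Kb=1.0, Kc=1.0):\n    # C coefficient from the API 520 Part I vapor sizing equation (US customary units)\n    C = 520.0 * math.sqrt(k * (2.0 / (k + 1.0)) ** ((k + 1.0) / (k - 1.0)))\n    # rearranged sizing equation: W = C * Kd * P1 * A * Kb * Kc / sqrt(T * Z / M), W in lb/hr\n    return C * Kd * P1_psia * A_in2 * Kb * Kc / math.sqrt(T_R * Z / M)\n\nprint(rated_vapor_flow(A_in2=1.287, P1_psia=264.7, T_R=610.0, M=17.0))  # example: a J orifice on an ammonia-like gas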
", "keywords": "MAWP, Rated Capacity, Sizing, Mass Flow", "reference": "https://esupport.aspentech.com/S_Article?id=000094792\nhttps://esupport.aspentech.com/S_Article?id=000097202"}, {"id": "000101294", "problem_statement": "What are the Best Practices for Migrating Aspen Utilities Planner Models to Newer Versions? With Specific Comments for Migrating to V12.x", "solution": "Installation\nIf possible, install the new version of Aspen Utilities Planner on a different machine from the one with the existing working models. Doing this will allow you to run both versions on the same model to verify that the model upgrade has gone as planned.\n\nInstallation of patches\nAfter installing Aspen Utilities Planner from the Aspen Engineering Suite, ensure that you apply any Cumulative Patches and the most recent Emergency Patches. Several fixes related to model library additions and enhancements have been made since the original release of these software versions.\n\nYou can find the most recent patches by logging into the AspenTech Support Website (AspenTech Support Center: Login) then going to the Browse for Patches area. Select \u201cProcess Engineering\u201d for the Family and \u201cAspen Utilities Planner\u201d for the Product. Either let the Version be \u201cAll\u201d or choose V12.0 or V12.1, depending on which version you are using.\n\nClick on \u201cGo\u201d\n\n\nPatches will be listed with the most recent version patches at the top. Remember to look for both Cumulative Patches (CP) and Emergency Patches (EP). If both a CP and an EP exist, apply the CP first, then the EP. The patch will have more detailed instructions.\n\nGeneral Comments about version migration with Aspen Utilities Planner\nAn off-line AUP simulation is made of various files:\n Properties: *.appdf / *.aprpdf\nOptimization Custom Constraints: *.mos\nDatabases: *.mdf (or for older versions *.mdb)\nSimulation File: *.auf\nExcel Interface: *.xlsx/*.xlsm\n\nBelow are the typical actions needed for each of these file types.\n\nProperties\n\nIf the Aspen Properties file is embedded in the simulation file, then no action is required.\n\nIf the Aspen Properties file is using *.appdf/*.aprpdf, then you need to open the corresponding Aspen Plus/Aspen Properties file *.bkp/*.aprbkp, run the calculations, then save as the *.apw (Aspen Plus) or *.aprop document type. This will generate the new *.appdf/*.aprpdf file.\n\nDatabases\n\nNo action is required if already using the *.mdf file format.\n\nIf you're using Access databases, you can use the database converter (see the on-line help in \"What's new in Aspen Utilities Planner\"). Start the program ATUDatabaseConverter.exe from C:\\Program Files\\AspenTech\\Aspen Utilities Planner ReleaseVersion\\bin\\ .\n\nOptimization Custom Constraints\n\nThere are two optimization files:\n1) CustomConstraints.mos\n2) AspenUtils.mos.\n\nFor CustomConstraints.mos, no changes are required.\n\nFor AspenUtils.mos, if you are migrating from V10 to newer versions, we recommend using the AspenUtils.mos file of the corresponding new version, which is in C:\\ProgramData\\AspenTech\\Aspen Utilities Planner ReleaseVersion\\Examples.\n\nIf you have customized the Aspen Utilities file AspenUtils.mos (which is not recommended), you should consult with AspenTech. If you've only changed some max limit (e.g., the maximum number of optimization periods), we recommend you make the same change in the new version of the file.\n\nSimulation file\n\nNo changes required. If you observe different results, no convergence, etc., please prepare the files to send to AspenTech.\n\nTo help with the investigation, a first step is to ensure the \"old\" file is working correctly. Sometimes non-working files have been archived, but it is not expected that a new version would perform very differently. If you still have the old version, you may want to generate reports, or \"kept results\" (using the snapshot manager). This will help in comparing the new results.\n\nAspenTech does test new versions for backward compatibility, so if you face problems when migrating to a new version it is best to contact AspenTech and share files to reproduce the problem.\n\nExcel Interface\n\nAbout the Simulation Links: you may get errors, since in V12 and onwards the system expects to get data from Column B (for sending inputs) and Column H (for getting results), because columns A and G are reserved for the \u2018display in report\u2019 setting. The solution is to add the two columns, or to copy and paste the listed variables to column H, so that the Simulation Links worksheet works correctly.\n\nThe Excel automation interface has been extensively modified, so if you were using the old *.xla VBA methods/functions, you will have to update the code to use the new COM VBA automation. Refer to the documentation for more information on automation methods provided by the new COM add-ins.\n\nThe \"multiperiod\" macro is no longer needed.\n\nVersion 12.0 Specific Comments\nComponents: With V12.0, the default components in AIR streams were modified to include N2, NO2, SO2 and H2O. In earlier versions, NOX and SOX were referred to. 
While files from earlier versions should automatically convert NOX and SOX to NO2 and SO2, respectively, it is a best practice to review modified files after opening them in the new version to ensure all the updates occurred.\nFor example, if a path for NOX is written in the Flowsheet section of V10 (e.g. FeedAir.NOX) and it changed in V12.X (e.g. FeedAir.NO2), it will not be updated automatically. It needs to be updated manually in the Flowsheet section.\nThe Component Lists are different between V10/V11 and V12.X. In\nV10 there are two component lists (Default and Fuel)\nV11 there are two component lists (Default and Fuel)\nV12.X there are more component lists (Default, Fuel, Air, NC).\nFurthermore, the Fuel component list in V12.X contains CL2, H2O and N2, while these are not in V10.\nPay attention to the FeedAir block. It is more complete in V12.X regarding the composition of the air. In previous versions N2 and CO2 are not considered.\n\n Do not make any changes to the Model, Constraints, etc. beyond those noted above when moving the model to the new version, with the following exceptions:\nCheck feed streams to the models. Do the feed compositions sum up to 1? If not, make sure that they do. With V12.0, the feed compositions will be normalized. This is most likely to be an issue with AIR and FUEL streams. You will want to do this and run in your old version prior to the migration.\nSee the comments about the FeedAir block above.\n\n\nIssues Specific to Running via Aspen OnLine\n With the increased use of rigorous thermodynamic models for fuel and air streams, there is now more information in the default Aspen Properties file than in prior versions. This has led to a longer time for opening and processing the information in the Aspen Properties file. Some users have found that the added time is leading to failures in online systems due to Aspen OnLine trying to kick off runs too soon (i.e. before Aspen Utilities Planner is ready). \n\nIt is possible to work around this problem using the following instructions:\n\nThe user can load \u201caspenutilities.appdf\u201d, which is located in AspenTech/Aspen Utilities Planner V12.X. To do this, open the .auf file and go to the Exploring tree/Component List/Configure Properties. The Physical Properties Configuration window will open. From there, choose \u201cUse Properties definition file\u201d and look for aspenutilities.appdf.\n\n New scripts were added to V12.X: UpdateEquipmentStatus, UpdateContract, UpdateCosts, UpdateEquipmentLimits and UpdateAvailability.\n UpdateEquipmentStatus and UpdateContract are called from the PreRecon script.\nUpdateCosts, UpdateEquipmentLimits and UpdateAvailability are called from the PostRecon script.\nCalculateCosts is called from the PostOpt script.\n\nIf you make use of a script with any of these names, please review the scripts as noted above to ensure that you are calling them as expected and that you are getting the desired behavior.\n\nAspenTech is exploring providing, in a future release, a switchable parameter in \u201cGlobals\u201d to control whether or not these added scripts are called automatically.", "keywords": null, "reference": null}, {"id": "000101460", "problem_statement": "Why do I get a \u201cLicense checkout\u201d error while launching eLearning from Aspen Plus?", "solution": "You might be facing a \u201cLicense checkout\u201d error while launching eLearning from Aspen Plus due to token unavailability. 
Please note that eLearning requires an additional 5 tokens apart from the tokens consumed for launching Aspen Plus; therefore you need to open SLM -> Load Server Details -> check if enough tokens are available.\nIf tokens are unavailable, you can still use the eLearning feature by downloading the \u201ceLearning Launcher\u201d.\nTo achieve this, download the compressed zip file attached to this KB article, unzip it, and double-click on the \"ELearningLauncher.exe\" file.\nYou can then browse and find the specific courses which you want to access.", "keywords": null, "reference": null}, {"id": "000101463", "problem_statement": "How to resolve the error \u2013 \u201cOutput Error in CSetDoubleValue for field ShadowPrice in table PrUtilitySale\u201d in Aspen PIMS V14?", "solution": "The error \u2013 \u201cOutput Error in CSetDoubleValue for field ShadowPrice in table PrUtilitySale\u201d \u2013 occurs in Aspen PIMS V14 due to a change in the columns in PrUtilitySale since previous versions of PIMS.\nThis change was made to include the shadow price to be used in reporting. \nTherefore, in order to eliminate this error, the user has to perform a very simple step: delete the existing Results.mdb file from the Model folder and run the desired cases again; this will recreate the Results.mdb file.", "keywords": null, "reference": null}, {"id": "000101458", "problem_statement": "How to resolve the error: \u201c-2146824584 \u2013 Unknown error 0x800A0E78\u201d in PIMS DR?", "solution": "The unexpected error \u201c-2146824584 \u2013 Unknown error 0x800A0E78\u201d occurs while using PIMS DR due to an incorrect Model setting.\nThis can be resolved by going to Model Settings -> General Model Settings -> Output Database -> Options -> unchecking the \u201cUse Classic Output Database Format\u201d option:", "keywords": null, "reference": null}, {"id": "000101462", "problem_statement": "Why are the Adiabatic Gas and Isothermal Gas correlations not available in the steady state Pipe Segment unit operation in Aspen HYSYS?", "solution": "The available pipe correlations have all been developed for predicting two-phase pressure drops. Some methods were developed exclusively for flow in horizontal pipes, others were developed exclusively for flow in vertical pipes, and some can be used for either. Some of the methods define a flow regime map and can apply specific pressure drop correlations according to the type of flow predicted. Some of the methods calculate the expected liquid holdup in two-phase flow while others assume a homogeneous mixture.\n\nHowever, certain methods such as the Adiabatic Gas and Isothermal Gas correlations are only available to Aspen Hydraulics pipe segments, in Data > Solver Variables > Calculation Method. The normal pipe segment doesn\u2019t have them.", "keywords": "Adiabatic, isothermal, pipe correlation, Hydraulics, pipe segment", "reference": null}, {"id": "000101457", "problem_statement": "How to differentiate between EO and SM mode in Aspen HYSYS?", "solution": "In Aspen HYSYS, there are two different ways of solving simulations: Sequential Modular (SM), which solves blocks in sequence, and Equation-Oriented (EO), which solves the entire flowsheet simultaneously.\n\nThe EO solver can be applied in Upstream, Midstream and Downstream processes. The EO solver has the capability to solve a complex simulation very quickly. 
This makes it ideal for troubleshooting process operations in both offline and real-time applications.\n\nThe sequential modular approach can have limitations for different processes depending upon the complexity of the process; some of these limitations are listed here:\nHighly heat integrated processes\nHighly recycled processes\nProcesses with many design specifications\nProcess optimization\nProcess model tuning through data reconciliation and parameter estimation.\nFor these, using the EO approach will result in faster solution times. At the same time, the EO strategy has one model for calibration and optimization, which allows for more flexible variable specifications.\n\nAspen HYSYS\u2019s Equation-Oriented (EO) Modeling Option facilitates fast data reconciliation, model calibration and process optimization.\n\nIn EO mode, the user can switch a calculated variable to an input and vice versa, each as a pair.\nThe results can be slightly different between SM and EO modes. This is because in EO a single solver solves the whole simulation. In SM mode there are multiple solvers, such as in Adjust, Recycle and Column operations. Each solver in SM involves a tolerance; this is the reason the residuals from all the solvers together could be higher in SM mode.\n\nThe EO solver provides faster solutions, enabling demanding computationally intensive applications, such as:\nConverging complex simulations containing many recycle blocks\nCase study and sensitivity analysis\nParameter estimation\nData reconciliation, and\nOptimization", "keywords": "Equation oriented, sequential, model, optimization", "reference": "https://esupport.aspentech.com/S_Article?id=000050569\nhttps://esupport.aspentech.com/S_Article?id=000096782"}, {"id": "000101459", "problem_statement": "What are the steps involved in upgrading Aspen Unified to V14?", "solution": "The following are the steps involved in upgrading Aspen Unified to V14:\n 1. 
Check if the HW and SW pre-requisites for Aspen Unified V14 are available on the machine on which it is going to be upgraded:\nHardware Requirements \u2013 Aspen Unified V14 (minimum recommended)\nComputer & processor: Intel Core-i7 family or faster for standalone deployment; Intel Xeon Gold series, 4 cores or more, for a multi-user environment\nMemory (RAM in GB): 16+ for standalone deployment, 32+ for a multi-user environment\nHard Disk (GB): 150 for standalone deployment, 300 for a multi-user environment\nMonitor: Graphics hardware acceleration requires a DirectX10 graphics card and a 1440 x 900 or higher resolution monitor\nNetwork: 100 Mbps \nSoftware Requirements \u2013 Aspen Unified V14 \nOperating System: Windows 11 Enterprise & Professional (64-bit) / Windows 10 Enterprise & Professional (64-bit) / Windows Server 2022 / Windows Server 2019 / Windows Server 2016 \nMicrosoft Office: Office 365 (Desktop/Click-to-run) (32-bit/64-bit) / Office 2016, 2019, 2021 (32-bit/64-bit) \nDatabase Servers: Microsoft SQL Server 2019 (Enterprise/Standard/Express Edition) / Microsoft SQL Server 2017 (Enterprise/Standard/Express Edition) / Microsoft SQL Server 2016 (Enterprise/Standard/Express Edition) / Microsoft SQL Server 2014 (Enterprise/Standard/Express Edition) / SQL Server 2019 LocalDB \nWeb Browsers: Google Chrome Evergreen Version 75+ / Microsoft Edge \nRuntime Components: .NET Framework 4.8; Microsoft Visual C++ Redistributable 2017; MSXML 6; Microsoft Visual C++ Redistributable 2019 \nVirtualization: Microsoft Hyper-V (Server); VMware ESXi Hypervisor Server 6.x (Server) \n A more detailed pdf file highlighting all the pre-requisites is attached along with the KB article.\n\n2. Media file download: Log in to your support account -> Support -> Downloads -> Download Center:\nDownload the one highlighted in green for the AUP V14 installation:\n\n 3. For the installation steps, a pdf file highlighting the Aspen Unified V14 installation steps is attached.\n4. Once the installation has completed successfully, launch \u201cAspen Unified Configuration Manager\u201d, click on \u201cDatabase Management\u201d and proceed to upgrade all the databases.\n5. You can now launch the Aspen Unified Homepage and proceed to open the desired model.", "keywords": null, "reference": null}, {"id": "000087776", "problem_statement": "To make modifications in the Aspen Properties Database Manager utility you must enter the database login name and password. What are the correct entries for the different versions of the database manager?", "solution": "The default login name and password are set during the installation of the Aspen Properties Enterprise Database (APED). The correct values are shown below for the various versions.\n Version | Login Name | Password\n2006 | aped | aped\n2006.5 | aped065 | Aprop100\nV7.0, V7.1, V7.2, V7.3 | apeduser | Aprop100\nV7.3.2 and higher | apeduser2 | Aproperty88#\n Note that both login name and password are case sensitive. 
The login name and password will not change from V7.3.2 onwards.", "keywords": "Aspen Properties Database Manager\nAPED\nlogin name\npassword", "reference": null}, {"id": "000101433", "problem_statement": "After performing upgrades, migrations, or new installations of the GDOT Web Server, users may encounter an error when attempting to save changes made to a diagram within the Web Viewer:\n \nThis error can occur if the GDOT Online Web Viewer lacks the necessary permissions to modify the database associated with the open diagram.\n\nTo verify this issue, you can look for an error message stating 'attempt to write a readonly database' within the WebFrontEnd-Viewer-log.txt file, which you can find in C:\\ProgramData\\AspenTech\\GDOT Online\\Vxx.x\\logs", "solution": "To grant the necessary permissions to the GDOT Online Web Viewer, follow these steps:\nNavigate to the following directory: C:\\ProgramData\\AspenTech\\GDOT Online\\Vxx.x\\WebFrontEnd\\DataFiles.\nRight-click on the database file for which you want to configure permissions and select Properties.\nIn the Properties window, go to the Security tab and click the Edit... button. Then, click on Add\u2026\nIn the blank field, type \"IIS APPPOOL\\GDOT Online Web Viewer\" and click the \"Check Names\" button. If the window displays the correct user, click OK.\nIn the Permissions window, tick the boxes to Allow both \"Read\" and \"Write\" permissions for the GDOT Online Web Viewer user. Note that the \"Read & execute\" permission is not necessary in this context.\nClick OK on any remaining windows to save the changes.\nReopen the GDOT Web Viewer, and now you should be able to edit a diagram and save your modifications successfully.
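If you prefer to script the permission grant, the built-in Windows icacls utility should achieve the same result (a sketch; the file name is an example and the identity is assumed to match the app pool used in the steps above). Run it from an elevated Command Prompt:\n\nicacls \"C:\\ProgramData\\AspenTech\\GDOT Online\\Vxx.x\\WebFrontEnd\\DataFiles\\MyDiagram.db\" /grant \"IIS APPPOOL\\GDOT Online Web Viewer\":(R,W)\n\nThen re-check the file's Security tab to confirm the Read and Write permissions were applied.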
", "keywords": "WebFrontEnd, .db file, New Diagram, Setting up Rights, IIS", "reference": null}, {"id": "000061644", "problem_statement": "AspenTech strongly recommends stopping Aspen InfoPlus.21 before rebooting the Aspen InfoPlus.21 server. This ensures that a current snapshot and history data are saved to disk, and that all applications are shut down cleanly.\nHowever, users sometimes reboot the Aspen InfoPlus.21 server without first stopping Aspen InfoPlus.21. This article explains how to automatically stop Aspen InfoPlus.21 when shutting down the Aspen InfoPlus.21 server.", "solution": "Microsoft allows you to create a local computer policy that activates a batch procedure before the server shuts down. For more information, please refer to Microsoft TechNet article https://technet.microsoft.com/en-us/magazine/dd630947.\nAttached to this knowledge base article is a file named StopInfoPlus21.txt. Download this file to\nC:\\Windows\\System32\\GroupPolicy\\Machine\\Scripts\\Shutdown\nThe first line of the file is:\ncd /d C:\\Program Files\\AspenTech\\InfoPlus.21\\db21\\code\nUpdate this line, if necessary, to point to your Aspen InfoPlus.21 code folder.\n Next, rename StopInfoPlus21.txt to StopInfoPlus21.bat.\n After renaming the file, you should test the batch procedure to verify it works. Note: Activating StopInfoPlus21.bat will stop Aspen InfoPlus.21.\n Use the Aspen InfoPlus.21 Manager to confirm Aspen InfoPlus.21 stopped.\nAfter confirming that the batch procedure StopInfoPlus21.bat stops Aspen InfoPlus.21, you can cause the Windows operating system to activate the shutdown procedure by performing the following steps.\n1. Check if you have database security configured on your Aspen InfoPlus.21 server by opening the Aspen InfoPlus.21 Administrator, right-clicking on the database name, and selecting Properties. Then click the Permission tab.\n If there are no database roles defined, proceed to step 2.\nIf there are database roles defined, identify the one having Admin privilege. In this example, the role having Admin privilege is IP21Administrator. You can cancel the Permission tab after identifying the Aspen InfoPlus.21 Administrator role.\nNext, open the AFW Security Manager. Click on Roles, right-click on the Aspen InfoPlus.21 Administrator role, and select Properties.\n Select the Members tab and then click on the Advanced button.\n Enter NT AUTHORITY\\SYSTEM, select the option User, and then click on Add.\n AFW Security displays a warning message saying the role member is not verified and asks if you want to continue. Press Yes.\n This adds the user NT AUTHORITY\\SYSTEM as a member of the Aspen InfoPlus.21 Administrator's role.\n Press Apply and exit the AFW Security Manager.\n NOTE: At this point, you must wait at least 10 minutes before continuing to allow Aspen Local Security and Aspen InfoPlus.21 time to update security cache files.\n 2. Open C:\\Windows\\System32\\gpedit.msc as an administrator.\n3. Navigate to Computer Configuration | Windows Settings | Scripts (Startup/Shutdown).\n 4. Click on Shutdown, choose the Scripts tab on the next screen, and press the Add button.\n 5. Press Browse and select the file StopInfoPlus21.bat\n 6. Using the Aspen InfoPlus.21 Manager, stop Aspen InfoPlus.21 if it is running and uncheck the box STARTUP @ BOOT\n7. Reboot the Aspen InfoPlus.21 server without Aspen InfoPlus.21 running.\n8. After the server reboots, check the file StopInfoPlus21.log located in the Aspen InfoPlus.21 code folder. You should see lines similar to:\nStopping InfoPlus.21 due to server reboot at Wed 10/26/2016 11:20:13.91\n\nSuccess is returned by task service\n\nDescription received: InfoPlus.21 stopped successfully.\n\nInfoPlus.21 shutdown due to reboot completed at Wed 10/26/2016 11:20:20.11\n If you see lines similar to:\nStopping InfoPlus.21 due to server reboot at Wed 10/26/2016 11:58:28.05\n\nSuccess is returned by task service\n\nDescription received: The users access permission does not allow the operation\n\nInfoPlus.21 shutdown due to reboot completed at Wed 10/26/2016 11:58:28.16\n then you did not correctly add NT AUTHORITY\\SYSTEM as a member of the Aspen InfoPlus.21 Administrator's role or you did not wait long enough for Aspen Local Security and Aspen InfoPlus.21 to update their security cache files.\n 9. Start Aspen InfoPlus.21 and reboot the Aspen InfoPlus.21 server again.\n10. After the server reboots, check the file error.log for one of your repositories. You should see lines indicating the repository stopped normally when the system rebooted.\n26 Oct 16 11:59:35 - ARCHIVE: archiver is requested to stop by Process id=4460\n\n26 Oct 16 11:59:35 - *C:\\Program Files\\AspenTech\\InfoPlus.21\\c21\\h21\\bin\\h21shutdown.exe*\n\n26 Oct 16 11:59:35 - ARCHIVE: normal shutdown complete\n\n26 Oct 16 11:59:35 - ARCHIVE: program shutdown\n 11. 
Use the Aspen InfoPlus.21 Manager to check the box STARTUP @ BOOT (if desired) and restart Aspen InfoPlus.21.\nNote: The best practice is to stop Aspen InfoPlus.21 using the Aspen InfoPlus.21 Manager before stopping your server.\nNote: This procedure assumes that it takes less than five minutes to stop your Aspen InfoPlus.21 database. After five minutes, the procedure times out and allows Windows to continue shutting down. Any historical information not saved to disk will be lost, and your database could be damaged if Aspen InfoPlus.21 cannot save the in-memory database to a snapshot.", "keywords": "reboot\nrestart\nshut down\ngraceful\nshutdown\nsnapshot\nresurrect\nresurrected", "reference": null}, {"id": "000083913", "problem_statement": "How to use external Fortran subroutines on a PC without a Fortran compiler?", "solution": "You can use the ASPLINK command to create your own shared libraries (.DLL files) containing the object files needed for user models. By creating your own shared libraries, you can avoid the need for Aspen Plus to link a run-specific user model shared library for each run.\n\nTo create the shared library (.DLL file) for a Fortran user model in an Aspen Plus flowsheet:\nFrom the Aspen Plus Simulation Window, type \"ASPCOMP subroutinename[.f]\" in the working directory for each Fortran subroutine. The compiled object (.obj) file must be located in the working directory, or in a directory specified in a Dynamic Link Options (*.opt) file, in order to be linked in Aspen Plus. However, linking also requires a Fortran compiler.\n A group of object files can be dynamically linked into a .DLL file by moving them to another directory and typing ASPLINK at the command prompt in that directory. By default, ASPLINK includes all of the object module files present in the run directory. The .DLL file can then be moved back to the working directory in lieu of the concerned object files and used with a Dynamic Link Options (DLOPT) file (*.opt).\n To use the shared library (.DLL file) for a Fortran user model in an Aspen Plus flowsheet:\nIn a Dynamic Link Options (DLOPT) file (*.opt), specify the location of the .DLL file. If the .DLL file is also located in the working directory, the complete path is not needed. Aspen Plus will locate object (.obj) files in the working directory; however, shared library (.DLL) files will only be located using the Dynamic Link Options (*.opt) file. For more information on Dynamic Link Options (DLOPT) files see Solution 102368.\n In the Aspen Plus simulation, select Settings from the Run menu, and type the name of the Dynamic Link Options (*.opt) file in the Linker options field. If the DLOPT file is located in the working directory, the complete path is not needed. The run settings are saved in the simulation file.\nThe backup file accompanied by a shared library (.DLL file) and Dynamic Link Options (*.opt) file will run on a computer without a compiler.\nFor more information see the Aspen Plus User Models
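As a minimal illustration of the workflow (the file names are examples; the commands are typed at the Aspen Plus simulation command prompt as described above):\n\nASPCOMP usrkin.f   (compiles usrkin.f to usrkin.obj; this step still requires a Fortran compiler)\nASPLINK            (links all .obj files in the current directory into a shared library)\n\nA matching DLOPT file, say usrkin.opt, would then contain one line with the location of the resulting .DLL, for example:\nC:\\MyModels\\usrkin.dll\n\nOnly the backup file, the .DLL, and the .opt file need to be moved to the machine without a compiler.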
", "keywords": "DLL, FORTRAN, Subroutine, DLOPT, ASPLINK", "reference": "Manual, Chapter 1"}, {"id": "000101453", "problem_statement": "Aspen Plus has CO2 Capture examples for Piperazine (PZ) and 2-amino-2-methyl-1-propanol (AMP) individually. \nIs there an example that includes both of these amine solvents?", "solution": "A process model is developed for CO2 capture using a second-generation amine solvent, the aqueous blend of 2-amino-2-methyl-1-propanol (AMP) and piperazine (PZ). This blend is being actively investigated and considered as a good candidate to support commercial deployment and drive down the costs of post-combustion carbon capture. The thermodynamic representation of the blend combines our previous work for the amine-water-CO2 systems of AMP and PZ [1, 2]; additional interaction parameters of the AMP-PZ-water-CO2 system are fitted to the experimental data of vapor-liquid equilibrium [3, 4] and absorption heat [3]. The absorber and stripper are modeled with rate-based columns and validated with pilot plant data [5]. Finally, an example for industrial scale CO2 capture from natural gas-based flue gas is developed.\n\nOur previous study has shown the superiority of the rate-based modeling approach over the traditional equilibrium-stage method. The rate-based model can provide excellent predictive capabilities and is very useful for the design and scale-up of the CO2 absorption process. In this work, the absorber and stripper are modeled with rate-based columns and validated with the pilot plant data [6]; the predicted key output variables of CO2 removal percent and specific reboiler duty are comparable to the data. This example uses a MAKEUP block (new in V14) and a Charge Balance (CHARGEBAL) block to help converge the simulation.\n\nIn conclusion, the aqueous blend of AMP and PZ is a good candidate for supporting commercial deployment and driving down the costs of post-combustion carbon capture. A thermodynamic model is developed for CO2 capture using the mixed solvents of AMP and PZ, then combined with the kinetic reactions to model the absorber and stripper with rate-based columns and validated with pilot plant data; finally, an example of industrial scale CO2 capture using AMP and PZ is developed. This example is useful as a starting point for more sophisticated model development.\n\nSee the attached documentation for more information. The Aspen Plus backup (.bkp) file can be opened in V14 and higher.", "keywords": null, "reference": "Aspen Plus documentation, Rate-based model of the CO2 capture process by AMP using Aspen Plus. Aspen Technology Inc, Cambridge, MA. 2022\nAspen Plus documentation, Rate-based model of the CO2 capture process by PZ using Aspen Plus. Aspen Technology Inc, Cambridge, MA. 2022\nHartono, A., Ahmad, R., Svendsen, H. F., Knuutila, H. K. New solubility and heat of absorption data for CO2 in blends of 2-amino-2-methyl-1-propanol (AMP) and piperazine (PZ) and a new eNRTL model representation, Fluid Phase Equilib., 2021, 550:113235\nBruder, P., Grimstvedt, A., Mejdell, T., Svendsen, H. F. CO2 capture into aqueous solutions of piperazine activated 2-amino-2-methyl-1-propanol, Chem. Eng. Sci., 2011, 66, 6193\u20136198\nMorgana, J. C., Campbell, M., Putta, K. R., Shah, M. I., Matuszewski, M., Omell, B. Development of process model of CESAR1 solvent system and validation with large pilot data, 16th International Conference on Greenhouse Gas Control Technologies, GHGT-16, 2022, Lyon, France\nHartono, A., Ahmad, R., Usman, M., Asif, N., Svendsen, H. F. Solubility of CO2 in 0.1M, 1M and 3M of 2-amino-2-methyl-1-propanol (AMP) from 313 to 393K and model representation using the eNRTL framework, Fluid Phase Equilib., 2020, 511:112485\n\nVSTS 448880"}, {"id": "000101451", "problem_statement": "Which method is used to calculate the flash point temperature in Aspen HYSYS?", "solution": "The standard method that Aspen HYSYS uses to calculate the flash point in the Cold Properties Utility is API procedure 2B7.1. 
You can find more information in that literature.\nThe same method is used in stream properties (from the correlation manager in RefSYS) to estimate the flash point. However, the results of these two scenarios are different.\n\nBy default, HYSYS follows API procedure 2B7.1 in calculating the flash point of petroleum fractions, as mentioned in the HYSYS Operations Manual in the Utilities chapter, under the Cold Property utility. The equation used to estimate the flash point is as follows:\n1/T(FP) = -0.014568 + 2.84947 / (T1) + 1.903e-3 * ln(T1)\nWhere T(FP) is the flash point (Pensky-Martens Closed Cup, ASTM D93), in degrees Rankine, and T1 is the ASTM D86 10% temperature in degrees Rankine.\n\nThe accepted range is 150 F < ASTM D86 10% point < 1150 F (65 C < ASTM D86 10% point < 621 C). If the ASTM D86 10% point is outside this range of applicability for the API 2B7.1 method (you can see the ASTM D86 10% by adding a BP Curves Utility to the stream), HYSYS will not calculate a Flash Point and will show it as <empty>. The accuracy of using this estimation method to represent the flash point of petroleum fractions is as claimed by API. Note that the temperature range for the flash point calculation refers not to the actual process temperature but to the ASTM D86 10% point.
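The correlation is easy to evaluate directly; here is a minimal sketch (Python, with temperatures converted to degrees Rankine; the input value is just an example):\n\nimport math\n\ndef flash_point_f(d86_10pct_f):\n    # API 2B7.1: flash point (ASTM D93) from the ASTM D86 10% temperature, both in deg F\n    if not (150.0 < d86_10pct_f < 1150.0):\n        raise ValueError('ASTM D86 10% point outside the 150-1150 F applicability range')\n    t1 = d86_10pct_f + 459.67  # convert to degrees Rankine\n    inv_tfp = -0.014568 + 2.84947 / t1 + 1.903e-3 * math.log(t1)\n    return 1.0 / inv_tfp - 459.67  # back to deg F\n\nprint(round(flash_point_f(300.0), 1))  # about 94 F for a D86 10% point of 300 F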
Different flash point methods are applicable for different types of streams. A wrong method can lead to a very wrong result.\nAccording to ASTM, the TAG method covers the determination of the flash point of liquids with a viscosity below 5.5 cSt at 40 C (104 F), or below 9.5 cSt at 25 C (77 F), and a flash point below 93 C (200 F).\nThe Pensky-Martens method is used for liquids with a viscosity of 5.5 cSt or above at 40 C (104 F), or a viscosity of 9.5 cSt or above at 25 C (77 F), or a liquid that may contain suspended solids or have a tendency to form a surface film under test conditions. It is basically for heavier fluids than the TAG method. The flash point range is 40 C (104 F) - 360 C (572 F).\n\nIf you select Petroleum | Flash Point from the Configuration list, then Intercept, Coeff D86 IBP, Coeff D86 5%, and D86 IBP volume% appear in the Parameters table. However, these four parameters are only used for the flash point calculation when you select Linear D86 Based from the Method drop-down list.", "keywords": "Aspen HYSYS, Flash Point, Temperature", "reference": null}, {"id": "000101450", "problem_statement": "How to adjust the enthalpy reference in Aspen HYSYS?", "solution": "The enthalpy reference state used in HYSYS is the heat of formation of an ideal gas at 25 C. The absolute enthalpy values generated by HYSYS can be converted to a different enthalpy basis.\n\nFor example, to convert HYSYS enthalpy values to an enthalpy basis of saturated liquid at 0 C, the following three steps can be used:\n1. Obtain a set of enthalpy values (Hi0) of all pure components at the desired enthalpy basis condition (in this case saturated liquid at 0 C). These values can be obtained by performing a flash calculation in HYSYS for individual pure component streams at a specified vapour fraction of 0 at the temperature of 0 C.\n2. Obtain an enthalpy offset for a stream whenever the enthalpy conversion is required. With the known stream composition, the enthalpy offset can be calculated as the summation of x(i)*Hi0, assuming no chemical reaction(s) are involved.\n3. Subtract this offset from the stream enthalpy value calculated by HYSYS at the stream condition. The resulting enthalpy is the value corresponding to the new enthalpy basis (in this case saturated liquid at 0 C).\n\nPlease note that this conversion is only applicable when you want to compare HYSYS enthalpy values with other enthalpy data represented using a different reference state. You cannot force HYSYS to change its default enthalpy basis in its enthalpy calculation.\n\nHYSYS uses an enthalpy basis of the heat of formation of the ideal gas at 25 C and 1 atm. This reference is hard-coded and you cannot force a change of this reference. That said, you can convert a calculated value of enthalpy into another basis by studying the differences and adjusting for them. These calculations are a conversion only and not a change in the default basis.\nIdeal gas enthalpy for a stream is calculated as H = polynomial + offset + formation
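A minimal sketch of the three-step conversion (Python; the component names and numbers are placeholders, not HYSYS data - the Hi0 values would come from the 0 C, vapour fraction 0 flash described above):\n\nx = {'C3': 0.30, 'iC4': 0.25, 'nC4': 0.45}  # stream mole fractions (example values)\nh_ref = {'C3': -119000.0, 'iC4': -153000.0, 'nC4': -147000.0}  # Hi0 from the 0 C saturated-liquid flash, kJ/kgmole (placeholders)\n\noffset = sum(x[c] * h_ref[c] for c in x)  # step 2: summation of x(i)*Hi0\nh_hysys = -131500.0  # stream enthalpy reported by HYSYS, kJ/kgmole (placeholder)\nh_new_basis = h_hysys - offset  # step 3: enthalpy on the saturated-liquid-at-0-C basis\nprint(h_new_basis)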
", "keywords": "Aspen HYSYS, Enthalpy, Reference", "reference": null}, {"id": "000101449", "problem_statement": "How to find the \u201cIdeal\u201d equation of state in Aspen HYSYS?", "solution": "The Ideal equation of state is not a fluid package available in the Aspen HYSYS library.\nYou can find the Ideal property package by selecting Aspen Properties as the component list and selecting the Aspen Properties property package, with the filter set to COMMON.\n\n\n\nNote that you will need to select your component list from the Aspen Properties library as well in order to keep both the component list library and the fluid package library consistent.", "keywords": "Aspen HYSYS, Fluid Package, Ideal", "reference": null}, {"id": "000101448", "problem_statement": "Which thermodynamic package is best suited to predict NOx formation or ammonia combustion in Aspen HYSYS?", "solution": "For the given conditions of pressure range (100 kPa ~ 20000 kPa) and components (H2, N2, NH3), I would recommend using the Peng-Robinson (PR) fluid package in HYSYS. The Peng-Robinson equation of state is a widely used model that can accurately represent the behavior of a wide range of hydrocarbon and non-hydrocarbon mixtures, including gases like H2, N2, and NH3. It is particularly suitable for moderate to high-pressure systems and can handle various types of phase equilibria.\n\nIf there is a liquid phase in the system, an alternative fluid package that can be used for the same pressure and component range is the NRTL (Non-Random Two-Liquid) package. The NRTL package is an advanced activity coefficient model that can more accurately represent the behavior of mixtures in the liquid phase. It is particularly useful when dealing with non-ideal mixtures and can handle a wide range of liquid states and phase equilibria. By using the NRTL fluid package, you can obtain more accurate and reliable results for systems with a liquid phase.", "keywords": "Aspen HYSYS, NOx, Ammonia", "reference": null}, {"id": "000101447", "problem_statement": "How to resolve a crashing issue when making any changes to a simulation case if the property package is selected as \"Acid Gas\" in Aspen HYSYS?", "solution": "In order to get rid of the crashing issue and its related error message \"Failure Generating Aspen Properties Problem Definition\":\n\n\nYou need to install Aspen Plus in order to use the Acid Gas fluid package in HYSYS. The error comes from issues in the installation. The Acid Gas fluid package uses the Aspen Properties databank. If the Aspen Properties databank is not properly installed, or not installed at all, the Acid Gas fluid package cannot retrieve the necessary parameters to generate the physical properties.", "keywords": "Aspen HYSYS, Acid Gas, Crash", "reference": null}, {"id": "000101446", "problem_statement": "How to get rid of the following error message \"Error while executing Calculator block. Error starting or running Excel\" in Aspen Plus?", "solution": "In order to get rid of the following error message:\n\n\nYou will need to perform a quick repair on Office. Go to Control Panel | Apps | Microsoft Apps for Enterprise | Modify | Quick Repair.", "keywords": "Aspen Plus, Calculator Block, Excel", "reference": null}, {"id": "000101440", "problem_statement": "This article describes where the automatic snapshots of deployed DMC3 applications are stored on the server.", "solution": "The RTE service has the capability to automatically take and store snapshots of the deployed DMC3 applications. Typically, the snapshot intervals and history retention can be managed from the Configure Online Application.\n\n\n\nNevertheless, due to the usage of the RTE service, it is recommended not to change the default parameters.\n\nThe retention of these snapshots can be useful to recover projects or controllers that may be corrupted or disappear from the server, 
or projects that no longer exist but for which a backup of the controller is required.\n\nThese snapshots are saved as apc.app applications which can be imported in DMC3 Builder, and they are normally stored on the following path of the Online Server.\n\nC:\\ProgramData\\AspenTech\\RTE\\V14\\Clouds\\Online\\sys\n\nA folder for every deployed application will be created in this location, and inside you will find the retained snapshots.\n\n\n\n\n\nThis file can be copied to another location and then imported into DMC3 Builder.", "keywords": "DMC3 Builder, Snapshots, Online Server", "reference": null}, {"id": "000101439", "problem_statement": "This article describes the different configuration and connection options that can be used in the interaction between DMC3 Builder and the APC Online Server.", "solution": "Typically, the installation of an APC Online Server will consist of the Online Components, which are basically all the services and files that allow the DMC3/DMCplus controllers to run and perform all controller calculations (typically the ACO components and the RTE components), and the bundle of Desktop applications (DMC3 Builder, DMCplus Model, DMCplus Config, etc.).\n\nNevertheless, in the case of DMC3 Builder there is not really an automatic process that allows the connection between the Online Server component (RTE) and DMC3 Builder. During configuration it is required to open DMC3 Builder, go to Online, and add a New Server. The attached PDF file shows a step-by-step guide on the different configurations that can be done to connect DMC3 Builder to an Online Server.", "keywords": "DMC3 Builder, Online Server, TCP/IP", "reference": null}, {"id": "000098885", "problem_statement": "It is possible to change the History Plot option on the Production Control Web Server by going to Preferences and checking the Web.21 HPT box.\n \nHowever, this only applies to the user that is currently using the PCWS; the procedure below will make Web.21 HPT the default option for all users without changing it one by one.", "solution": "NOTE THE FOLLOWING:\nThis procedure is only for V12.1 and later versions.\nIn V14 or higher the default browser is set to Web.21\n\nTo solve this problem, first open a file explorer and navigate to folder\nC:\\inetpub\\wwwroot\\AspenTech\\ACOView.\n\nIn that folder, modify the following two files, using a text editor (Notepad, Notepad++ or equivalent):\n\n1. In file UserPerf.asp, modify line 269 as shown in the image below and save it.\n \n2. In file LIBCODE.asp, modify line 23 as shown in the image below and save it.\n\n\n\nOnce the two files have been changed, clear the browser cookies for the /atcontrol website.\nAfter this, the preference should always default to Web.21 HPT instead of A1PE.", "keywords": "Production Control Web Server, Web.21 HPT", "reference": null}, {"id": "000101432", "problem_statement": "In V14.0.2, Aspen Process Pulse introduced a new feature that would allow for the disabling of configuration start when a Kaiser Raman Analyser was in an error or warning state. 
Given the number of reasons that can cause these states, this may have a negative impact on any customer using a Kaiser instruments data source in Aspen Process Pulse.", "solution": "If you are running into this issue and need instructions on a solution, please contact AspenTech support at esupport@aspentech.com \n Fixed in Version: V14.2", "keywords": "Process Pulse \nKaiser Raman Analyser \nInstrument Status \nError State", "reference": null}, {"id": "000074974", "problem_statement": "An event.dat file is usually created when the history archiver process, h21archive.exe, is either stopped or dies due to history system problems. However, when the Aspen InfoPlus.21 (IP.21) database is restarted and the h21archive.exe process resumes its activity, the event.dat file should unbuffer and the data it contains should get inserted successfully into the appropriate IP.21 file sets.\nThere have been cases, however, where some anxious operators, not sure whether the event.dat file was unbuffering, have deleted or renamed it and then restarted the system. (See solution 000079281 (formerly 103995) to find out how to conclusively determine if the event.dat file is unbuffering.)\nBased on the above scenario, the following question might arise: \"Is it possible to restore the event.dat file if it has been renamed or deleted?\"", "solution": "NOTE: This solution is based on the assumption that you have Aspen Cim-IO Store and Forward configured and running on your system and that the Windows Recycle Bin (holding the event.dat) has not been emptied. Store and Forward is not mandatory but can help prevent data loss while IP.21 is down.\nThe answer to the question posed above is: \"YES, it is possible to restore the event.dat file\".\nBelow is a step-by-step procedure describing how to do it:\n1. Stop the InfoPlus.21 database; a Store file will be created on the Cim-IO Interface server.\n2. Restore the deleted event.dat file from the Windows Recycle Bin to its default location, or rename it back to its original name if the name has been changed.\n3. Start IP.21; it will detect an existing event.dat file and reinsert data into the IP.21 file sets (assuming that the Past Time parameter for the specific repository (screenshot below) has not been exceeded).\n\n4. At this point a new event.dat file may be briefly created as the h21archive.exe process is flooded by data forwarded from the CIMIO Store file; as soon as all of the Store file data has been forwarded, the new event.dat file should completely unbuffer and disappear.", "keywords": "107607-2\n107607", "reference": null}, {"id": "000101412", "problem_statement": "Is there another way to see the Control Objectives of a variable apart from checking them on the PCWS Operations tab?", "solution": "We can use an SQL query to retrieve the \u201cAWTagMessage\u201d data, which contains the Control Objective information from a collecting controller in Aspen Watch.\n\nHere\u2019s a sample query you can modify according to your own controller and variable names:\nWRITE '';\nWRITE 'Time: '||CURRENT_TIMESTAMP;\nWRITE '';\n\nselect TagMessage from AWTagMessage('','DEMOCOL12_DMC3','REBBTU','','','') order by LineNumber asc\nReplace DEMOCOL12_DMC3 with the name of your controller application and REBBTU with the MV/CV you want to check the objectives of. 
Or leave the variable name field blank to see a general description of the control objectives for the whole controller.\n\nThe last two parameters are reserved for the Start and End times to review the control objectives messages. The format is \u2018DD-MMM-YYYY HH:MM:SS\u2019. If you leave these fields blank, the system will look for the most recent message in the last 24 hours.\n\n\n\nFor a friendlier view of the result, run the query in Aspen SQLplus, then go to File -> Save Output and save the file with an .html extension, which you can open using any browser application.\n\n \n\nSample query result in .html view:\n\n\n\nHere\u2019s a complete description of the AWTagMessage procedure:\n Procedure: AWTagMessage\n\nPurpose: Generate an Aspen Watch controller objectives message\n\nArguments:\n\nArg_FolderDefString - Name of currently selected AW Folder (obsolete argument).\n\nArg_ControllerName - DMCplus controller name\n\nArg_VariableName - Independent or dependent variable name (optional)\n\nArg_ParameterName - Parameter name (optional)\n\nArg_StartTimeString - Start time for system review period (optional - for fetching list of messages defaults to 24 hours back from end time).\n\nArg_EndTimeString - End time for system review period (optional - set to blank for current time).\n\nSample Usage:\n\nAWTagMessage('','COL5X3','','','10-FEB-2010 00:00:00','12-FEB-2010 00:00:00');\n\nAWTagMessage('','COL5X3','FIC-2001','','','12-FEB-2010 00:00:00');\n\nAWTagMessage('','COL5X3','FIC-2001','','','');", "keywords": "APC, DMC3, DMCplus, Web Server, Controller, Tag", "reference": null}, {"id": "000101354", "problem_statement": "This video demonstrates how to enable debug logging in Aspen Process Data Administrator.", "solution": "", "keywords": null, "reference": null}, {"id": "000099493", "problem_statement": "On Jun 8, 2021, Microsoft released KB5004442 in response to a vulnerability found in DCOM communications (CVE-2021-26414). In the KB, Microsoft announced a plan to harden DCOM communications with several Windows security updates. When these Microsoft security patches are applied, DCOM settings will be updated, which may cause some of the AspenTech applications to no longer exchange data. The increased security will be optional at first, and then will be mandatory with the final Windows Security update (scheduled for March 14, 2023). The affected AspenTech applications will need to be updated to avoid problems when the hardened DCOM communication changes are applied. This article provides a list of AspenTech applications that need to be updated before the Microsoft DCOM hardening changes are applied.\n \nMicrosoft Update Schedule\nThe following is a summary of the DCOM hardening update schedule according to Microsoft KB5004442:\n Update Release | Behavior Change\nJune 8, 2021 | Hardening changes disabled by default but with the ability to enable them using a registry key.\n June 14, 2022 | Hardening changes enabled by default but with the ability to disable them using a registry key.\n March 14, 2023 | Hardening changes enabled by default with no ability to disable them. 
By this point, you must resolve any compatibility issues with the hardening changes and applications in your environment.\n Versions of Microsoft Windows Affected\nAs stated in the Microsoft KB article, the DCOM hardening is only available for the following subset of Windows versions:\nWindows version | Available on or after these dates\nWindows Server 2022 | September 27, 2021 (KB5005619)\nWindows 10, version 2004, Windows 10, version 20H2, Windows 10, version 21H1 | September 1, 2021 (KB5005101)\nWindows 10, version 1909 | August 26, 2021 (KB5005103)\nWindows Server 2019, Windows 10, version 1809 | August 26, 2021 (KB5005102)\nWindows Server 2016, Windows 10, version 1607 | September 14, 2021 (KB5005573)\nWindows Server 2012 R2 and Windows 8.1 | October 12, 2021 (KB5006714)\n\nIf you are using an earlier version of Windows, then the hardening will not be applied by Windows updates, and your DCOM application will not be affected.\n\n AspenTech Products Using DCOM Affected\nBelow is a list of products currently known to be affected by the Microsoft Security update:\nProduct | Versions | Planned Patch Release Date\nAspen CIM-IO Server | V10.1, V11.0.1, V12.0, V12.2 | May 16, 2022\nAspen Audit & Compliance Manager Administrator (mmc snap-in) | V10.0.1, V11.0.1, V12, V12.2 | May 16, 2022\nAspen Production Record Manager (APRM) | V10.0.1, V11.0.2, V12, V12.2 | May 16, 2022\nAspen Calc | V10.0.1, V11.0.2, V12, V12.2 | May 16, 2022\nAspen Process Data | V10.0.1, V11.0.2, V12, V12.2 | May 16, 2022\nAspen Process Graphic Studio | V10.0.1, V11.0.2, V12, V12.2 | May 16, 2022\nAspen Data Source Administrator (multiple MES products affected) | V10, V11, V12 and V12.2 | May 16, 2022\nAspen Tank and Operations Manager (ATOMS) | V10.0.1, V11.0.1, V12.0.1 | May 16, 2022\nAspen DMC3 | V10, V11, V12, and V12.1 | May 16, 2022\nAspen DMC+ | V10, V11, V12, and V12.1 | May 16, 2022\nAspen MTell (only affects PHD Adapter) | V11, V12 | May 16, 2022\nAspen Basic Engineering | V11, V12 | March 31, 2022", "solution": "If you are using any of the products listed above that are affected by the Microsoft change, then you should do the following:\nDo not manually enable the hardening changes until the corresponding AspenTech patch(es) are applied.\nIf the hardening changes have been manually enabled on your system, then disable the changes (via the registry, as specified in the Microsoft KB and sketched below) until the corresponding AspenTech patches are applied.
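For reference, the registry value that controls the hardening per Microsoft KB5004442 (stated here as an assumption based on the public KB text; verify it against the KB for your Windows build) can be set from an elevated Command Prompt:\n\nreg add \"HKLM\\SOFTWARE\\Microsoft\\Ole\\AppCompat\" /v RequireIntegrityActivationAuthenticationLevel /t REG_DWORD /d 0 /f\n\nA value of 0 disables the hardening and 1 enables it. Remove the override (or set it to 1) once the AspenTech patches are applied; it has no effect after the March 14, 2023 update.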
\nIf the hardening changes have been manually enabled on your system, then disable the changes (via registry, as specified in the Microsoft KB) until the corresponding AspenTech patches are applied\nApply AspenTech patches (listed in the table below) before the \u201cJune 14th,2022 Microsoft Security update\u201d\nList of Product Patches available:\nSr.no\nName of the Product\nVersion\nPatch link\nRelease Date\n1\nAspen Basic Engineering ( ABE )\nV11\nV11 Patch\nMarch 31, 2022\n2\nAspen Basic Engineering (ABE)\nV12\nV12 Patch\nMarch 31, 2022\n3\nAspen Tank and Operations Manager (AtOMS)\nV10\nV10 Patch\nMay 16, 2022\n4\nAspen Tank and Operations Manager (AtOMS)\nV11\nV11 Patch\nMay 16, 2022\n5\nAspen Tank and Operations Manager (AtOMS)\nV12\nV12 Patch\nMay 16, 2022\n6\nAspen DMC3\nV10\nV10 Patch\nMay 16, 2022\n7\nAspen DMC3\nV11\nV11 Patch\nMay 16, 2022\n8\nAspen DMC3\nV12\nV12 Patch\nMay 16, 2022\n9\nAspen DMC3\nV12.1\nV12.1 Patch\nMay 16, 2022\n10\nAspen Audit Compliance Manager\nV10\nV10 Patch\nMay 16, 2022\n11\nAspen Audit Compliance Manager\nV11\nV11 Patch\nMay 16, 2022\n12\nAspen Audit Compliance Manager\nV12\nV12 Patch\nMay 16, 2022\n13\nAspen Audit Compliance Manager\nV12.2\nV12.2 patch\nMay 16, 2022\n14\nAspen InfoPlus.21 Browser Graphic Studio\nV10.1\nV10.1 Patch\nMay 16, 2022\n15\nAspen InfoPlus.21 Browser Graphic Studio\nV11\nV11 Patch\nMay 16, 2022\n16\nAspen InfoPlus.21 Browser Graphic Studio\nV12\nV12 Patch\nMay 16, 2022\n17\nAspen InfoPlus.21 Browser Graphic Studio \nV12.2\nV12.2 Patch\nMay 16, 2022\n18\nAspen InfoPlus.21\nV10.1\nV10.1 Patch\nMay 16, 2022\n19\nAspen InfoPlus.21\nV11\nV11 Patch\nMay 16, 2022\n20\nAspen InfoPlus.21\nV12\nV12 Patch\nMay 16, 2022\n21\nAspen Process Data\nV10.1\nV10.1 Patch\nMay 16, 2022\n22\nAspen Process Data\nV11\nV11 Patch\nMay 16, 2022\n23\nAspen Process Data\nV12\nV12 Patch\nMay 16, 2022\n24\nAspen Process Data\nV12.2\nV12.2 Patch\nMay 16, 2022\n25\nAspen Production Record Manager\nV10.1\nV10.0 Patch\nMay 16, 2022\n26\nAspen Production Record Manager\nV11\nV11 Patch\nMay 16, 2022\n27\nAspen Production Record Manager\nV12\nV12 Patch\nMay 16, 2022\n28\nAspen Production Record Manage\nV12.2\nV12.2 Patch\nMay 16, 2022\n29\nAspen Calc\nV10.1\nV10.1 Patch\nMay 16, 2022\n30\nAspen Calc\nV11\nV11 Patch\nMay 16, 2022\n31\nAspen Calc\nV12\nV12 Patch\nMay 16, 2022\n32\nAspen Calc\nV12.2\nV12.2 Patch\nMay 16, 2022\n33\nAspen Cim-IO Core\nV10.1\nV10.1 Patch\nMay 16, 2022\n34\nAspen Cim-IO Core\nV11\nV11 Patch\nMay 16, 2022\n35\nAspen Cim-IO Core\nV12\nV12 Patch\nMay 16, 2022\n36\nAspen Cim-IO Core\nV12.2\nV12.2.1 CP1 Patch\nMay 16, 2022\n This article will be updated when additional information is available. Please write to esupport@aspentech.com if any further details required. \nRelated Articles :\nImpact to Aspen GDOT of DCOM security changes", "keywords": "Microsoft KB5004442\nDCOM communications\nMicrosoft Security Update, 2022\nAspen Infolplus21 Server\nAspen CIMIO Server\nAtOMS\nAspen DMC+\nAspen DMC3\nAACM\nAspen MTell\nAspen Basic Engineering", "reference": null}, {"id": "000101411", "problem_statement": "Is it possible to access column hydraulic results in Aspen Online (AOL)?", "solution": "Most numerical fields visible in the Aspen Plus or Aspen Hysys User Interface can be accessed in AOL. Aspen Simulation Workbook (ASW) should be used to test the connections. \nIf you can link fields to ASW you can link them in AOL.\n\nFlooding factor and other net metrics are exposed; but, the vapor/liquid boundary coordinates are not exposed. 
The boundary needs to be inferred from the results that are reported, which requires adding formula tags to do some calculations.\n\nExtended hydraulic results are available in the profiles, but they are not turned on by default. The option needs to be checked on the RadFrac Analysis | Report | Property Options form.\n\n\n\nThere is no way to host visualization forms directly from Aspen Plus or HYSYS online \u2013 these need to be rebuilt from data using native features of the historian or other visualization tool used in the field.\n\nAttached is a summary of details of how to access some of the hydraulic results.", "keywords": null, "reference": null}, {"id": "000093710", "problem_statement": "The AeBRSClientConfigure.exe tool is used on client machines that will be connecting to the Aspen Production Execution Manager server using the MOC (Manufacturing Operations & Control) application.", "solution": "The AeBRSClientConfigure.exe tool will update the AeBRS.CMD file, which is needed to run the MOC application. This tool will connect to the server hosting the Aspen Production Execution Manager and use the configuration settings of the server to set up the client machine.\n\nNote: This tool will update the AeBRS.CMD file and run the codify_all.bat located in the AeBRS directory.\n\n1. Log in to the client machine as a network domain user who is an administrator for both the client and server machines.\n2. Locate the tool under: C:\\Program Files (x86)\\AspenTech\\AeBRS\n3. Launch the AeBRSClientConfigure.exe tool and answer Yes when asked for program elevation.\n4. Enter the server name to connect to in the launched dialog box and click OK.\n\nThis action is mandatory before running the MOC application.", "keywords": null, "reference": null}, {"id": "000100242", "problem_statement": "What could cause \"Error 1053: The service did not respond to the start or control request in a timely fashion\" when restarting the Apache Tomcat service?", "solution": "This error can be caused by the Tomcat service shutdown port 8005 being used by another application.\n\nWe can check the catalina.log in \"C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27\\logs\". If we see an entry \"SEVERE [main] org.apache.catalina.core.StandardServer.await Failed to create server shutdown socket on address [localhost] and port [8005] (base port [8005] and offset [0])\", this indicates that port 8005 is being used by another application. \n\nIn a Windows OS Command Prompt, run netstat -aon | findstr \":8005\" to find out which application is using port 8005.\n\n\n\nIf the application is required to use port 8005, we can open the server.xml (C:\\Program Files\\Common Files\\AspenTech Shared\\Tomcat9.0.27\\conf) and modify the shutdown port in the <Server> element to another port that is not being used.\n\n\n\nAfter modifying the server.xml, please restart the \"Apache Tomcat x.x Tomcat\" service and \"SolrWindowsService\"; both should now run without issue. If the issue persists, please contact Aspen support. \n\nNote: In June 2023 a customer experienced the problem with port 8005 being in use, but it did not manifest with Error 1053.
In that situation the Apache Tomcat service would start normally but would crash on attempts to access aspenONE Process Explorer (A1PE) or the A1PE Admin.\n KeyWords\nError 1053\nTomcat service\nport 8005", "keywords": null, "reference": null}, {"id": "000083404", "problem_statement": "How can I model the equilibrium reaction C (solid) + CO2 <==> 2 CO (Boudouard reaction)?", "solution": "It is possible to use an RGIBBS block to model this reaction; however, you need to specify the component types carefully.\nFirst, on the Components | Specifications form, we need to select the component type \"Solid\" for C. This means that solid property models will be used to calculate carbon properties. In addition, if solid substreams such as CISOLID are used, the graphical interface will show C as a choice when a solid component can be specified.\n\n\n\nThis RGIBBS can handle solids in either the MIXED or CISOLID substreams. The CISOLID (Conventional Inert Solid) substream is inert to phase equilibrium but not to chemical reaction equilibrium.\nThe two attached files can be opened in V11 and higher. The file rgibbsV11conven.bkp uses the MIXED substream for all of the components, and rgibbsV11cisolid.bkp uses the CISOLID substream for the carbon.\n\nTo use the CISOLID substream for carbon, on the Setup | Specifications form, select the stream class MIXCISLD so that the solid carbon (coke) will be in the CISOLID substream while CO and CO2 will be in the MIXED substream (which represents the vapor and liquid phases). Using a solid substream will make it easier to separate the solid using solid unit operation models.\n\nTo use the MIXED substream for carbon, on the Setup | Specifications form, select the stream class CONVEN so that the solid carbon (coke) will be in the MIXED substream along with the CO and CO2.\nIn the feed stream, we specify some feed flowrate of CO2 in the MIXED substream. The flowrate of C is put in the CISOLID substream or the MIXED substream depending on which option is used.\nIn the RGIBBS reactor, we specify the possible products and phases. For C we select the PureSolid option. For CO and CO2 we select the Mixed phases. This specification is the same if using only the MIXED substream or if using the CISOLID substream also.\nThe example contains a sensitivity analysis block which varies the temperature in the RGIBBS block to illustrate that the CO/CO2 equilibrium is displaced with temperature (low temperature favors CO2 while high temperature favors CO).\n\nNote that the same approach can be used to handle other reactions such as coking from CH4 or other hydrocarbons. Just make sure to enter all possible products (H2, H2O) in the list of components and in the products in RGIBBS.", "keywords": null, "reference": null}, {"id": "000092164", "problem_statement": "This Knowledge Base article answers the following question: How do I configure Aspen Manufacturing Master Data Manager to use Dynamic Data?", "solution": "Aspen Manufacturing Master Data Manager (mMDM) is primarily designed to manage data that changes infrequently for an enterprise, which is often referred to as \"master data\" or \"reference data\". But another goal of mMDM is to provide a single source for all information that may be required by subordinate Aspen applications. This includes data that resides in other relational databases or real-time historians, such as Aspen InfoPlus.21. We refer to this type of data as \"dynamic data\" to differentiate it from static master data.
Support for dynamic data allows mMDM to act as a data aggregator by providing a single programming interface that gives access to data values throughout an enterprise.\nmMDM supports the following sources for dynamic data:\nReal-time Historians\no InfoPlus.21 via Aspen Process Data\no Other historians supported by Aspen Process Data, such as PI\no InfoPlus.21 via Aspen Enterprise Connect\nRelational Databases\no Microsoft SQL Server\no Oracle\nBuilt-in mMDM Dynamic Data Definitions (tags)\n\nHere are the overall steps for configuring and using dynamic data:\n1. Identify the external data source(s) from which you will be obtaining data. If needed, configure the network communication environment, such as configuring the TCP/IP protocol on the computers, and opening any required firewall ports related to the external data source.\n2. Using the mMDM Advanced Editor, create dynamic data source definitions for each external data source. See section 2, \"Configuring Data Sources\", in the attached document titled \"Configuring mMDM to use Dynamic Data\".\n3. If the data source is of type \"Database\", then define one or more SQL Query definitions. See section 3, \"Configuring SQL Queries\".\n4. If not already existing, use the mMDM Advanced Editor to create classes that have dynamic class attributes. See section 4, \"Configuring Dynamic Class Attributes\".\n5. Using the mMDM Advanced Editor, create a definition, then apply the class (from step 4) to it. Configure the value property for each dynamic attribute in the class instance. See section 5, \"Configuring Dynamic Class Attribute Values\".\n6. You can also use features in the mMDM Viewer to show the values. See section 6, \"Testing Dynamic Data Values\".\n\nNote: The attachments section also contains the \"Aspen ADSA XML Extractor Tool\", which can be used to obtain the XML text for an ADSA data source. You will also find a document with an example for configuring dynamic class attributes in mMDM that refer to values in an RDBMS database table.", "keywords": "122536-2\nAspen Operations Domain Model (ODM)", "reference": null}, {"id": "000101424", "problem_statement": "What is the Heat Exchanger Network (HEN) in Aspen Energy Analyzer?", "solution": "Heat integration in Aspen Energy Analyzer (AEA) is designed for analyzing and improving the performance of heat exchanger networks (HEN).\nAspen Energy Analyzer focuses on analyzing the networks from an operations as well as a design point of view.\nThe heat exchanger network (HEN) design diagram (also known as the Grid Diagram) is an image that displays how individual process and utility streams are matched with one another using heat exchangers.\nFor example, consider the following process and its corresponding Grid Diagram.\n\n\n\n\nTypically, the Grid Diagram consists of the following components.\n\nStreams\nThe Grid Diagram contains two types of streams: process streams (hot and cold) and utility streams (hot and cold).\n\na) Hot and Cold Process Streams\nThe process streams in a heat exchanger network can be categorized into two types:\n COLD. A cold process stream is heated up in the heat exchange network. The inlet temperature of a cold process stream is lower than the outlet temperature.\n HOT. A hot process stream is cooled down in the heat exchange network.
The inlet temperature of a hot process stream is higher than the outlet temperature.\n\nAspen Energy Analyzer determines the type of stream based on the inlet and outlet temperature of the process stream.\n\nb) Hot and Cold Utility Streams\nThe utility streams in a heat exchanger network can be categorized into two types:\n COLD. A cold utility stream is heated up in the heat exchange network. The inlet temperature of a cold utility stream is lower than the outlet temperature.\n HOT. A hot utility stream is cooled down in the heat exchange network. The inlet temperature of a hot utility stream is higher than the outlet temperature.\n\nHeat Exchangers\nThe Grid Diagram contains two types of heat exchangers: process-to-process and utility.\n\na) Process-to-Process Exchangers\nThe process-to-process heat exchangers on the HEN appear as gray colored discs in the AEA Grid Diagram.\nEach disc lies on top of the process stream that flows through the exchanger.\n\nb) Utility Exchangers\nThe utility exchangers on the HEN appear as red or blue colored discs. One disc is laid on top of the process stream that flows through the exchanger, and the other disc is laid on top of the utility stream that flows through the exchanger. Red discs indicate heaters and blue discs indicate coolers in the AEA Grid Diagram.\n\nSplitters and Mixers\nOn the HEN, a split in a process stream is always followed by a mixer to direct all the branch streams back to one main process stream.\nA process stream can have more than one split-mixer, and a split can have more than two branches.", "keywords": "AEA, Pinch Analysis, Heat Integration, Grid Diagram", "reference": null}, {"id": "000079364", "problem_statement": "The archive information is not stored within a record in the InfoPlus.21 database but rather in shared memory. When the database is stopped, the archive parameters are persisted in the config.dat file. The config.dat file contains the general configuration information for the history system, such as the number of repositories, their location, the number of archives in each repository, etc. When a config.dat file is corrupted or missing, the config.dat file can be recreated manually if a backup is not available.", "solution": "The InfoPlus.21 database can be started without a config.dat file, but if so the database is started with the default repository (TSK_DHIS), which will have three file sets. In this case, additional repositories and file sets must be recreated. When the database is shut down, a new config.dat file is created and the new repositories and file sets will be created.", "keywords": "config.dat\nhistory\nFormerly KB 108732-2", "reference": null}, {"id": "000065024", "problem_statement": "Is it possible to call the library property procedures within my own procedures? How to call GPI subroutines in my procedures?\nFor example, I would like to call the subroutine that implements pDens_mol_liq within my own user procedure. How can this be done?", "solution": "The physical property procedures (e.g. pDens_mol_liq) of the Aspen Custom Modeler (ACM) Modeler library are implemented as Fortran subroutines. We call these and other subroutines of the same class the GPI (generalized properties interface). Those subroutines are documented in the ACM on-line help. These routines are located in a library supplied with ACM, gpp.dll, which itself calls Aspen Properties subroutines.\nThese routines can be used to evaluate properties inside a user subroutine.\nThe user procedure should be declared with the PROPERTIES option.
See the attached file for an example.\nFor the linking of the dll, there's no need to modify the MakeUserCode that is generated by ACM. \n\nLet's dissect the example. For the sake of illustration, let's pretend we want a procedure that returns the liquid molar density in mol/m3. Let's remember pDens_mol_liq returns the density in kmol/m3. The first step is to figure out the name of the subroutine that implements pDens_mol_liq. You can find this in the documentation. If you look at the procedure definition in the Modeler library, you can also find the name of the subroutine with analytical derivatives. While undocumented, the subroutines without analytical derivatives are also available, and in our case it is GPILMX. Their declarations are in the library Modeler_NOAPD (found in c:\\program files\\AspenTech\\AMSystem V12.1\\lib). We're going to use this one instead of the one with derivatives to keep the example simple.\n PROCEDURE pDens_Mol_Liq\n // Specific Liquid Molar Density \n CALL : gpilmx;\n LIBRARY : \"gpp.dll\";\n IMPLEMENTATION : SUBROUTINE;\n LANGUAGE : FORTRAN;\n OPTIONS : PROPERTIES;\n INPUTS:\n temperature,\n pressure,\n molefraction(*);\n OUTPUTS:\n dens_mol;\nEND\n Note the one with derivatives has almost the same declaration, but the implementation is GPIDLMX and it has extra arguments to deal with derivatives. We're not going to use this one.\nPROCEDURE pDens_Mol_Liq\n // Specific Liquid Molar Density with Analytic Derivatives \n CALL : gpidlmx;\n LIBRARY : \"gpp.dll\";\n IMPLEMENTATION : SUBROUTINE;\n LANGUAGE : FORTRAN;\n OPTIONS : PROPERTIES, DERIVATIVES;\n INPUTS:\n temperature,\n pressure,\n molefraction(*);\n OUTPUTS:\n dens_mol;\nEND\nThe list of arguments of the subroutine can be obtained by carefully reading the documentation, or by copying the procedure declaration in the Custom Modeling, Procedure folder in Simulation Explorer and generating a template (from the Tools menu, Generate Procedure Code). To call the subroutine, the following code can be used. Note the argument IPROP, with the odd name of \"stream number\" (that's how this was called in SPEEDUP, the previous implementation of ACM). This is set by ACM to the corresponding component list specified in the model's code when calling the procedure. Since we are calling GPI subroutines, we just need to pass that \"stream number\" to the subroutine and not worry any further.\nSUBROUTINE DENSTY(T,P,X,N,RHOL,IFL,IPROP)\nC evaluate molar density using GPI, but return the value\nC in mol/m3 instead of kmol/m3 (this is an example)\n DOUBLE PRECISION T ! temperature (in, K)\n DOUBLE PRECISION P ! pressure (in, bar)\n INTEGER N ! number of components (in)\n DOUBLE PRECISION X(N) ! mole fraction (in, -)\n DOUBLE PRECISION RHOL ! molar density (out, mol/m3)\n INTEGER IFL ! calculation flag (see documentation, in)\n INTEGER IPROP !
stream number (see documentation, in)\n\n CALL GPILMX(T, P, X, N, RHOL, IFL, IPROP)\n RHOL = 1D3*RHOL\n\n RETURN\n END\n\nThis is the implementation of our user procedure, with the following declaration:\nProcedure pDENSTY\n Library : \"libproc.dll\";\n Language : FORTRAN;\n Inputs : temperature, pressure, molefraction(*);\n Outputs : D_MOL_LIQ;\n Options : Properties;\n Call : \"DENSTY\";\n Implementation : SUBROUTINE \"densty.f\";\nEnd\nAnd finally, this is how we can call our user procedure in your model:\nModel DENS_CALC\n RHOL as D_MOL_LIQ (description:\"molar density in mol/m3\", useuomof:\"mol/m3\");\n inlet as input MoleFractionPort;\n // our procedure\n Call (RHOL) = pDENSTY (inlet.T, inlet.p, inlet.z);\n\n // ACM procedure\n rhol_ as dens_mol;\n call (rhol_) = pDens_mol_liq(inlet.T, inlet.p, inlet.z);\nEnd\n\nIt is also possible to call the GPI subroutines from a C++ implementation of a user procedure. The only difficulty is to figure out the appropriate interlanguage conventions. Assuming you're using the C++ compilation settings in ACM MakeUserCode, you can use the macros defined in the include file atucfor.h to handle C/Fortran conversion.\nHere's the user procedure declaration for C++. This is pretty much the same as Fortran, with the obvious difference that we use \"language: \"C++\"\"...\n Procedure pDENSITY_CPP\n library: \"prop_cpp.dll\";\n call: density_cpp;\n implementation: subroutine \"density_cpp.cpp\";\n language: \"C++\";\n options: properties;\n inputs: temperature, pressure, molefraction(*);\n outputs: D_MOL_LIQ;\nEnd\n\nTo generate the code, the easiest way is to use Tools, Generate User Code, then modify the C++ code.\n/*** Code for Procedure Definition pDENSITY_CPP ***/\n#include \"density_cpp.h\"\n#include \n\nEXT_C_AS_C(void) ACM_Print(int status, char *_Zform, ...);\nEXT_C_AS_C(int) ACM_Rqst(int *option, int *iinfo, double *xinfo, char **sinfo );\n\nEXT_VOID_F GPILMX(double *T, double *p, double *x, __int3264 *n, double *rho, __int3264 *pIfail, __int3264 *pPropsId);\n/* Values for pIFail return code */\n#define OK 1\n#define WARN 2\n#define SEVERE 3\n#define FATAL 4\n\nDLL_C_AS_F(void) DENSITY_CPP_C(\n double *pInputs,__int3264 *pInSizes,\n double *pOutputs,__int3264 *pOutSizes,\n __int3264 *pInOffs,__int3264 *pOutOffs,__int3264 *pIfail,\n __int3264 *pPropsId,double *pWork,__int3264 *pWSize,\n double *pDeriv,__int3264 *pNOut,__int3264 *pNIn,\n __int3264 *pICall)\n{\n /* Isolate local integers from FORTRAN-style interface */\n int PropsId = *pPropsId;\n /* Ifail status flag in *pIfail */\n /* Properties identified by PropsId */\n\n /* Procedure input/output arguments access */\n double T = pInputs[pInOffs[0]];\n double p = pInputs[pInOffs[1]];\n double *x = (double *)(&pInputs[pInOffs[2]]);\n int IArg3Dims = 1; \n int IArg3DimsSizes[1] = {pInSizes[0]};\n double *rho = (double *)(&pOutputs[pOutOffs[0]]);\n\n /************** Put your C code here **************/\n __int3264 ncomp;\n ncomp = pInSizes[0];\n GPILMX(&T, &p, x, &ncomp, rho, pIfail, pPropsId);\n *rho = *rho * 1000.;\n *pIfail = OK;\n /*************** End of User C Code ***************/\n}\nNote the use of the integer type \"__int3264\" instead of \"int\". This is needed to get the right type of integer.
Knowing that the Fortran calling convention passes arguments by reference (pointers), the code should be straightforward to compare to the previous Fortran implementation.\nThe complete example (Fortran and C++) is attached.\n\nKeyWords\nGPI, GPP, properties", "keywords": null, "reference": null}, {"id": "000101772", "problem_statement": "This article describes which file types can be imported into IQModel", "solution": "IQModel can import different types of text-based files, including txt (tab delimited and CSV), clc files and vec files.\n\nNevertheless, in the case of text files, ideally make sure they are tab delimited and contain no special characters or punctuation signs; otherwise this can lead to problems importing the file or even crash the application.\n\nIn the case of CSV files (as with regular Excel files), an easy way to make sure they will be read as Text Tab Delimited files is to save the files using that format.\n\nThis can be done following some easy steps:\n\n1. Open the CSV file using Excel\n2. In Excel select File > Save As > Browse\n3. When the save file prompt appears, make sure to select Text (Tab delimited) in the \u201cSave as type:\u201d drop-down list\n4. As a final health check, open the saved txt file with Notepad and verify no commas or other punctuation characters are in the file.", "keywords": "DMC3, OPC, Delta V", "reference": null}, {"id": "000091502", "problem_statement": "When the chemical reaction rate is significantly faster than the mass transfer through a stationary liquid or vapor film, we have a mass transfer controlled system. In this case, because of the very fast chemical reaction rate, we should not consider a linear concentration profile between the bulk and the interface. Consequently, it is important to discretize the liquid or vapor film to compute an accurate concentration profile; typically, a log shaped profile.\nUpon selecting the option to discretize the liquid and/or vapor film, under Pack/Tray Rating | Rate-based | Rate Based (tab) (Legacy hydraulics) or in Rate-Based Modeling, Rate-based Setup, Sections tab when using Column Internals, how do we plot concentration profiles of a liquid and/or vapor film?", "solution": "The purpose of film discretization is to deal with very fast reactions in the film, and therefore it is only turned on for rate-based stages with reaction.\n\nYou need to increase the diagnostic level to 5 or more before running the simulation in the RadFrac block, Specifications, Block Options, Diagnostics:\n\n\n\nWe can see the values of the concentration profiles of the liquid and/or vapor discretized film, variable XYFM, in the History file.\n To view the History file, go to the Home tab of the Ribbon and click the History button.\n Assuming that:\nn = number of stages with chemical reaction;\ndl = number of discretization points - liquid film;\ndv = number of discretization points - vapor film;\n then we have n*dl + n*dv lines.
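For example, assuming a hypothetical column with n = 4 rate-based stages with reaction, dl = 5 liquid-film discretization points, and dv = 5 vapor-film discretization points, the history file would report 4*5 + 4*5 = 40 lines of film values per pass.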
The values are shown in the following order:\nLiquid internal discretization points for stage 1 if applicable (rate-based stage with liquid discretization)\nVapor internal discretization points for stage 1 if applicable (rate-based stage with vapor discretization)\n Liquid internal discretization points for stage 2 if applicable\nVapor internal discretization points for stage 2 if applicable\n Liquid internal discretization points for stage 3 if applicable\nVapor internal discretization points for stage 3 if applicable\n:\n:\n For each stage/phase combination, the discretization point next to the interface is shown first, whereas the one next to the bulk is shown last.\n Please note:\n(i) The concentrations in the bulk, XI and YI, and at the interface, XINTF and YINTF, are reported separately.\n(ii) If we increase the number of stages without extending the tray/pack-rate section, these additional stages are set to be equilibrium stages, and hence the film in these stages is not discretized.\n(iii) There are more sections in the history log, but those are undocumented; they are for internal debugging purposes only and may not be useful to our users.\n\n1. RD3SLV - internal subroutine name where the variables are printed from\n2. XI - liquid mole fraction at the vapor-liquid interface\n3. KVL - K-values at the vapor-liquid interface\n4. RL - internal variable related to liquid flow fraction\n5. PREKL - internal variable related to liquid mass transfer coefficients\n6. XYFM - internal variable related to mole fractions in the film\n7. FLUXFM - internal variable related to the mass transfer flux in the film\n8. BRXTNT - internal variable related to reaction extents in the bulk phase\n9. FRXTNT - internal variable related to reaction extents in the film\n10. EPOTN - internal variable related to electrical potential \n11. RSRD3SLV - internal subroutine name where the variables are printed from\n12. RSRD3SLVUF - internal variable related to feed enthalpy\n13. RSRD3SLVQ - internal variable related to heat duty\n14. RSRD3SLVWL - internal variable related to liquid side draw\n15. RSRD3SLVWV - internal variable related to vapor side draw", "keywords": "RadFrac, Rate-based, discretization, film, reaction", "reference": null}, {"id": "000064607", "problem_statement": "Does AspenTech support sharing the License Manager with other 3rd party application licenses?\n\nScenario\n\nWhen trying to install an AspenTech license on the same server as another 3rd party application license.", "solution": "AspenTech does not support sharing the License Manager with other 3rd party application licenses.\n\nAs AspenTech's best practice, when you have an Aspen license and another 3rd party application license, you should have them installed on separate license servers. \n\nWe do not recommend having two different licenses on the same server because it can cause conflicts with our software and it will report inaccurate usage logs.", "keywords": "Sentinel RMS, License Manager, SLM", "reference": null}, {"id": "000101404", "problem_statement": "This article describes some settings that need to be checked to enable IQ history data collection on the AspenWatch database", "solution": "IMPORTANT NOTES:\nIQ Watch is only available from V11. Previous versions do not support this feature.\nThe IQ Watch collection requires additional tokens and a license key.
During the deployment this is checked, and if the license is not available IQ will fail to deploy.\nThe attached PDF file shows a step-by-step guide on how this configuration can be done.\nThe procedure is broken into three parts:\n1. AspenWatch Server Check\n2. Online Server Check\n3. Verification on PCWS\n\nPlease contact AspenTech Support if you have further questions or encounter any further problems.", "keywords": "IQ, AspenWatch, PCWS", "reference": null}, {"id": "000101403", "problem_statement": "Can we use a \".vue\" file (from SP3D) or an .nwd file in Aspen OptiPlant 3D Layout?", "solution": "No, Aspen OptiPlant does not read .vue files (from SP3D) or .nwd files directly.\n\nHowever, both files can be exported to DXF from SP3D or Navisworks, and the DXF can be read in OptiPlant.", "keywords": "SP3D, OptiPlant, files, format", "reference": null}, {"id": "000101400", "problem_statement": "How to get rid of the following error message \"Utility Hydrate Formation is not available with the property package being used\" in Aspen HYSYS?", "solution": "In order to get rid of the following error message:\n\n\nyou will need to use the Peng-Robinson or SRK fluid package from the HYSYS databank, as the Hydrate utility only works with PR and SRK.", "keywords": "Aspen HYSYS, Error, Hydrate Utility", "reference": null}, {"id": "000101399", "problem_statement": "How to get rid of the following error message \"STREAM INITIAL FLASH FAILED. SEE HISTORY FILE FOR MORE INFORMATION\" in Aspen Plus?", "solution": "In order to get rid of the error message \"STREAM INITIAL FLASH FAILED. SEE HISTORY FILE FOR MORE INFORMATION\", you will need to open the inlet stream, change \"Flash Options\" to \"Liquid Only\", make sure to add H2O as solvent, and then re-run the simulation.", "keywords": "Aspen Plus, Error, FLASH FAILED", "reference": null}, {"id": "000100762", "problem_statement": "What is the AspenTech System Name and how to find it", "solution": "A System Name is a unique identifier for each license. This identifier helps both customers and AspenTech keep track of the licenses being issued. This knowledge base article describes how to locate the System Name in your license file, which you might require when submitting a license request or media upgrade request from the AspenTech support website.\n\nMethod 1\nGetting the System Name from the license file directly\n\n- Right click on your license file and select open with \"Notepad\".\n- After opening the license file in Notepad, scroll to the extreme right end of the file.\n- Now you will be able to see an entry called \u201cSystem Name: xxxxxxxx.x\u201d, which is your license file System Name. \n- Refer to the screenshot for details\n\n\n\n\nMethod 2\nGetting the System Name from the aspenONE SLM License Manager (SLM)\n\n- Open the aspenONE SLM License Manager (SLM).\n- Click on Load Server Details\n\n\n\nNow you will be able to see your current license file information,\nwhere \u201cSystem : xxxxxxxxx\u201d represents your license file System Name.\nRefer to the screenshot below for more details.", "keywords": "SLM\nSystem Name\nLicense Key Request \nLicense file", "reference": null}, {"id": "000095379", "problem_statement": "Is there a way to tell when Aspen InfoPlus.21 (IP.21) was last started?", "solution": "Yes!
Open the Aspen InfoPlus.21 Administrator, navigate to the SAVETaskDef definition, and look for the child record called TSK_SAVE (or simply search for TSK_SAVE directly).\n\nThe fixed area of TSK_SAVE has the fields\n\n LAST_LOAD_TIME\n and\n LAST_SAVE_TIME\n\nThese fields provide the time when the IP.21 database was started and the last time when a snapshot was saved, respectively.", "keywords": "IP.21 start\nLoad time\nSave time\n 128321-2", "reference": null}, {"id": "000101394", "problem_statement": "MB Package Files", "solution": "Before proceeding, first please ensure you install Emergency Patch 12 (or later) for Aspen HYSYS V14 via the following link:\nhttps://esupport.aspentech.com/apex/S_SoftwareDeliveryDetail?id=a0eDp000000vVEpIAM\n\nEmergency Patch 12 (EP12) is particularly focused on molecule-based models. Specifically, it addresses issues which may occur during the propagation of a molecular profile among multiple units in the flowsheet (e.g., recycle stream, entering new pure component streams, update MB superset).\n\nThe MB self-extractor package addresses the mass balance and numerical precision in MB models for the selected scenario. After installing patch EP12 first per the link above, you may then download the MB package attached to this article. It may be installed by running the self-extractor package with Administrator permission. This process will overwrite files in the installation folder\\refsys\\refreactor.\n\nPlease see the attached instruction file and extractor package.\n\nIn the future, the MB package files may be updated for repairs and improvements. Return to this article to find the most recent version of the files.", "keywords": null, "reference": null}, {"id": "000100161", "problem_statement": "When attempting to log into Aspen Cloud Connect with Windows Authentication enabled, the error messages in the screenshots below may be encountered.\n\n\nOr", "solution": "This document explains what to do when you fail to log in to Aspen Cloud Connect when Windows Authentication is enabled. We can use the Windows tool LDP.exe to check communication to the LDAP Server with specific credentials. In a typical server installation, the LDP.exe tool may not be installed. \n\n\n\nThe following steps can be used to install the Windows LDP tool:\n1. Open the Server Manager tool in Windows\n2. Navigate to the Roles configuration setting\n3. Select the Add Roles link\n4. Work through the Add Roles Wizard\n5. Check the Active Directory Lightweight Directory Services.\n\n\n\n\n\nOnce the installation is completed, open LDP.exe from the Windows Run command. \nThe file can be found in the C:\\Windows\\System32 folder\n\n\nClick on Connection and select Connect.\nEnter the LDAP Server hostname (FQDN \u2013 Fully Qualified Domain Name)\nLDP connection \u2013 Port 389\n\n\nOr\n\nLDAPS connection \u2013 Port 636 (check the SSL checkbox) \n\n\nA successful connection is shown in the screenshot below\n\n\nIf it fails, contact the customer\u2019s IT team to investigate whether there is any network restriction between the Aspen Cloud Connect Server and the LDAP Server.", "keywords": "Windows Authentication\nLDP", "reference": null}, {"id": "000083429", "problem_statement": "Can I open a higher-version Aspen Plus backup (.bkp) file in a lower version? For example, how can I open a file from V14 in V12.1?", "solution": "There is a trick that can be used to open files in older versions. This feature is not supported, and it does not work for some files.
New features will not be supported. It is up to the user to check the file.\nFiles can sometimes be opened after downgrading the version number in a text editor such as Notepad and saving the file.\nFor example, suppose a user has a V14 file A.bkp that they want to open in V12.1. \nRight click on the file \"A.bkp\", select Open with Notepad, and it will display information like the following: MM \"40.0\" FLAVOR \"NO\" VERSION \"40.0\"...\nChange the number from the higher version to the lower version. For example, change \"40.0\" to \"39.0\". Use the table below to find the version number, e.g. MM \"39.0\" FLAVOR \"NO\" VERSION \"39.0\"...\nChange all references to APED from the higher version to the lower version. For example, change V140 to V121. Go to the Edit menu and select Replace. There will be many occurrences if APED was used for the databanks.\nIf a more recent version of the PURE databank was used, change it to the most recent version for the version you want to use. For example, change PURE40 to PURE39 for V12.1.\nFind the string \"$_APWNSHELL_SETTINGS\" and remove the entire section to the next \"#\" character, leaving only one \"#\" before the next section \"PFSVData\".\nRemove the ADS section. Find the string $_ADS_FILE and remove the entire section to the next \"#\" character.\nSave it under a different name, e.g. \"B.bkp\", and open it from V12.1.\n\nPlease NOTE the following limitations:\nColumn Analytics defined in V9+ cannot be back-converted to earlier versions without error, but offending blocks can be re-defined after conversion.\nStream-Groups for Aspen Batch Modeler were added in V10 (Boundary and Charge), and will have to be deleted after conversion.\n\nA utility program that automates this procedure is available in the following solution document:\nConvert File Version Utility for Aspen Plus\n\n\nVersion numbers:\nVersion | MM | VERSION | APED Prefix | New PURE databank\nV14.0 | 40.0 | 40.0 | APV140, NISTV140, APESV140 | PURE40\nV12.1 | 39.0 | 39.0 | APV121, NISTV121, APESV121 | PURE39\nV12.0 | 38.0 | 38.0 | APV120, NISTV120, APESV120 | PURE38\nV11.0 | 37.0 | 37.0 | APV110, NISTV110, APESV110 | PURE37\nV10.0 | 36.0 | 36.0 | APV100, NISTV100, APESV100 | PURE36\nV9.0 | 35.0 | 35.0 | APV90, NISTV90, APEOSV90 | PURE35\nV8.8 | 34.0 | 34.0 | APV88 | PURE32\nV8.6 | 32.0 | 32.0 | APV86 | PURE32\nV8.4 | 30.0 | 30.0 | APV84 | PURE28\nV8.2 | 28.0 | 28.0 | APV82 | PURE28\nV8.0 | 27.0 | 27.0 | APV80 | PURE27\nV7.3.2 | 26.0 | 26.0 | APV732 | PURE26\nV7.3 | 25.0 | 25.0 | APV73 | PURE25\nV7.2 | 24.0 | 24.0 | APV72 | PURE24\nV7.1 | 23.0 | 23.0 | APV71 | PURE22\nV7.0 | 22.0 | 22.0 | APV70 | PURE22\nV2006.5 | 21.0 | 21.0 | APV065 | PURE20\nV2006 | 20.0 | 20.0 | APV06 | PURE20\n \nFixed in Release\nStarting with version V7.2, Aspen Plus will allow you to open backup files from newer versions, with a warning that features not supported in this version will be lost in doing so. However, databanks are likely to be missing.", "keywords": "Version, convert, higher version, lower version", "reference": null}, {"id": "000083568", "problem_statement": "When using RGibbs with a restricted equilibrium, how many reactions are needed?", "solution": "If the Calculate Options in RGibbs is set to Restrict chemical equilibrium, RGibbs will restrict chemical equilibrium either for the entire system or for specified equations, using a specified temperature approach to equilibrium or reaction extents.
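As an aside, the linear independence requirement discussed below can be tested with a standard rank check on the stoichiometric coefficient matrix. The following is a minimal sketch in Python (assuming numpy is available); the reaction set and component ordering are illustrative only, not taken from this article:\nimport numpy as np\n\n# Rows are reactions; columns are components ordered as [C, CO, CO2, O2].\n# Coefficients: products positive, reactants negative.\nS = np.array([\n    [-1,  2, -1,  0],   # C + CO2 --> 2 CO\n    [-1,  0,  1, -1],   # C + O2 --> CO2\n    [ 0, -2,  2, -1],   # 2 CO + O2 --> 2 CO2 (equals row 2 minus row 1)\n])\n\nrank = np.linalg.matrix_rank(S)\nprint('Independent reactions:', rank, 'of', S.shape[0])\n# rank < number of rows means the set is linearly dependent (here 2 of 3).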
If you want RGibbs to consider only a specific set of reactions, you must specify the stoichiometric coefficients for a complete set of linearly independent chemical reactions, even if only one reaction is restricted.\n\nWith the Temperature approach or molar extent for individual reactions option, if you do NOT specify a molar extent or temperature approach (that is, all reactions are set to the default 0 temperature approach), then RGibbs ignores the reactions. In this case, no restrictions are enforced on the reactions specified.\n\nIf any reaction in the network can be written as a linear combination of the other reactions, then the set of reactions is not independent.\nA set of reactions is linearly independent if and only if the determinant of the Jacobian matrix is nonzero. If the determinant is zero, then the set of reactions is linearly dependent.\nThe number of linearly independent reactions required generally, but not always, equals the total number of products in the product list minus the number of atoms present in the system. The reactions must involve all participating components.\nNrxn = Nprod - Natoms\nNrxn is the number of reactions\nNprod is the number of products\nNatoms is the number of atoms\nREquil does not need to have a complete linearly independent set of chemical reactions.\n\nChemistry will check whether the set of reactions is linearly independent and generate a warning similar to the following if it is not.\n\n * WARNING\n THE FOLLOWING REACTIONS ARE LINEARLY DEPENDENT IN CHEMISTRY \"GLOBAL\",\n AND MAY CAUSE CHEMICAL EQUILIBRIUM CALCULATIONS TO EITHER FAIL OR\n CONVERGE TO INCORRECT RESULTS:\n For more information on the independence of chemical reactions see the following notes:\n\nStoichiometry of Chemical Reactions by James B. Rawlings, Department of Chemical and Biological Engineering, University of Wisconsin-Madison", "keywords": null, "reference": null}, {"id": "000100998", "problem_statement": "This knowledge base article illustrates how to uninstall any AspenTech software in V14", "solution": "If you need to uninstall AspenTech software, you can use the AspenOne Manager to do so.\nHere is a step-by-step guide on how to use the AspenOne Manager in version 14.\nStep 1: Open the AspenOne Manager\n\nTo open the AspenOne Manager, navigate to the Start Menu and search for \"AspenOne Manager\", or find it under the Aspen Configuration folder in the Start Menu. Click on the program to open it.\n\n\n\n\nThen launch the Uninstaller: \n\nStep 2: Select the component to uninstall.\nOnce the Uninstaller is open, you will see a list of all the components that are currently installed on your computer. Select the one you want to uninstall by clicking on the checkbox next to its name.\n\n\n\nStep 3: Uninstall the component:\nAfter selecting the component you want to uninstall, click on the \"Uninstall\" button at the bottom of the window. This will start the uninstallation process.\n\nStep 4: Follow the prompts:\nDuring the uninstallation process, you will be prompted to confirm that you want to remove the selected component. Follow the prompts to complete the uninstallation process. You may be asked to restart your computer after the uninstallation is complete.\n\nStep 5: Verify the component has been uninstalled:\nAfter the uninstallation process is complete, you should verify that the component has been removed from your computer. To do this, navigate to the Start Menu and search for the component.
If it no longer appears in the search results, then it has been successfully uninstalled.\n\nIn summary, the AspenOne Manager is a straightforward tool that allows you to uninstall components of the AspenOne suite and other functionality. By following these steps, you can easily remove any unwanted or outdated components from your computer.", "keywords": "V14, Uninstall, AspenOne Manager", "reference": null}, {"id": "000049932", "problem_statement": "Which folders should be excluded from virus scanning on a machine running the Aspen Advanced Process Control (APC) Software Suite?", "solution": "The following folders and directories may be excluded from virus and malware scanning software on any machine running Aspen APC software. By excluding certain directories and files, you can avoid occurrences of false \"positive\" identifications of alleged malware that may occur during normal operation of AspenTech software.\n\nYou can exclude by folder (faster to implement with fewer entries) or by file (more effort to implement but better granularity of exclusions). \n Files / Folder to Exclude | Reason for Exclusion\nProgram Files (x86)\\AspenTech\\APC\\Online\\bin\\*.exe | ACO platform DMC/DMC3 controllers running late or experiencing \u201clock timeout\u201d errors\nProgramData\\AspenTech\\APC\\Online\\sys\\...\\*.* | (same as above)\nProgramData\\AspenTech\\APC\\Online\\app\\...\\*.* | (same as above)\nProgram Files (x86)\\AspenTech\\Cim-IO\\code\\*.exe | Cim-IO performance issues\nProgram Files (x86)\\Common Files\\OPC Foundation\\*.exe | Direct OPC communication issues between Cim-IO, RTE, InfoPlus.21 or GDOT\nProgram Files (x86)\\Common Files\\OPC Foundation\\*.dll | (same as above)\nProgram Files (x86)\\Common Files\\AspenTech Shared\\*.dll | Cim-IO performance issues\nProgram Files (x86)\\AspenTech\\RTE\\...\\*.exe *.dll | RTE platform controller performance issues\nProgramData\\AspenTech\\RTE\\{version}\\*.* | RTE platform controller data file issues\nC:\\windows\\temp\\tmp*.tmp | Windows Defender may incorrectly flag these temporary files that are created and deleted by the RTE controller every cycle.
Only exclude if necessary.\nProgram Files (x86)\\AspenTech\\APC\\Performance Monitor\\*.exe | Aspen Watch related performance issues\nProgram Files (x86)\\AspenTech\\Cim-IO\\io\\...\\*.exe | Aspen Watch controller data collection performance issues (ACO platform)\nProgramData\\AspenTech\\APC\\Performance Monitor\\App\\\\*.* | Aspen Watch controller data collection issues (both RTE and ACO)\nProgram Files (x86)\\AspenTech\\InfoPlus.21\\c21\\h21\\bin\\*.exe | Aspen Watch InfoPlus.21 historian performance issues (InfoPlus.21 32-bit or 64-bit)\nProgram Files\\AspenTech\\InfoPlus.21\\c21\\h21\\bin\\*.exe | (same as above)\nProgram Files (x86)\\AspenTech\\APC\\Web Server\\*.exe | APC Web server related performance issues\nC:\\Windows\\SysWOW64\\inetsrv\\w3wp.exe | APC Web server related performance issues (these processes host IIS web applications like APC Web Server, Aspen Local Security, and ProcessDataREST)\nC:\\Windows\\System32\\inetsrv\\w3wp.exe | (same as above)\n\\inetpub\\wwwroot\\AspenTech\\ACOView\\CVimages\\*.png | APC Web server related performance issues\n\\inetpub\\wwwroot\\AspenTech\\ACOView\\PIDimages\\*.png | (same as above)\n\\inetpub\\wwwroot\\AspenTech\\Afw\\Security\\pfwauthz.asp | (same as above)\n\\inetpub\\wwwroot\\AspenTech\\Afw\\Security\\pfwauthz.aspx | (same as above)\n\\inetpub\\wwwroot\\AspenTech\\Afw\\Security\\pfwauthz.aspx.vb | (same as above)\nProgram Files (x86)\\Common Files\\SafeNet Sentinel\\*.exe | SLM Components", "keywords": "virus, scan, antivirus, apc, exclude", "reference": null}, {"id": "000101383", "problem_statement": "When trying to load an Inferential Qualities application, APC Manage or PCWS throws an error \u201cIQP: Error opening IQF file\u201d.", "solution": "Here is a screenshot of the error described:\n\n\n\nWhat this error means is that when searching for the .iqf configuration file, the Inferential Qualities service is not able to find it in the expected location. There are a few basic requirements for the IQ service to be able to find the file:\nOn the APC Online server, there must be a folder for the application under C:\\ProgramData\\AspenTech\\APC\\Online\\app.\nThe .iqf file and the folder must have the same name.\nIf the .iqf has a different name from the folder (because there is more than one version, for example), this must be designated in the \u201cAlternate IQF name\u201d box when loading the application.\n\nIf any of these requirements is not met, you will get the previously described error.
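These requirements can be sanity-checked with a short script. The following is a minimal sketch in Python; the application name MYIQAPP is hypothetical, so substitute your own, and the folder path is the one given above:\nimport os\n\napp_name = 'MYIQAPP'  # hypothetical application name; replace with your own\napp_dir = os.path.join(r'C:\\ProgramData\\AspenTech\\APC\\Online\\app', app_name)\n\nif not os.path.isdir(app_dir):\n    print('Missing application folder:', app_dir)\nelif not os.path.isfile(os.path.join(app_dir, app_name + '.iqf')):\n    # A differently named .iqf must be entered as the Alternate IQF name on load.\n    others = [f for f in os.listdir(app_dir) if f.lower().endswith('.iqf')]\n    print('No ' + app_name + '.iqf found; other .iqf files present:', others)\nelse:\n    print('Folder and .iqf name match; the IQ service should find the file.')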
For a more in-depth explanation of the functioning of this tool, please review the APC Manage Help section.", "keywords": "IQ, inferential qualities, load, error, APC Manage", "reference": null}, {"id": "000101386", "problem_statement": "When trying to install the component required to view ActiveX history plots on the APC Web Interface, Windows blocks the required .cab file, stopping users from accessing the ActiveX plots.", "solution": "Before the solution is provided, a few caveats to understand the scope of this solution:\nInternet Explorer is no longer supported (as of V14).\nIt is still possible to select ActiveX under the PCWS preferences as the default history plot tool.\nActiveX only works on Internet Explorer.\nTherefore, this workaround is meant only for situations where Internet Explorer and ActiveX need to be used because of operating environment limitations.\nHere is one possible screenshot of the previously mentioned error, when trying to install the component required to view ActiveX plots:\n\n\n\nTo solve this or other issues where the ActiveX plot component shows an empty container, these are the resolution steps:\nON the WEB SERVER: Download the attached zip file, make sure it is unblocked (right click -> Properties and check the UNBLOCK checkbox if one is shown), then extract the two files on the web server machine. Copy and replace (if applicable) the two files to:\n\nC:\\inetpub\\wwwroot\\AspenTech\\AspenCUI\\\n ON the Browser Workstation: Clear the browser history and cache on Internet Explorer.\nClose Internet Explorer and run it again, now as Administrator.\nSet ActiveX as the History Plot option (Preferences tab on PCWS).\nAdd the current site to the Internet Explorer Trusted Sites list.\nOn PCWS, go to the Operations/Engineering view and open the History Plot for one of the variables.\nIt should prompt a message that says the website is trying to install the \u201cAtWebControls.cab\u201d add-in; select Install when asked (if this does not work, go to step 9).\n \n\n\n After the installation is complete, close the window and reopen the History Plot; the ActiveX plot should pop up now.\n \n (ONLY if add-in installation fails): Open the Manage add-ons window of Internet Explorer and locate the \"AspenTech Container Control\" extension, then choose Enable.\nExit Internet Explorer and restart it running as Administrator.\nGo back to step 6 and retry.\nIf the ActiveX plots do not work after trying these steps, then there may be additional security issues that can no longer be resolved due to Microsoft updates and security restrictions. Unfortunately, AspenTech can no longer provide support for troubleshooting these cases; we suggest using one of the supported web browsers (https://www.aspentech.com/en/platform-support) and choosing the Web.21 HPT or aspenONE Process Explorer History Plot options from the PCWS Preferences tab.", "keywords": "PCWS, history plot, ActiveX, Internet Explorer, add-in, AtWebControls", "reference": null}, {"id": "000100152", "problem_statement": "When opening OPC UA Explorer the following message is shown: No servers detected on the host", "solution": "Go to the following path: C:\\ProgramData\\OPC Foundation\\UA\\Discovery\\pki\\own\nOpen the ualdscert.der file, go to the Details tab and check the Valid to field.
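The expiry can also be read programmatically rather than through the file\u2019s Details tab. The following is a minimal sketch in Python, assuming the third-party cryptography package is installed; the certificate path is the one given above:\nfrom datetime import datetime\nfrom cryptography import x509\n\n# Path of the Local Discovery Server certificate, as given above.\npath = r'C:\\ProgramData\\OPC Foundation\\UA\\Discovery\\pki\\own\\ualdscert.der'\nwith open(path, 'rb') as f:\n    cert = x509.load_der_x509_certificate(f.read())\n\nprint('Valid to:', cert.not_valid_after)\nif cert.not_valid_after < datetime.utcnow():\n    print('Certificate is expired; follow the renewal steps below.')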
If the certificate is expired, proceed with the next steps.\nFor example, the certificate on the following image is expired.\nOpen Services and stop the OPC UA Local Discovery Server service.\nRename the old certificate file as old_ualdscert.der (as shown below).\nRestart the OPC UA Local Discovery Server service. This should automatically create a new certificate file (as shown below).\n\n\n\n\n\nNOTE: the dates on the images are set as an example. Your system should show different dates.\nOpen your Aspen InfoPlus.21 Manager and stop TSK_OPCUA_SVR.\nOpen a new File Explorer window and go to the following path: C:\\ProgramData\\AspenTech\\DiagnosticLogs\\InfoPlus.21\\OpcUaServer\n\n Delete IP21OpcUAServer.log.txt and IP21OpcUAServerHost.log.\nOpen the IP.21 Manager and restart TSK_OPCUA_SVR. This should regenerate the two files from the previous step.\nOpen OPC UA Explorer. It should now be able to discover servers automatically.", "keywords": null, "reference": null}, {"id": "000083956", "problem_statement": "Aspen Plus may have problems converging tear streams when using the True components approach.", "solution": "Numerical methods that use acceleration or dampening (Newton, Broyden, and Wegstein) may accelerate only one ion of a salt, resulting in a charge imbalance. Sometimes the flowsheet may recover from this problem, but most often the unit operation blocks will fail to converge once their feed streams are out of charge balance.\nThere are several possible workarounds:\nUse the apparent component approach. The component approach for the flowsheet is controlled by a checkbox on the Properties/Specifications form.\nChange the choice of TEAR streams to a stream that is all vapor. This is a good idea for any electrolyte system, because ions and salts are not volatile. (It is generally better to choose tear streams with smaller numbers of components.) To force Aspen Plus to use a specific tear stream, select it on the Convergence / Tear form.\nUse the DIRECT method for TEAR stream convergence. This will likely require a large increase in the number of iterations needed to converge. (Start with 200.) To change the convergence method for all tears on a flowsheet, go to the Default Methods sheet of the Convergence / Conv Options form and choose Direct for tears. To globally change the number of iterations allowed, go to the Convergence / Conv Options / Methods form, and on the Direct sheet change the maximum number of flowsheet evaluations to some higher number. To force one specific tear stream to be converged by direct substitution you will need to create a convergence block of type Direct for the tear stream. The number of iterations used for that specific direct substitution convergence block is controlled on the Parameters sheet of the convergence block Input form.\nUse the default WEGSTEIN method for tear stream convergence, but set the lower bound and upper bound (QMIN and QMAX) equal so that all components are updated with the same acceleration value.\nPut a charge balance (CHARGEBAL) block after the TEAR stream. The charge balance block will enforce the charge balance on the stream.", "keywords": null, "reference": null}, {"id": "000079280", "problem_statement": "This knowledge base article explains how to view the full contents of the config.dat file.", "solution": "Config.dat is a file containing repository parameters including file set paths, file set sizes, and other history parameter settings. This file is updated by TSK_H21T when the database is shut down normally.
The historian parameters are reloaded into a shared memory space during database startup.\nOpening the config.dat file with Notepad or WordPad will only reveal a portion of the file because it is a binary file. In order to view the contents of the entire file, go through the following procedure. \nStop the Aspen InfoPlus.21 (IP.21) database and modify the task TSK_DBCLOCK by adding the VERBOSE parameter to the Command Line Parameters box. After restarting IP.21 you can view the entire contents of the config.dat file in the TSK_DBCLOCK.OUT file (located in the group 200 folder), which can also be viewed from the IP.21 Manager.\n NOTE: Procedures for editing the config.dat file and changing the path names can be found in the knowledge base article: Executable that edits the file paths in the historian config.dat file - h21chgpaths.exe", "keywords": "config.dat\ncontents", "reference": null}, {"id": "000101866", "problem_statement": "Some users are looking for guidance on how to calculate the rate of change (ROC) of a variable over a specified time interval. They want to store and update previous measurement values and create a user-defined variable to track the calculated rate of change. This article provides instructions on how to achieve this in Aspen DMC3 Builder.", "solution": "Step 1: Create User Entries for Previous Measurements\n Open the Aspen DMC3 interface and navigate to the appropriate control module or block where the rate of change calculation needs to be implemented.\nCreate user entries for each of the previous measurement values that need to be stored. The number of user entries determines how many minutes of past values can be stored. For example, creating 10 user entries allows storing 10 minutes of past values.\n\n\nStep 2: Calculate the Rate of Change (ROC)\n In the output calculation section of your control module, set each previous measurement equal to the measurement before it, so that older measurements are updated before newer measurements.\nCreate a user-defined variable, let's call it \"ROC,\" to hold the calculated rate of change.\n\n\nStep 3: Implement the Rate of Change Calculation\n In the output calculation section, calculate the rate of change by subtracting the measurement from the desired number of minutes ago from the current measurement. For example, to calculate the rate of change per 1 minute, subtract the measurement from 1 minute ago from the current measurement.\nAssign the calculated rate of change to the user-defined variable \"ROC.\"\n\n\nStep 4: Update the Measurement of the Rate of Change (ROC) Controlled Variable (CV)\n Locate the controlled variable (CV) that corresponds to the rate of change (ROC) you want to monitor.\nSet the measurement of the CV equal to the calculated rate of change stored in the \"ROC\" user-defined variable.", "keywords": "DMC3 builder, Calculation, ROC, Rate of change, User defined", "reference": null}, {"id": "000101378", "problem_statement": "You've set up your InfoPlus.21 server endpoint in Cloud Connect, but when you go to Manage Server Tags, not all of your tags show up in the tag list.", "solution": "If your tags belong to a custom definition or a definition other than IP_AnalogDef, IP_DiscreteDef, or IP_TextDef, you'll need to make a few changes in InfoPlus.21 for Cloud Connect to be able to view and read your tags. The steps below show you how to do this using PMCAnalogDef and PMCDiscreteDef records.\n\n1.
Open InfoPlus.21 Administrator, expand PE_BranchDef, expand IP_TagsBranch, then go to the IP_TagsBranch repeat area and add PMCAnalogDef and PMCDiscreteDef. \n\n\n\n2. Now go to AtMapDef and ensure that MAP_Quality is not empty for these Map Records: PMCAnalogMap, PMCAnalogRawMap and PMCDiscreteMap.\n\n\n\n\n\n\n\n3. The last thing to check is the MAP_DefinitionRecord field. Make sure that in PMCAnalogMap the MAP_DefinitionRecord is PMCAnalogDef, and do the same for PMCDiscreteMap where the value should be PMCDiscreteDef.\n\n\n\n\n\n4. Go to Cloud Connect and edit the InfoPlus.21 endpoint, then click Update Server.\n\n\n\nNow you should have a higher tag count which includes the tags belonging to your custom definition.", "keywords": "Cloud Connect\nIP.21 Tags\nCustom Tags\nMissing Tags", "reference": null}, {"id": "000101380", "problem_statement": "Fidelis models are not generated by Aspen Mtell Alert Manager.\nFidelis uses a SQL database engine (called SQL LocalDB) and it runs as a process owned by the current user (meaning each user runs their own instance of LocalDB). Therefore, it is difficult to share a LocalDB instance between different users. The Aspen Fidelis user must use the same user credentials as Mtell Alert Manager while installing Fidelis.\nIf Mtell Alert Manager and Fidelis were installed with different user credentials, Fidelis models will not be created.", "solution": "To solve this issue, create your own LocalDB instance following the next steps:\nOn the machine with Fidelis and Mtell Alert Manager installed, open a Command Prompt and stop the Fidelis V14 database instance by executing the following command:\nsqllocaldb stop AFRInstance39\nDelete the instance using the following command:\nsqllocaldb delete AFRInstance39\nCreate and start the instance with the following specified version (SQL LocalDB 2019):\nsqllocaldb create AFRInstance39 15.0 -s\nOnce created, run the following command to verify your ownership of the Fidelis database instance; the \u201cOwner\u201d property should display your username.\nsqllocaldb info AFRInstance39\n\n\n\n\nFixed in version\nDefect 925428 Fixed in Version V14.0.3", "keywords": "Models failed\nNo models\nGeneration error", "reference": null}, {"id": "000096651", "problem_statement": "Why are replication tags not created in the Subscriber?", "solution": "The following problems may cause replication tags not to be created in the Subscriber IP.21 database:\nThe connection is blocked by the firewall\nThe IP_Host_Name of the RepPublisherDef tag in the Subscriber IP.21 database does not match the host name of the Publisher\nThe IP_Host_Name of the RepSubscriberDef tag in the Publisher IP.21 database does not match the host name or IP address of the Subscriber\nThe Definition level of the specified tag does not exist in the Subscriber IP.21 database.\nDo the troubleshooting as follows:\nExecute the command netstat -ano|findstr \"1801\" at the Command Prompt on the Publisher.\n\nHere, there should be an established connection whose source IP address is the Publisher, destination IP address is the Subscriber, and port is 1801.\nIf there is no such result, check whether the outbound traffic for remote TCP 1801 is allowed. Create an allow rule if your firewall blocks outbound traffic by default.\nExecute the command netstat -ano|findstr \"1801\" at the Command Prompt on the Subscriber.\n\nHere, there should be an established connection whose source IP address is the Subscriber with port 1801, and destination IP address is the Publisher.\nIf there is no such result, check whether the inbound traffic for local TCP 1801 is allowed.
If the TSK_SUBR.out file on the Subscriber contains the message WARN [MessageReceivedEventHandler] - Received update from an unauthorized host APC. Update will not be processed, it means the IP_Host_Name of the RepPublisherDef tag does not match the Publisher server name. You may have entered the wrong name or used the IP address instead of the host name. Turn Active_SW off, update the IP_Host_Name, and turn Active_SW back on.\nIf the TSK_SUBR.out file on the Subscriber contains the message WARN [QueueMissingTag] - Tag DCOL_DRAWTPV does not exist and will send 'missing tag' message to APC., it means the definition level for the tag DCOL_DRAWTPV is not found in the Subscriber IP.21 database. Please create all dependent definitions for this tag.", "keywords": "InfoPlus.21, Replication, Subscriber", "reference": null}, {"id": "000101375", "problem_statement": "On the PCWS, a user can download an ACO CCF Snapshot for historical values through the Aspen Watch Inspector tool to be used in offline simulation. How can this be accomplished for DMC3 RTE applications?", "solution": "The currently released versions do not have an exact equivalent of saving a CCF Snapshot from Inspector to create a *.DMC3application file. There are some challenges in implementing this feature at this time, but there is an enhancement request in place (APC-I-1420), and it is on the long-term road map.\n\nIn the meantime, below are two workarounds that accomplish a similar result, plus a preventive configuration (Solution #3):\n\nSolution #1 - Save Tuningset from Inspector and use Manage Tuning feature in DMC3 Builder Simulation\n\nNote: this workaround does not include the Measurement values of the variables, only the tuning parameters.\n\n1. On the PCWS > History tab > Controller name > Overview (or from the variable dropdown menu), click on Inspector.\n\n2. Click "CCF Snapshot" at the top to open the dialog window, choose the timestamp you want to get the values from, and then instead of saving the CCF, use the dropdown menu for "Type" to save the Tuningset.\n\n3. Open a DMC3 Builder project and get a current-time snapshot of the application from the Online view.\n\n4. Then in DMC3 Builder's Controller Simulation view, click on "Manage Tuning" in the top ribbon, click Import, and select the downloaded Tuningset to bring it into the list here.\n\n5. Then click on the column header for the Tuning Set that was just imported (such as Tuning Set 1) to highlight the column values and click "Apply" on the right side. This will push all of the tuning to the simulation settings.\n\nSolution #2 - Save CCF Snapshot from Inspector and use Copy Tuning feature in DMC3 Builder\n\n1. Open Inspector on the PCWS, click "CCF Snapshot" at the top to open the dialog window, choose the timestamp you want to get the values from, and keep the "Type" as CCF this time.\n\n2. Open a DMC3 Builder project file and Get Snapshot from the current time.\n\n3. Right-click on the controller snapshot's Master Model node, click Export, change the file type to *.MDL3, and change the file name so it is just the controller name, removing any other extension. For example, a snapshot may save the name as ControllerName[20230131_022734], so change this to just ControllerName. Then save the mdl3 file in the same folder as the CCF downloaded before.\n\n4. Now back in the DMC3 Builder | Controllers view, click Import > Import Application > choose the downloaded CCF.
If there are any errors with the import, you may need to open it in DMCplus Build first and fix any errors upon Save.\n\n5. (Optional) If the CCF does not have everything copied correctly (calculations, for example), right-click on the controller name of the imported CCF and select "Copy Tuning". Make the primary application the CCF (tuning values to be copied FROM), and the secondary application the DMC3 snapshot of the controller taken from online (tuning values to be copied TO). This way you will have the controller model and configuration of the current controller and copy the tuning parameters from the CCF snapshot taken from Aspen Watch history. Continue work on the DMC3 snapshot with the updated tuning.\n\nSolution #3 - Automate DMC3 Snapshots in Configure Online Server\n\nThis solution is more of a preemptive measure to have a historic snapshot ready when needed, by configuring automatic snapshots at a frequent interval. This is not a solution for taking a snapshot manually using Aspen Watch data, just from the history retained on the APC Online Server.\n\n1. On the APC Online Server, open the program called Configure Online Server.\n2. Click on the "History and Snapshots" tab on top.\n3. Here you can configure automatic snapshots to be taken at the specified intervals. Make sure to also reduce the Snapshot Retention so that saved snapshot files do not take up too much disk space.\n4. Once the automatic snapshots have been taken and retained, when clicking Get Snapshot in a DMC3 Builder project > Online section, they will show up in the list under "Select Available", to be directly imported in DMC3 Builder.", "keywords": "inspector, aspen, watch, data, history, snapshot, dmc3, rte, ccf, save, download, simulation", "reference": null}, {"id": "000101690", "problem_statement": "This article describes the meaning of the "DMCplus Use Only" message that appears in DMCplus Build when External Target Ranks are set up.", "solution": "This message appears only when External Targets are configured on a CCF application (DMCplus Build). By going to the rank list of the CCF file, you will see a message for all MV and CV External Target ranks indicating that the rank is "DMCplus use only". The message appears whether you use an .mdl file or an .mdl3 file.\n\nThis message dates from when applications were set up for use with SmartStep, which was a separate product from DMCplus. It basically refers to the fact that the External Ranks will be used only when the application is in CONTROL mode.\n\nIn any case, the message can be ignored, as it won't affect the normal operation of the controller.", "keywords": "DMCplus, CCF, External Targets", "reference": null}, {"id": "000049729", "problem_statement": "When implementing an Aspen Manufacturing Execution System (MES) solution, customers may install a firewall between the various Aspen applications.\nFirewall systems help prevent unauthorized access to computer resources. However, if a firewall is turned on but not correctly configured, it may block some ports, which may cause applications that rely on those ports for communication to stop working.\nThe purpose of this Knowledge Base article is to document all of the ports that need to be open when the web and/or Aspen InfoPlus.21 (IP.21) server is behind the firewall.
A firewall may also sit between the Aspen InfoPlus.21 server and the web server, which presents additional challenges that need to be addressed.", "solution": "To positively determine whether the issue at hand is caused by a firewall, it is suggested to first try logging in with an administrative account that can be properly authenticated by the firewall. Such an account should have the ability to effectively bypass all of the firewall restrictions.\nIf a user logged in with such an administrative account has no issues accessing various resources behind the firewall, then this is proof positive that the firewall is blocking some essential ports that must remain open for applications on opposite sides of the firewall to work together.\nThe next step should be for a user to log in to the IP.21 server with an IP.21 Admin account and run the RPC PortMapper application to identify the ports which need to be opened in the firewall. Note: the RPC PortMapper application RPCINFO.EXE is located in the C:\Program Files (x86)\Common Files\AspenTech Shared\Portmapper directory.\nBelow is a list of Aspen MES applications and their port usage.\n Aspen InfoPlus.21\n· IP.21 RPC DA Servers (TSK_ADMIN_SERVER, TSK_APEX_SERVER, TSK_EXCEL_SERVER, TSK_ORIG_SERVER, TSK_DEFAULT_SERVER, TSK_BATCH21_SERVER). These use six ports - typically 11111 - 11116 - but the ports are dynamic and change every time IP.21 is restarted. Different (static) port numbers may be configured. See KB article # 104056 for more info. NOTE: In addition to the ports for each RPC server task, port 111 also must be opened in the firewall. Port 111 is used by the NobleNet Portmapper for the initial API call.\n· TSK_BGCSNET. Used by GCS and Aspen IP.21 Process Browser clients when making connections to an InfoPlus.21 database. On the InfoPlus.21 server, TSK_BGCSNET listens on TCP/IP socket 10013 for any Aspen IP.21 Process Browser client trying to connect.\n· TSK_SQL_SERVER. Uses port # 10014.\n· TSK_ACCESS_SVC. Uses port # 20014.\n· IP.21 Tag Replication. Aspen InfoPlus.21 Tag Replication uses Microsoft's Message Queueing System (MSMQ) as the transport in Windows Communication Framework (WCF). MSMQ requires the following communication ports to be open: 1801 (both TCP and UDP), 2101 (TCP), 2107 (TCP), 3527 (UDP), 2105 (TCP), 2103 (TCP), 135 (TCP). For more info see Aspen KB article # 130208 or the following MS KB article: https://support.microsoft.com/en-us/kb/178517 .\n· Cim-IO client tasks. Beginning with V8.4 it is no longer required to define Aspen Cim-IO services and ports on the Cim-IO Client system. When a Cim-IO client task starts, it connects to the Cim-IO Manager service running on the Cim-IO Interface Server. The Cim-IO Manager returns the port numbers assigned to the Cim-IO services on the Cim-IO Server to the Cim-IO client task. The port used by the Cim-IO client task to communicate with the Cim-IO Manager service is defined by the service CIMIOManager in the Services file. The default port number is 7777. This port number must be the same on both the Cim-IO Server and Client machines. See KB article # 140636 for more info.\n· IP21 OPCDA Server. Based on DCOM, so port 135 is used. A fixed port can be specified by editing the registry. See article: https://support.microsoft.com/en-us/kb/217351 . Note: the App ID for IP21 OPCDA (IP21DA_Server.exe) is {32EE345D-A261-4B84-845F-44E61CBCE3FE}.\n· IP21 OPCUA servers determine their ports from a configuration file.
See the following configuration file locations and the port-defining endpoint URLs from their XML:\n· IP21 OPCUA Server - configuration file: C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\tsk_opcua_server.opcua.config.xml - endpoints: opc.tcp://localhost:63500/InfoPlus21/OpcUa/Server and http://localhost:63501/InfoPlus21/OpcUa/Server\n· Aspen Process Simulator Service (UA server) - configuration file: C:\Program Files (x86)\AspenTech\CIM-IO\io\cio_opc_uai\AspenProcessSimulator.Config.xml - endpoints: http://localhost:62551/Aspen/ProcessSimulator and opc.tcp://localhost:62552/Aspen/ProcessSimulator\n· OPC UA Discovery Service - configuration file: C:\ProgramData\OPC Foundation\Config\Opc.Ua.DiscoveryServer.Config.xml - endpoints: opc.tcp://localhost:4840/UADiscovery and http://localhost:52601/UADiscovery\n Aspen InfoPlus.21 Administrator and InfoPlus.21 Definition Editor\nAlthough specific port numbers can be assigned to the various Aspen InfoPlus.21 API server tasks to effect client communication through a firewall, the Aspen InfoPlus.21 Task Service is still dynamically allocated a new port number through the NobleNet Portmapper every time this service is restarted. Since the Aspen InfoPlus.21 Administrator uses the Aspen InfoPlus.21 Task Service to communicate with the Aspen InfoPlus.21 database, the port for the Aspen InfoPlus.21 Task Service needs to be opened through the firewall as well. For additional information on connecting the InfoPlus.21 Administrator tool across a firewall, please see solution id 000078782.\n Aspen Process Explorer\nCan use any of the RPC DA Server ports - typically 11111 - 11116. Different port numbers may be configured. See KB articles # 104056 and 115049 for more info. NOTE: In addition to the ports assigned to each RPC server task, port 111 also must be opened in the firewall. Port 111 is used by the NobleNet Portmapper for the initial API call.\n aspenONE Process Explorer\n· HTTP port 80.\n· Port 111 for the PortMapper.\n· Six other ports for the TSK_ External Services as identified in the IP.21 Manager section above must remain open. These six ports are dynamic and change every time IP.21 is restarted, or the individual Services are restarted. These six ports can be configured to be static so they don't change when the IP.21 Services are restarted.\n Aspen IP.21 Process Browser\n· HTTP port 80.\n· Port 111 for the PortMapper.\n· TCP 10013 and 10014.\n· Six other ports for the TSK_ External Services as identified in the IP.21 Manager section above must remain open.\n· DCOM port 135 and a range of ports 3000 - 4000 must remain open. See KB article # 104040 for more info.\n Aspen SQLplus\nUses port 10014. It is configurable using ADSA.\n AFW Security Server\nThe AFW Security Server is a web service, so it uses port 80 from IIS by default.\n Client applications using DCOM\nClient applications using Windows DCOM require port 135 and a range of ports 3000 - 4000 to be open.\n Aspen Calc\nUses port 135 and a range of ports 3000 - 4000. See KB article # 110537 for more info.\n Aspen Production Record Manager\n· Client applications use TCP/UDP port 135 and a range of ports 3000 - 4000. See KB articles # 115120, 118957 and 104056 for more info.\n· The Aspen Production Record Manager Business Process Document Service uses port 7500. See KB article # 121971 for more info.\n· APRM ODBC. Uses port 52011. It is configurable using ADSA.\n Aspen Production Execution Manager\nApache port 8080 and port 1433.
See KB article # 128922 for more info.\nThe MOC client communicates with the APEM server using port 8888 by default, but if 8888 is in use by another application, MOC increments the port number by 1 (8889 next, and so on) until it finds a usable port. The APEM server uses a random local port to connect to the remote port 8888.\n Aspen Process Data Service\nUses port 52007. It is configurable using ADSA.\n Aspen Operations Reconciliation and Accounting\n· Uses database-specific ODBC ports to connect to the database server.\n· ORACLE as a relational database: SQL*Net 2: port 1521 (this is the default listener port; a listener port will always be used). The following ports may also be used (check with your site's DBA): LDAP: port 3060, LDAP SSL: port 3131, Oracle Notification Server: port 6200, Web Cache Invalidation: port 4001. See KB article # 116910 for more info.\n· SQL Server as a relational database. See the following Microsoft KB article for more info: //support.microsoft.com/kb/287932/\n· Aspen Advisor Connect. The PI database communication port number is 5450. The PHD database communication port number is 3000.\n Aspen Software License Manager (SLM)\n· TCP/UDP ports 5093 and 5094. You also have to enable pinging to the license server. See KB article # 135484 for more info.\n· The Auto Upload Tool requires that the following ports are open: HTTPS port 443, Secure FTP (SFTP) port 22, SMTP Email port 25. See KB article # 000082940 for more info.
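When verifying a deployment against the port list above, a simple reachability test from the client machine can save time. The following Python sketch is illustrative only: "ip21-server" and the selected ports are placeholders for your own environment, and a plain TCP probe cannot verify UDP-based services (such as the SLM UDP ports):

import socket

# Illustrative TCP reachability check for a few of the ports listed above.
# Host name and port selection are placeholders - adjust for your site.
PORTS_TO_CHECK = {
    111: "NobleNet Portmapper",
    10013: "TSK_BGCSNET",
    10014: "TSK_SQL_SERVER / SQLplus",
    7777: "Cim-IO Manager",
    5093: "SLM license server (also uses UDP, not tested here)",
}

def check_ports(host, ports=PORTS_TO_CHECK, timeout=3.0):
    for port, name in sorted(ports.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print("{:>6} ({}): open".format(port, name))
        except OSError:
            print("{:>6} ({}): blocked or not listening".format(port, name))

check_ports("ip21-server")   # placeholder host name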
", "keywords": null, "reference": null}, {"id": "000094168", "problem_statement": "How to Install a Standalone License Manually", "solution": "NOTE: Please attempt to use the License File Installer before trying to install the license file manually. The License Installer will install the license seamlessly and will let you know if there are any issues with your file.", "keywords": "slm\ndongle-free\ndongleless\ndongle\nstandalone\ndongle free\ndongle less\nconfiguration wizard", "reference": "See the Software License Manager (SLM) License File Installer KB Article for instructions.\n\nTo manually install and configure your standalone license file for aspenONE V9 and higher:\nRename/delete any old and/or expired license files in the Aspentech Shared folder\n(64bit OS) C:\Program Files (x86)\Common Files\Aspentech Shared and C:\Program Files\Common Files\Aspentech Shared\n(32bit OS) C:\Program Files\Common Files\Aspentech Shared\nRename/delete any old and/or expired *.slf or *.lic files in these folders to *.slf_old or *.lic_old\nCopy the license file you wish to install to:\n(64bit OS) C:\Program Files (x86)\Common Files\Aspentech Shared and C:\Program Files\Common Files\Aspentech Shared\n(32bit OS) C:\Program Files\Common Files\Aspentech Shared\nThe license file must have a .SLF extension and its name must begin with "lservrc_", for example, lservrc_004_3BC4F.SLF.\n\nConfiguring your standalone license file:\nRun the SLM Configuration Wizard. Go to: Start | All Programs | AspenTech | Aspen SLM | aspenONE SLM License Manager, then click the Configure button to launch the SLM Configuration Wizard.\n\nNOTE: For some operating systems you may have to execute the aspenONE SLM License Manager using the 'Run as Administrator' option. The application writes entries into the Windows Registry, and Administrator privileges are required for this.\nOnce the SLM Configuration Wizard appears, expand the Advanced Settings tab and verify that Ignore local keys is unchecked.\n\nClick the Apply Changes button.\nOnce the Show Details button appears, the process is complete and the standalone license is installed.\n\nRelated Articles\nVideo: How to install a standalone license"}, {"id": "000072608", "problem_statement": "How to install a network license file manually", "solution": "NOTE: Please attempt to use the License File Installer before trying to install the license file manually. The License Installer will install the license seamlessly and will let you know if there are any issues with your file.", "keywords": "installation, network, license", "reference": "See the Software License Manager (SLM) License File Installer KB Article for instructions.\n\nTo install the license file for the Network license server manually:\n\n1. Copy the license file to the SLM license server directory. The default SLM license server directory is:\n\Program Files (x86)\Common Files\SafeNet Sentinel\Sentinel RMS License Manager\WinNT\n\n2. Rename or delete any pre-existing license file(s).\n\n3. Rename the new license file to LSERVRC (no file extension).\n\n4. Locate the loadls.exe executable file in the license server directory:\n\Program Files (x86)\Common Files\SafeNet Sentinel\Sentinel RMS License Manager\WinNT\n\n5. Run loadls.exe (Run as Administrator) and select Remove to stop the license server.\n\n6. Run loadls.exe (Run as Administrator) again and select Add to start the license server.\n\n7. You can verify that the new license is active by using the license profiler.\nPlease review the following article: What is the SLM License Profiler and How to Use It? for information on how to use the SLM License Profiler.\nNote: Ensure that you reload the license server whenever you change your license file.\n\nRelated Articles\nVideo: How to install a network license"}, {"id": "000100597", "problem_statement": "This Knowledge Base article provides possible steps to resolve a QP internal error when simulating a DMC3 application, which causes the controller to switch off. This error message is generic and indicates that the control engine failed to execute a solution to the control problem.", "solution": "One common cause of this issue is near-collinearity or high-RGA problems. Check for any collinearity issues in the controller and try to reduce the max RGA below 10.\n\nIf that does not work, check the setting called Target Algorithm Type (EPSMVPMX), which configures the steady-state solver. The options are as follows:\n1 (obsolete) now functions the same as option 2\n2 Legacy interior point QP algorithm\n3 Legacy interior point QP algorithm, generating a debug file at each cycle - don't use for long as this will fill up your disk\n4 Active set method - very robust but can be slow for Composite-size problems. Shadow prices are calculated for both LP and QP subproblems.\n5 New interior point method for both LP and QP subproblems. No shadow prices are calculated.\n6 New interior point method - switch to active set for the objective function rank. The motivation for type 6 is that this enables shadow prices.\n7 Not used\n8 Use interior point for the objective function rank if the number of variables is greater than 300. Otherwise use the active set method.
This is recommended for Composite controllers only.\n14-18 Same as options 4-8, with the automatic creation of a binary file at C:\ProgramData\AspenTech\APC\Online\sys\etc\lpqp_no_error_file_1.bin. The binary file is for AspenTech use only.\n\nFor traditional DMCplus controllers, option 2 (legacy interior point method) was the default. However, this is not always robust for larger and more complex DMC3 controllers. Therefore, it should be changed to option 4 (active set method), which will be robust enough for most applications.\n\nIf the QP error continues, try option 5 or 6.\n\nFor very large controllers (300+ variables) or applications participating in a composite application, use option 8.\n\nFinding EPSMVPMX in DMC3 Builder:\nGo to Simulation, then on the top ribbon open Application Details; in the Optimizer section, locate the parameter called Target Algorithm Type (this is the same as EPSMVPMX), where you can change the option.\n\nFinding EPSMVPMX in DMCplus Build:\nIf you are using a CCF file, open DMCplus Build, go to Configure, and locate the entry named EPSMVPMX; it should be in the middle of the list.\n\nNote: the parameter can only be changed from the desktop application and cannot be accessed from the PCWS, so for the change to take effect you have to redeploy the application.\n\nFurther Troubleshooting:\n\nIf using APC V12.1, make sure the latest patches are applied. Starting with patch V12.1 CP2, there were defect fixes for QP errors, so applying the latest patches may help mitigate this error message.\n\nWhen there is an error in the optimization, an lpqp binary file (*.BIN) will be saved to the following locations:\n\nFor ACO: C:\ProgramData\AspenTech\APC\Online\sys\etc\lpqp_no_error_file_1.bin\nFor RTE: C:\ProgramData\AspenTech\RTE\Vxx\Clouds\n\nThese files allow AspenTech internal resources to check the controller's engine calculation. The disk will not fill up, as only the last 25 files are saved. The binary file is also saved if the optimization takes more than 20 seconds. If further troubleshooting is required because the solution above did not work, please send these files through a Support ticket.", "keywords": "EPSMVPMX, QP internal error, steady-state solver, DMC3 Builder, DMCplus", "reference": null}, {"id": "000101370", "problem_statement": "You cannot see your folder structure under your IP.21 server node in Tag Browser. You have already tried these solutions: Why does the Aspen Tag Browser not show any folders while the description field is simultaneously grayed out? https://esupport.aspentech.com/S_Article?id=000068816 and Aspen Tag Browser will not display folders https://esupport.aspentech.com/S_Article?id=000096201 , but you still cannot see the folders displayed under the database server node.", "solution": "Open the Application Log by going to View > Application Log; you may find a message like this: LoadFolders: ProcedureDef (TagBrowserFolderSearch) does not exist. Use original query.\n\nFirst, confirm that the ProcedureDef record TagBrowserFolderSearch exists in your IP.21 database. If it does not exist, you can create it by running the query provided in the attached file, CreateProc_TagBrowserFolderSearch.txt.
Then try launching Tag Browser again.\n\nIf the TagBrowserFolderSearch record does exist, or you create the record and still see this error in the Application Log in Tag Browser, enable logging for Tag Browser using the Process Data Administrator.\n\nTo enable logging for Tag Browser, go to the directory \Program Files (x86)\AspenTech\ProcessData and launch the ProcessDataAdministrator executable as an Administrator.\n\nGo to the Logging tab and select TagBrowser from the dropdown.\n\nCheck the box for Calls and then click Apply.\n\nRe-open Tag Browser (you do not need to execute a search) and then go to the directory \ProgramData\AspenTech\DiagnosticLogs\ProcessData to find the log file: TagBrowser.TagBrowser.x86..\n\nGo back to the Process Data Administrator and make sure to select the "Turn OFF ALL Log Flags" option.\n\nOpen the log file and you may see error messages like:\nCalls ADODB::Recordset::Open SQL( TagBrowserFolderSearch(1) )\nCalls ADODB::Recordset::Open( E_FAIL (0x80004005) )\n\nCalls ADODB::Recordset::Open SQL( (select name parent width 256, record_name child width 256, 'view' type from folderdef where record_name->definition = 'textviewdef' or record_name->definition = 'viewdef') Union (select folder_parent parent, name child width 256, 'folder' type from folderdef where name <> 'Distribution Folder' and folder_parent <> 'Distribution Folder' and folder_parent not like '%.RLD folder' ) order by parent, child )\nCalls ADODB::Recordset::Open( E_FAIL (0x80004005) )\n\nOpen SQLplus, copy the Select query from the Open SQL call into a new query, and try to execute it. You may get an error like this: Invalid RECORD value: 'Distribution Folder'\n\nIf you saw those error messages in the Tag Browser diagnostic log and you saw that message in SQLplus, the issue with Tag Browser not displaying your folders is due to the folder "Distribution Folder" not existing in the database. The Distribution Folder contains folders with fields and records from recloads. The folder is created when IP.21 is first set up. The folder and its contents are not needed for IP.21 to function properly, except for Tag Browser.\n\nTag Browser deliberately excludes the Distribution Folder from the results/tree view based on that query in the ProcedureDef record. If you removed or renamed the folder, create a new folder in IP.21 and give it the name Distribution Folder. Then reopen Tag Browser, and you should be able to see your folder structure under your database server node.
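If you prefer to check for the folder programmatically rather than in SQLplus, the illustrative Python sketch below runs the same check through ODBC. It assumes the pyodbc package and the SQLplus ODBC driver are installed; the driver name and "ip21-server" are assumptions to adjust for your system:

import pyodbc

# Illustrative only: query folderdef through the SQLplus ODBC driver.
# Driver name, host, and port are assumptions - verify them on your system.
conn = pyodbc.connect("DRIVER={AspenTech SQLplus};HOST=ip21-server;PORT=10014")
cur = conn.cursor()
cur.execute("select name from folderdef where name = 'Distribution Folder'")
if cur.fetchone():
    print("Distribution Folder exists")
else:
    print("Distribution Folder is missing - recreate it as described above")
conn.close()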
", "keywords": "Tag Browser\nFolders\nDistribution Folder\nTagBrowserFolderSearch", "reference": null}, {"id": "000101672", "problem_statement": "What should I do if I upgrade Microsoft Office from 32-bit to 64-bit and Aspen Report Writer stops working?", "solution": "It is always recommended to install or upgrade Microsoft Office before installing AspenTech products. If you install or upgrade Microsoft Office from 32-bit to 64-bit on a machine where Aspen Report Writer is already installed, the connection to its Excel Library Path (i.e., the Add-In file) is lost and Aspen Report Writer stops working. In this situation, we recommend the following steps:\n"Repair" Aspen Report Writer by using the AspenTech installation media.\nOpen Excel and check that the ARW add-in can be added.\nCheck that the AspenRpt8 -> Function Wizard works properly.", "keywords": "Microsoft Office, 32-bit, 64-bit, Add-In, Aspen Report Writer", "reference": null}, {"id": "000101350", "problem_statement": "I want to make a mixture of citric acid and water. Citric acid doesn't have any reactions listed. The web says that a 1 molal mixture should have a pH of about 3.2; how do I get this to work?", "solution": "The electrolyte wizard does not have the citric acid anions; however, it is possible to create them and add the equilibrium reactions.\nThe method is similar to this knowledge document: An example on how to enter a non-databank ion and a non-databank volatile electrolyte.\nThe attached file can be opened in V12.1 and higher.\nCitric acid is a weak acid that deprotonates in 3 steps.\n\nCredit: CrystEngComm, 2014, 16, 3387-3394\n\nThese three anions can be added as components on the Components | Specifications | Selection sheet.\nThe formula is entered on the Components | Molecular Structure | Formula sheet.\nComponent ID | Description | Formula | Charge\nCITRICAC | Citric Acid | C6H8O7 | 0\nCIT-1 | Citric Acid Anion with Charge -1 | C6H7O7 | -1\nCIT-2 | Citric Acid Anion with Charge -2 | C6H6O7 | -2\nCIT-3 | Citric Acid Anion with Charge -3 | C6H5O7 | -3\n\nThe equilibrium reactions are added to the Chemistry, alongside the water dissociation reaction set up with the electrolyte wizard:\n2 Equilibrium CITRICAC + H2O <--> CIT-1 + H3O+\n3 Equilibrium CIT-1 + H2O <--> CIT-2 + H3O+\n4 Equilibrium CIT-2 + H2O <--> CIT-3 + H3O+\n\nThe parameters needed for the ions are CHARGE, IONTYP (3 = oxyanion), DGAQFM (Gibbs free energy of formation), and DHAQFM (enthalpy of formation). The Gibbs free energy of formation can be adjusted to fit the equilibrium constants, i.e. the pH.\nAs an estimate, the Gibbs free energy of formation and enthalpy of formation of citric acid were used for the ions.\nParameter | Units | CIT-3 | CIT-2 | CIT-1\nCHARGE | - | -3 | -2 | -1\nIONTYP | - | 3 | 3 | 3\nDGAQFM | Btu/lbmol | -486242 | -486242 | -486242\nDHAQFM | Btu/lbmol | -578676 | -578676 | -578676\n\nIons are non-volatile, so the first element of the Antoine vapor pressure parameter (PLXANT) was set to -1e20. See this knowledge document for more information about creating a non-volatile component: What parameters are needed to create a non-volatile component?\n\nThe pH of a 1 mM solution of citric acid will be about 3.2. To match this value, the DGAQFM can be adjusted. With a value of -490000 Btu/lbmol, the pH is 3.25.
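As a rough sanity check on the ~3.2 pH figure, the speciation can also be computed outside Aspen Plus from literature pKa values for citric acid (assumed here as 3.13, 4.76, and 6.40) and a simple charge balance. The Python sketch below is an ideal-solution estimate only: it ignores activity coefficients, unlike the electrolyte chemistry set up above:

import math

# Ideal-solution pH estimate for 1 mM citric acid via charge balance.
# The pKa values are literature estimates (an assumption of this sketch).
Ka = [10**-3.13, 10**-4.76, 10**-6.40]
Kw = 1e-14
C = 1e-3                      # total citrate, mol/L

def charge_imbalance(h):
    # distribution fractions of CIT-1, CIT-2, CIT-3 at [H3O+] = h
    d = h**3 + Ka[0]*h**2 + Ka[0]*Ka[1]*h + Ka[0]*Ka[1]*Ka[2]
    a1 = Ka[0]*h**2 / d
    a2 = Ka[0]*Ka[1]*h / d
    a3 = Ka[0]*Ka[1]*Ka[2] / d
    # positive charge minus negative charge; zero at electroneutrality
    return h - (C*(a1 + 2*a2 + 3*a3) + Kw/h)

lo, hi = 1e-10, 1e-1          # bracket for [H3O+], then bisect
for _ in range(100):
    mid = (lo + hi) / 2
    if charge_imbalance(mid) > 0:
        hi = mid
    else:
        lo = mid

print("pH = {:.2f}".format(-math.log10(mid)))   # about 3.2-3.3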
", "keywords": null, "reference": null}, {"id": "000062184", "problem_statement": "GDOT can present different solution Status values.", "solution": "Please find a quick description in the table below:\n\nHowever, it is not mentioned what a Solution Status of 4 could mean. This solution describes what Solution Status 4 stands for.\n\nSolution\nA SolutionStatus of 4 in GDOT usually means "Skipped", i.e. the execution cycle was skipped. Typically this is because it is waiting for some I/O which wasn't updated yet, or, in the case of GDOT DR, because Model Update is running and hasn't provided new gains yet. Either way, it can sometimes be resolved by changing the different GDOT execution offsets and making sure that the machine has adequate resources for running the different GDOT applications.", "keywords": "GDOT,", "reference": null}, {"id": "000101365", "problem_statement": "When running a simulation with Aspen Properties for physical property calculations, a large history file (*.his) is created in the working folder. How can this be avoided?", "solution": "Change the diagnostics message level for the history file to 0 in the Aspen Properties/Aspen Plus file.", "keywords": null, "reference": null}, {"id": "000101363", "problem_statement": "In my simulation, I have some algebraic variables for which I'm expecting a step change after changing a fixed variable that is used in their calculations. However, I observe a delay/slope in the change. Why? Can this be resolved?", "solution": "Let's review this using two simple examples.\n\nExample 1: model run time condition\nModel example\n x as realvariable;\n if time < 1 then\n x = 1;\n else\n x = 2;\n endif\n y as realvariable (initial, 0);\n $y = x;\nEnd\nTo exacerbate the problem, we have set the communication interval (in Run, Run Options) to 0.1 hr.\n\nIt looks as if x is changed using a linear ramp, and the value of 2 would be reached only at 1.1 hr. However, you can see that the slope of y changes at 1 hr. Also, the expected value of y at time 2 is 3 (because during 1 hr, x = 1, then during another 1 hr, x = 2, so 1 + 2 = 3). That's exactly the value of y at time 2. If you activate "locate model discontinuities" in the integrator settings (Run menu, Solver Options..., Integrator tab), the value of y is exactly 3 (because the integrator will change the value of x exactly at time 1).\n\nWith the option off, x is changed a little too late (at the end of the integration step), and the step is actually much shorter because the integrator detects something fishy (x jumping from 1 to 2) and cuts the integration step to respect the integration error tolerance.
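For intuition, the same effect can be reproduced with a generic ODE solver. The Python/SciPy sketch below is an illustration only (it is not the Aspen Custom Modeler integrator): the right-hand side steps from 1 to 2 at t = 1, the solver cuts its steps around the jump to control the error, and the integral at t = 2 still comes out as 3:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative analogue of Example 1: dy/dt = x(t), with x stepping at t = 1.
def rhs(t, y):
    return [1.0 if t < 1.0 else 2.0]

# Coarse reporting points mimic a 0.1 hr communication interval: the
# internal steps are cut near t = 1, but the reported x would still look
# like a ramp between output points.
sol = solve_ivp(rhs, (0.0, 2.0), [0.0], t_eval=np.linspace(0.0, 2.0, 21),
                rtol=1e-8, atol=1e-10)
print("y(2) = {:.4f}".format(sol.y[0][-1]))   # ~3.0000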
Simulation Messages:\nStep 12: Time = 1.2875, step = 0.43249 rejected, error/tol = 13989\nStep 13: Time = 0.9631, step = 0.10812 accepted, error/tol = 0\nIntegrating from 0.9 to 1\nStep 14: Time = 1.1253, step = 0.16218 rejected, error/tol = 4956.9\nStep 15: Time = 1.0036, step = 0.040546 rejected, error/tol = 563.29\nStep 16: Time = 0.97323, step = 0.010136 accepted, error/tol = 0\nStep 17: Time = 0.98844, step = 0.015205 accepted, error/tol = 0\nStep 18: Time = 1.0112, step = 0.022807 rejected, error/tol = 688.19\nStep 19: Time = 0.99414, step = 0.0057017 accepted, error/tol = 0\nStep 20: Time = 1.0027, step = 0.0085526 rejected, error/tol = 257.33\nStep 21: Time = 0.99628, step = 0.0021382 accepted, error/tol = 0\nStep 22: Time = 0.99949, step = 0.0032072 accepted, error/tol = 0\nStep 23: Time = 1.0043, step = 0.0048108 rejected, error/tol = 144.36\nStep 24: Time = 1.0007, step = 0.0012027 rejected, error/tol = 16.405\nStep 25: Time = 1.0005, step = 0.001 accepted (at min), error/tol = 11.887\nIntegrating from 1 to 1.1\nStep 26: Time = 1.0015, step = 0.001 accepted (at min), error/tol = 0\nStep 27: Time = 1.003, step = 0.0015 accepted, error/tol = 6.6497e-12\nStep 28: Time = 1.0052, step = 0.00225 accepted, error/tol = 0\n\nWith the option "locate model discontinuities" on, it will step exactly at 1 hr:\nStep 11: Time = 0.85498, step = 0.28833 accepted, error/tol = 0\nIntegrating from 0.6 to 0.7\nIntegrating from 0.7 to 0.8\nIntegrating from 0.8 to 0.9\nStep 12: Time = 1.2875, step = 0.43249 accepted, error/tol = 0\nIntegrating from 0.9 to 1\nModel discontinuity detected between 0.9 and 1\nModel discontinuity located between 0.999994 and 1, after 14 bisections\nThe following event(s) have changed state:\nB1.AM_If1\nIntegrating from 1 to 1.1\nRestarting integrator at 1\nStep 13: Time = 1.005, step = 0.005 rejected, error/tol = 125\nStep 14: Time = 1.0012, step = 0.00125 rejected, error/tol = 12.5\nStep 15: Time = 1.001, step = 0.001 accepted (at min), error/tol = 8.3333\n\nIf you also enable "reinitialize after discontinuities", the integration is actually cleaner:\nIntegrating from 0.8 to 0.9\nStep 12: Time = 1.2875, step = 0.43249 accepted, error/tol = 0\nIntegrating from 0.9 to 1\nModel discontinuity detected between 0.9 and 1\nModel discontinuity located between 0.999994 and 1, after 14 bisections\nThe following event(s) have changed state:\nB1.AM_If1\nRe-Initializing at time 1\nDecomposition:\n Total number of equations = 2\n Total number of groups = 2\n Number of explicit groups = 2\n Largest group size = 1\nEquation B1.AM_If1 moved from THEN branch to ELSE branch\nIntegrating from 1 to 1.1\nStep 13: Time = 1.005, step = 0.005 accepted, error/tol = 0\nStep 14: Time = 1.0125, step = 0.0075 accepted, error/tol = 0\nStep 15: Time = 1.0237, step = 0.01125 accepted, error/tol = 0\nStep 16: Time = 1.0406, step = 0.016875 accepted, error/tol = 0\n\nExample 2\nThis second example uses a task, which is explicit in time.\n\nModel\nModel test\n x as realvariable (fixed);\n y as realvariable (initial, 0);\n x2 as realvariable;\n\n x2 = x;\n $y = x2;\nend\nTask\nTask change_x runs at 1\n B1.x : 2;\nEnd\n\nAt time 1 hr, the task change_x changes B1.x to 2.\n\nLet's run it with all 3 options enabled.\n\nYou can see on the xy plot that x and x2 are identical.\n\nIf you uncheck the option "re-initialize after variable step-change", the change in x2 appears a bit later on the plot (the communication interval is 0.1 hr, set large to exacerbate the effect), although in the calculations it is changed as a step almost exactly at time 1 hr (you can compute the integral to check that: y = 3 at time 2).\n\nA bit of theory\n\nThere are 2 kinds of time discontinuities: explicit in time such as in task runs at